Evaluating clinical and translational research (CTR) mentored training programs is challenging because no two programs are alike. Clara Pelfrey, PhD, and Kelli Qua, PhD, took on the task of comparing bibliometrics between individuals and between programs and have published their results in the Journal of Clinical & Translational Science.
The KL2 program provides mentored training for early-stage CTR investigators, and Clinical and Translational Science Award (CTSA) hubs across the country each run their own KL2 programs. Evaluation of KL2 programs has begun to incorporate bibliometrics to measure KL2 scholar and program impact. This study investigated demographic differences in the bibliometric performance and post-K-award funding of KL2 scholars, and compared the bibliometric performance and post-K-award federal funding of KL2 scholars with that of other mentored-K awardees at the same institution. Data for the study included SciVal and iCite bibliometrics and NIH RePORTER grant information for mentored-K awardees (K08, K23, and KL2) at Case Western Reserve University (CWRU) between 2005 and 2013.
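As an illustration only, and not code from the study, the sketch below shows how citation metrics for a set of PMIDs might be pulled from the public iCite API; the endpoint and JSON field names are assumptions about that API and should be verified against its current documentation, and the PMIDs are placeholders. SciVal metrics and NIH RePORTER grant records would be merged in separately.

```python
"""Illustrative sketch only: pull citation metrics for a list of PMIDs from
the public iCite API. The endpoint and field names are assumptions and should
be checked against the live API documentation."""
import requests

ICITE_URL = "https://icite.od.nih.gov/api/pubs"  # assumed iCite endpoint


def fetch_icite_metrics(pmids):
    """Return per-publication citation count and Relative Citation Ratio (RCR)."""
    resp = requests.get(ICITE_URL, params={"pmids": ",".join(map(str, pmids))})
    resp.raise_for_status()
    return [
        {
            "pmid": rec.get("pmid"),
            "year": rec.get("year"),
            "citations": rec.get("citation_count"),
            "rcr": rec.get("relative_citation_ratio"),
        }
        for rec in resp.json().get("data", [])
    ]


if __name__ == "__main__":
    # Placeholder PMIDs; in practice these would be each scholar's publications.
    print(fetch_icite_metrics([27599104, 28968381]))
```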
Results showed no demographic differences among KL2 scholars. Bibliometric differences between KL2 scholars and other mentored-K awardees indicated an initial KL2 advantage in the number of publications at five years post-matriculation (i.e., five years after the start of the K award). Regression analyses indicated that the number of initial publications was a significant predictor of federal grant funding at the same time point. Analyses beyond the five-year post-matriculation point did not show a sustained, significant KL2 advantage.
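The exact model specification is not given here; as a hypothetical sketch, a regression of this kind could look like the following, where the toy data, variable names, and logistic form are all assumptions made for illustration rather than the published analysis.

```python
"""Hypothetical sketch: does publication count at five years post-matriculation
predict subsequent federal funding? The data, variable names, and logistic
specification are invented for illustration and are not the published model."""
import pandas as pd
import statsmodels.formula.api as smf

# One row per mentored-K awardee (toy data).
df = pd.DataFrame({
    "pubs_5yr": [3, 7, 12, 5, 9, 2, 15, 6, 8, 4],       # publications by year 5
    "kl2": [1, 1, 1, 0, 0, 1, 0, 0, 1, 0],               # 1 = KL2, 0 = K08/K23
    "federal_funding": [0, 1, 1, 1, 0, 0, 1, 0, 1, 0],   # any post-K federal award
})

# Logistic regression: early publication output as a predictor of later
# federal funding, adjusting for K-award type.
model = smf.logit("federal_funding ~ pubs_5yr + kl2", data=df).fit(disp=False)
print(model.summary())
```

In a sketch like this, a significant positive coefficient on pubs_5yr would correspond to the reported finding that early publication output predicts later federal funding.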
The factors that contributed to this grant-funding advantage remain to be determined. Additionally, differences between translational and clinical bibliometrics must be interpreted with caution, and appropriate metrics for translational science must be established.
CTSC Program Evaluation integrates data tracking and evaluation for all CTSC research components and training cores. It takes a utilization-focused, participatory, and methodologically flexible approach based on the CDC Framework for Program Evaluation and the American Evaluation Association's Program Evaluation Standards. We are actively involved in all levels of CTSA evaluation, from national to community-based, and are available to consult with investigators on services such as developing data collection instruments, collecting social network analysis data, bibliometric analysis, and more.