An Analysis of Bibliometric Indicators, National Institutes of Health Funding, and Faculty Size at Association of American Medical Colleges Medical Schools, 1997-2007
Hendrix, Dean, Journal of the Medical Library Association
Objective: The objective of this study was to analyze bibliometric data from ISI, National Institutes of Health (NIH) funding data, and faculty size information for Association of American Medical Colleges (AAMC) member schools during 1997 to 2007 to assess research productivity and impact.
Methods: This study gathered and synthesized 10 metrics for almost all AAMC medical schools (n=123): (1) total number of published articles per medical school, (2) total number of citations to published articles per medical school, (3) average number of citations per article, (4) institutional impact indices, (5) institutional percentages of articles with zero citations, (6) annual average number of faculty per medical school, (7) total amount of NIH funding per medical school, (8) average amount of NIH grant money awarded per faculty member, (9) average number of articles per faculty member, and (10) average number of citations per faculty member. Using principal components analysis, the author examined the relationships, if any, among these measures.
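The factor-extraction step described above can be sketched as follows. This is a minimal illustration of principal components analysis on a schools-by-metrics matrix via eigen-decomposition of the correlation matrix; the data here are synthetic stand-ins, not the study's AAMC/NIH/ISI dataset, and the shapes (123 schools, 10 metrics) merely mirror the study's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 123 schools x 10 bibliometric/funding metrics.
# (Illustrative only; the study's actual data are not reproduced here.)
n_schools, n_metrics = 123, 10
X = rng.lognormal(mean=3.0, sigma=1.0, size=(n_schools, n_metrics))

# Standardize each metric so PCA operates on the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Eigen-decomposition of the correlation matrix.
R = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]            # sort components by variance, descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Proportion of total variance explained by each component;
# the study reports ~91% for its first three clusters of variables.
explained = eigvals / eigvals.sum()
print("Variance explained by first 3 components:", explained[:3].sum())

# Component scores for each school on the first three components.
scores = Z @ eigvecs[:, :3]
```

Inspecting the loadings in `eigvecs[:, :3]` is what groups the original metrics into interpretable clusters of the kind the Results report.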
Results: Principal components analysis revealed 3 major clusters of variables that accounted for 91% of the total variance: (1) institutional research productivity, (2) research influence or impact, and (3) individual faculty research productivity. Attending to the variables in each cluster allows medical school research to be evaluated in a more nuanced way than a single composite score permits. Significant correlations exist between the extracted factors, indicating an interrelatedness of all variables. Total NIH funding may relate more strongly to the quality of the research than to its quantity. Eliminating medical schools that were outliers on 1 or more indicators (n=20) altered the analysis considerably.
Conclusions: Though popular, ordinal rankings cannot adequately describe the multidimensional nature of a medical school's research productivity and impact. This study provides statistics that can be used in conjunction with other sound methodologies to provide a more authentic view of a medical school's research. The large variance of the collected data suggests that refining bibliometric data by discipline, peer groups, or journal information may provide a more precise assessment.
Bibliometric statistics are used by institutions of higher education to evaluate the research quality and productivity of their faculty. On an individual level, tenure, promotion, and reappointment decisions are considerably influenced by bibliometric indicators, such as gross totals of publications and citations and journal impact factors [1-6]. At the departmental, institutional, or national level, bibliometrics inform funding decisions [1, 7, 8], develop benchmarks [1, 9], and identify institutional strengths [1, 10, 11], collaborative research [1, 12], and emerging areas of research [1, 13, 14]. Because important organizational and personnel decisions rest on these analyses, these statistics and the concomitant rankings elicit controversy. Many scholars denounce the use of ISI's impact factor and immediacy index, as well as citation counts, in assessing a study's quality and influence. Major criticisms of reliance on bibliometric indicators include manipulation of impact factors by publishers, individual self-citations, the uniqueness of disciplinary citation patterns [15, 16], the context of a citation, and deficient bibliometric analysis. Many researchers condemn ISI for promoting and promulgating flawed and biased bibliometric data that rely on unsophisticated or limited methodologies [15, 19, 20], exclude the vast majority of the world's journals [15, 19], and contain errors and inconsistencies [15, 21]. Conversely, other scholars point out the utility of bibliometric measures, even in light of valid criticisms, and posit that they accurately depict scholarly communication patterns [22-24], correlate with peer-review ratings, predict emerging fields of research, show disciplinary influences, and map various types of collaboration. …