Academic journal article The American Journal of Economics and Sociology

Ranking Economics Departments in a Contested Discipline: A Bibliometric Approach to Quality Equality between Theoretically Distinct Subdisciplines


Introduction

Quality ranking of economics journals and departments is a widespread practice in the United States, Europe, Australia, and elsewhere. The methods used are peer review, bibliometric (1) measures, or (in a few cases) an ill-defined combination of the two. (2) Although these methods are subject to various criticisms, they continue to be used because they provide answers of sorts to questions continually asked by economists, undergraduate advisors and students, and university administrators, as well as by government officials when the disbursement of large sums of money to universities is involved (Lee 2006, 2009; Moed 2005; Weingart 2005).

The questions take the general form of '

In a recent article ranking the 129 U.S. economics department programs existing in 2004, Grijalva and Nowell (2008) took a rather unusual bibliometric approach. They first identified the tenure-track or tenured faculty of each department and then identified the journal publications of each faculty member for the period 1985 to 2004, provided the journal was listed in the Journal of Economic Literature database EconLit. (3) Next, they selected the impact factors published in the 2004 Social Science Citation Index (SSCI scores) as the quality measure (Q) for each journal. (4) For each article, a weighting (W) was calculated: the number of pages divided by the number of authors, giving pages per author, which was then divided by the average page length of all articles in the journal for the period 1985 to 2004. (5) The quality measure was then multiplied by the weighting to yield a productivity value (P), that is, P = Q x W, which indicated the weighted quality assigned to each author for each article. These weighted productivity values were summed by individual and then by department. The overall productivity values were used to rank the 129 departments by their absolute scores and by their average productivity (see Table 2, columns 2 and 4, pages 976-980). Finally, each article was assigned a JEL classification code, from which it was possible to rank each department in each JEL "field" by summing the productivity values (see Table 3, pages 981-985, and Table 4, pages 987-994).
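To make the mechanics of the Grijalva and Nowell procedure concrete, the following Python sketch illustrates the weighting and aggregation just described, using hypothetical article records, impact factors, and journal page averages (none of these figures come from the original study). It computes W = (pages per author) / (journal's average article length), P = Q x W, and then sums P by author and by department before ranking departments by total productivity.

from collections import defaultdict

# Hypothetical article records: (department, author, journal, pages, number of authors)
articles = [
    ("Dept A", "Smith", "Journal X", 20, 2),
    ("Dept A", "Jones", "Journal Y", 15, 1),
    ("Dept B", "Brown", "Journal X", 30, 3),
]

# Hypothetical quality measures Q: 2004 SSCI impact factors per journal
impact_factor = {"Journal X": 1.8, "Journal Y": 0.9}

# Hypothetical average page length of articles in each journal, 1985-2004
avg_pages = {"Journal X": 25.0, "Journal Y": 18.0}

author_score = defaultdict(float)
dept_score = defaultdict(float)

for dept, author, journal, pages, n_authors in articles:
    # Weighting W: pages per author, normalized by the journal's average article length
    w = (pages / n_authors) / avg_pages[journal]
    # Productivity value P = Q x W for this article-author combination
    p = impact_factor[journal] * w
    author_score[author] += p
    dept_score[dept] += p

# Rank departments by absolute (total) productivity
for rank, (dept, score) in enumerate(
        sorted(dept_score.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(rank, dept, round(score, 3))

Average productivity per department, the second ranking reported by Grijalva and Nowell, would simply divide each department's total by its number of faculty.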

Grijalva and Nowell acknowledged that SSCI impact factor based rankings are open to criticisms, such as concerns about the accuracy of the article-author-department combination and a bias in favor of North American, Western European, and English-language journals, among others (see Nisonger 2004). (7) However, given the domain of their study and the method of collecting the article-author-department data, these usual criticisms are minimized if not irrelevant. …
