Atlantic Economic Journal

Methods of Ranking Economics Journals

Article excerpt

Introduction

The number of pages published annually in economics journals has increased substantially since the 1970s. Over the same period, interest in ranking economics journals by influence and quality has also grown, and a variety of methods have been used to produce such rankings. One of the early rankings of economics journals, though not the earliest, by Hawkins, Ritter, and Walter [1973], used the Delphi method to compute a ranking of 87 journals from the opinions of 160 academic economists. This method is based on opinions and disregards quantitative data. A later paper, also based on the survey method, questioned department heads and journal editors [Enomoto and Ghosh, 1993] and stratified the responses, for example by the department head's specialty and degree-granting institution.

Another early paper, by Bush, Hamelman, and Staaf [1974], used journal citation data to produce what they called a quality index for economics journals. This method counted the citations journals received from themselves and from others to produce a ranking of 14 journals. By limiting the list to 14, they essentially produced a ranking of core journals in economics. The weakness of this approach is that it made no adjustment for the footprint of the journals being ranked, where the footprint can be thought of as the number of articles (or pages, characters, or some other measure of output) published per year.

Liebowitz and Palmer [1984] constructed rankings of 108 economics journals that took account of each journal's footprint and also produced an impact-adjusted ranking; this method was later used by Laband and Piette [1994]. First, journals were ranked by the number of citations per character published over a period of time. This ranking was then adjusted by assigning a weight to the journal doing the citing; non-economics journals were assigned a weight of zero and, in effect, were not counted. The result is an impact-adjusted ranking that accounts for a journal's influence within economics. The impact adjustment raises the rank of journals that are often cited by other journals which themselves have a large presence in the profession by virtue of being often cited, and it lowers the rank of journals whose citations come from journals that are, in turn, cited less often by others in the profession (a computational sketch of this weighting appears below). Their list, however, ignores the influence of journals arbitrarily treated as non-economic. This step is problematic because several hundred journals with economic content were not included on their list; EconLit, for example, indexes several hundred journals that meet its criterion for inclusion. By leaving out more journals with economic content than were included, the impact adjustment introduces an element of arbitrariness into the results.

Another approach, Data Envelopment Analysis (DEA), was used by Burton and Phimister [1995] to rank 27 core economics journals. DEA maximizes the ratio of a weighted sum of journal outputs to a weighted sum of journal inputs to produce a ranking (sketched below). The object of the process is to let a computer algorithm remove some of the apparent arbitrariness of the weighting process used by Diamond [1989] in an earlier paper, while producing a ranking scheme that indicates the efficiency of journals in producing citations. In their sample, weights were attached to total citations, the proportion of self-citations, and an impact factor; weights were applied to other data in the other rankings produced. This approach is sensitive to the data chosen for weighting.

A more recent ranking, based on journal-to-journal citations, by Stigler, Stigler, and Friedland [1995] made use of what they dubbed the export scores model, which was used to rank nine core journals in economics. The advantage of this approach is that, for a group of journals with a fairly strong trade in citations (a good fit of model to data), the model produces a set of numbers measuring the propensity of one journal to be cited by another journal rather than the other way around. …
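As a rough illustration of the impact-adjusted approach described above, the sketch below first computes citations per character and then re-weights each citation by the (normalized) score of the citing journal, iterating to a fixed point. The citation matrix, character counts, and the iterative scheme are illustrative assumptions, not the procedure or data of Liebowitz and Palmer [1984] or Laband and Piette [1994].

```python
import numpy as np

# Hypothetical data: cites[i, j] = citations journal i receives from journal j;
# chars[i] = characters (footprint) journal i published over the period.
# The numbers are illustrative only.
cites = np.array([
    [10., 40., 25.],
    [30.,  8., 15.],
    [ 5., 10.,  4.],
])
chars = np.array([2.0e6, 1.5e6, 0.8e6])

# Unadjusted ranking: citations received per character published.
raw_score = cites.sum(axis=1) / chars

# Impact adjustment: weight each citation by the current score of the citing
# journal and iterate until the scores stop changing. Journals outside the
# list implicitly carry a weight of zero, as in the approach described above.
score = np.ones(len(chars))
for _ in range(200):
    new_score = (cites @ score) / chars   # weighted citations per character
    new_score /= new_score.sum()          # normalize so scores are comparable
    if np.allclose(new_score, score, atol=1e-12):
        break
    score = new_score

print("unadjusted order:     ", np.argsort(-raw_score))
print("impact-adjusted order:", np.argsort(-score))
```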
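The DEA idea can be made concrete with the standard CCR multiplier formulation: for each journal, choose non-negative weights that maximize its ratio of weighted outputs to weighted inputs, subject to no journal's ratio exceeding one. The sketch below solves the linearized program with scipy; the input and output columns are placeholders and do not reproduce Burton and Phimister's [1995] data or exact specification.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical inputs and outputs per journal (one row per journal). For example,
# inputs might be pages published and journal age; outputs might be total
# citations and an impact factor. Placeholder numbers only.
X = np.array([[120, 30], [200, 25], [90, 40], [150, 35]], dtype=float)     # inputs
Y = np.array([[800, 1.2], [950, 0.9], [400, 1.5], [700, 1.1]], dtype=float)  # outputs

def dea_efficiency(X, Y, o):
    """CCR efficiency of journal o: max u.y_o s.t. v.x_o = 1 and u.Y_j <= v.X_j."""
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: [u (s output weights), v (m input weights)].
    # linprog minimizes, so minimize -u.y_o.
    c = np.concatenate([-Y[o], np.zeros(m)])
    # For every journal j: u.Y_j - v.X_j <= 0 (no ratio exceeds one).
    A_ub = np.hstack([Y, -X])
    b_ub = np.zeros(n)
    # Normalization: v.x_o = 1.
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

for o in range(len(X)):
    print(f"journal {o}: efficiency = {dea_efficiency(X, Y, o):.3f}")
```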
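Finally, the "propensity of one journal to be cited by another rather than the other way around" can be illustrated with a Bradley-Terry-style model fitted to pairwise citation flows. This is a stand-in for the general idea, not the export-scores estimation actually used by Stigler, Stigler, and Friedland [1995]; the citation matrix is invented for the example.

```python
import numpy as np

# Hypothetical flows: cites[i, j] = citations journal i receives from journal j.
cites = np.array([
    [ 0., 60., 35., 20.],
    [25.,  0., 30., 15.],
    [15., 20.,  0., 10.],
    [10., 12.,  8.,  0.],
])

n = len(cites)
wins = cites.sum(axis=1)        # citations each journal "exports" (receives)
totals = cites + cites.T        # total citation trade between each pair

# Model: a citation flowing between journals i and j runs toward i with
# probability s[i] / (s[i] + s[j]). Fit s with the standard Bradley-Terry
# minorization-maximization update.
s = np.ones(n)
for _ in range(1000):
    denom = np.array([
        sum(totals[i, j] / (s[i] + s[j]) for j in range(n) if j != i)
        for i in range(n)
    ])
    new_s = wins / denom
    new_s /= new_s.sum()        # fix the overall scale
    if np.allclose(new_s, s, atol=1e-12):
        break
    s = new_s

print("export-style scores:", np.round(s, 3))
print("ranking (best first):", np.argsort(-s))
```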
