Problems in Using the Social Sciences Citation Index to Rank Economics Journals

I. Introduction

Liebowitz and Palmer (1984) and Laband and Piette (June 1994), in two influential studies, have used the Social Sciences Citation Index (SSCI) Journal Citation Reports to rank economics journals and to measure their relative impact over time. One motivation for doing so is to assess the changing academic journal market in economics. Laband and Piette thus report journal rankings for 1970, 1980, and 1990 by impact-adjusted citations per article - the iterative weighting procedure developed by Liebowitz and Palmer to capture the relative importance of citations in terms of the rank position of the citing journal. They then reason that changes in the distribution of citations across journals, and associated changes in journal ranks, are the academic community's version of changes in dollar voting by consumers across commodities. A second motivation recognized by Laband and Piette concerned a subsequent use of the Liebowitz and Palmer study: their 1980 journal rankings have been used at many colleges and universities to help evaluate individual scholars' productivity in order to determine salary increases and to make tenure and promotion recommendations. Rather than simply counting the number of publications an individual had accumulated in 'core' journals,(1) the value of that scholarship might better be determined as a weighted sum, with publications in higher-ranked journals receiving larger weights.
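To make the weighting procedure concrete, the following is a minimal sketch in Python of the kind of iterative scheme Liebowitz and Palmer describe, using a purely hypothetical three-journal citation matrix; the published method also deflates citations by article counts or characters per article, which is omitted here:

    import numpy as np

    # Hypothetical citation matrix: C[i, j] = citations appearing in
    # journal i to articles in journal j.
    C = np.array([
        [10., 40., 5.],
        [30., 12., 8.],
        [6., 9., 4.],
    ])

    weights = np.ones(3) / 3    # start all journals at equal weight
    for _ in range(100):
        new = C.T @ weights     # citations received, each valued at the citing journal's weight
        new /= new.sum()        # renormalize each round
        if np.allclose(new, weights):
            break
        weights = new

    print(100.0 * weights / weights.max())  # rescaled so the top journal scores 100.0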

This paper comments on two problems involved in ranking all economics journals according to a single index using SSCI data. One is conceptual in nature, and will be familiar to economists acquainted with index number problems. The other is technical, and pertains to using the SSCI data as a source of information about the relative impact of economics journals. The view taken in this paper is that these problems indicate the need for considerable caution in using the existing journal rankings to evaluate scholarly productivity and to rank economics departments. These problems, however, need not bring into question the industrial organization interpretation that the studies considered here adopt toward the market for economics journals, and indeed they point toward interesting extensions of some of the conclusions reached by Laband and Piette.

II. Problem One: Apples and Oranges

The Social Sciences Citation Index information used in both the Liebowitz and Palmer study and the Laband and Piette study is drawn from the SSCI classification "Economics & Business," which as of July 1, 1991 covered the 155 journals used in the latter of the two studies. In contrast, the June 1991 Journal of Economic Literature provided publication information for 249 journals. Laband and Piette note that the SSCI "Economics & Business" classification included what they regarded as 23 noneconomics journals. By comparison, they regard only eight of the journals indexed by the JEL as noneconomics journals. The JEL thus indexes 109 more economics journals than the SSCI does. Laband and Piette, however, are constrained to limit their analysis to those journals for which citation information exists, and thus rank 130 of the 155 journals for which there is SSCI data, eliminating the 23 noneconomics journals and two economics journals lacking full citation data. They comment, "We are confident . . . that our rankings include all the major economics journals published during that time [1985-89, for the 1990 ranking]" (June 1994, p. 642n).
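The arithmetic behind these counts can be made explicit with a trivial Python sketch, using only the figures just quoted (the "nearly eighty-five percent" figure cited below follows from the 130 ranked journals):

    jel_indexed, jel_noneconomics = 249, 8
    ssci_indexed, ssci_noneconomics = 155, 23
    jel_econ = jel_indexed - jel_noneconomics     # 241 economics journals in the JEL
    ssci_econ = ssci_indexed - ssci_noneconomics  # 132 economics journals in the SSCI
    print(jel_econ - ssci_econ)                   # 109 indexed only by the JEL
    ranked = ssci_econ - 2                        # 130 journals actually ranked
    print((jel_econ - ssci_econ) / ranked)        # ~0.84, nearly eighty-five percent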

This may be true, but that 109 economics journals (nearly an additional eighty-five percent of those they included) were not ranked suggests that there are significant difficulties in determining the boundary between economics and noneconomics journals. Part of the problem is simply that the SSCI data covers too few journals. There is, however, a deeper problem: in order to break economics citation information out of the whole of the social science citation data via the "Economics & Business" classification, economics must be treated as a single, undifferentiated category distinct from noneconomics. That the JEL lists so many economics journals not included in the SSCI data immediately suggests that there are inherent difficulties in treating economics this way. Presumably the SSCI attempts to avoid what may be perceived as borderline cases by excluding a large number of journals. But because there are so many journals the SSCI treats as noneconomics that the JEL treats as economics, it is plausible that even within the SSCI list there are journals with some 'noneconomics' content. That is, it seems more realistic to say that journals in and out of the SSCI "Economics & Business" classification are economics or noneconomics journals in varying degrees, according to characteristics that could be developed to distinguish the two. Since journals excluded from the SSCI list would presumably have more of the noneconomics-type characteristics, high (low) ranked SSCI journals would likewise tend to have more (fewer) of the economics-type characteristics. Were this the case, however, it would mean that economics journals are heterogeneous products along two dimensions: (i) different journals producing different qualities of the same product, and (ii) different journals producing different products. The SSCI ranking literature fails to capture such internal differentiation, since in excluding purportedly noneconomics journals from the "Economics & Business" list, it assumes that all journals produce the same product and can thus be ranked along a single quality index.

This suggests that while the upper boundary between economics and noneconomics journals in the SSCI rankings may be reasonably well defined, since such journals have a preponderance of economics-type characteristics, as we move toward the lower boundary it becomes increasingly difficult to explain just what the distinction between the two sorts of journals involves, and thus increasingly likely that the lower boundary between economics and noneconomics journals is not well defined. Indeed, the existence of the 109 additional JEL economics journals outside the SSCI "Economics & Business" category suggests it may not be possible to speak uncontroversially of a lower boundary to the set of economics journals at all. It is not surprising, then, that Diamond (1989) thought to compile a list of only 27 core journals (more than 100 fewer than Laband and Piette and more than 200 fewer than the JEL). Clearly, as more journals toward the lower boundary of the rankings are eliminated, the case for regarding the remaining journals as members of a single set would appear to improve. Indeed, as Laband and Piette show, the distribution of citations across economics journals is highly skewed. Ranking only 'top' journals against one another might then be defended on the grounds that, though Herfindahl indexes indicate a decreasing concentration of citations among core journals over the 1965-1990 period, a Lorenz curve analysis shows that "the proportion of journals attracting the lion's share of citations did not increase," so that "in terms of both unadjusted and impact-adjusted citations, the inequality in the distribution of citations has remained relatively constant over the decades in question" (June 1994, p. 655).
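For readers unfamiliar with these concentration measures, a minimal Python sketch over an invented, highly skewed vector of citation counts (all numbers hypothetical):

    def herfindahl(citations):
        # Sum of squared citation shares; falls as citations spread out.
        total = sum(citations)
        return sum((c / total) ** 2 for c in citations)

    def top_share(citations, k):
        # Share of all citations held by the k most-cited journals:
        # one point on the Lorenz curve.
        return sum(sorted(citations, reverse=True)[:k]) / sum(citations)

    cites = [900, 400, 150, 80, 40, 20, 10]  # hypothetical counts
    print(herfindahl(cites))                 # concentration across journals
    print(top_share(cites, 2))               # the 'lion's share' held by the top two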

But there are a number of important reasons for not retreating to a system that ranks only one subset of all economics journals. First, this would mean eliminating the great majority of economics journals from any sort of ranking. Second, doing so could well have a chilling effect on innovation in economic ideas. Third, it would discourage economics research in areas with significant noneconomics content. Fourth, truncation would produce a set of core journals that continually changed at the margin, since for any core set some journals would enter and others fall out of the set of rankable journals over time according to their relative success (somewhat as English football teams move between divisions over time).

For these reasons and possibly others, the SSCI rankings seem to have received more attention than short-list approaches to ranking journals. But this somewhat more inclusive strategy for evaluating journals has its own costs, since not only do these rankings discriminate against non-SSCI economics journals, but it may be argued that they are biased against SSCI journals that share more content with other social sciences. Thus authors may recognize that the SSCI breakout principle is meant to distinguish economics from noneconomics, and then favor journals with the fewest apparent noneconomics-type characteristics. For the top economics journals this may raise few questions, since their reputation for high quality would likely dominate authors' concerns about content. But distinguishing economics from noneconomics journals would likely depress the rankings of 'non-top' or lower ranked (including what Laband and Piette term intermediate or "second-tier") economics journals, particularly where these are: (i) specialty or field journals that require significant institutional context, (ii) applied rather than 'pure' theory journals, (iii) journals that include important interdisciplinary themes, (iv) journals that depart from mainstream economics, and (v) journals that employ non-standard methods.

To illustrate the possible bias involved, consider the following results from the Laband and Piette 1990 ranking. One field journal, the Journal of Labor Economics, the highest ranked labor journal and in twentieth position overall, has 17.1 impact-adjusted citations per character to articles published in 1985-89, compared to 100.0 impact-adjusted citations per character for the American Economic Review, the highest ranked economics journal. A journal with many interdisciplinary themes, the Journal of Law, Economics and Organization, apparently the highest ranked journal with such themes and in twenty-fourth position overall, has 12.8 impact-adjusted citations to 1985-89 articles (Laband and Piette, Table A2). Yet when evaluating the quality of economics research in these and other field and specialty journals, and when using the SSCI journal rankings to evaluate scholarly productivity, it seems intuitively wrong to say that an article published in the best labor economics or the best interdisciplinary journal has roughly a sixth or an eighth of the value of an article published in the best general-interest journal. But just this conclusion may be drawn if one uses the SSCI literature to produce weights for what may better be thought of as apples and oranges.
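Treating the published scores as article weights makes the implied discount explicit; a trivial Python sketch using only the figures quoted above:

    # Impact-adjusted scores from the 1990 ranking, read as article weights.
    scores = {
        "American Economic Review": 100.0,
        "Journal of Labor Economics": 17.1,
        "Journal of Law, Economics and Organization": 12.8,
    }
    aer = scores["American Economic Review"]
    for journal, s in scores.items():
        # prints ~0.171 and ~0.128: roughly a sixth and an eighth of an AER article
        print(journal, round(s / aer, 3))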

The SSCI rankings, then, create incentives for economists to conceive of economics research not according to the logic of development of economic ideas, but according to a relatively arbitrary classification procedure designed to compartmentalize economics as a distinct field for bibliographic cataloguing purposes. But explaining the distinctiveness of economics as a field, its connections to other social sciences, and its subdivisions is something economists should be responsible for doing. In fact, the JEL Classification System for Books and Journal Articles provides evidence that economists have already concluded that, just as economics and other social science journals are not compared with one another, so economics journals in different subclassifications ought not to be compared with one another. Laband and Piette lend support to closer attention to subclassifications of journals in suggesting that field and specialty journals as a class appear to have prospered in recent years at the expense of second-tier general-interest journals (p. 657; also cf. Feb. 1994). Thus, compared to a system in which field and specialty journals are classified separately, the current scheme tends both to give second-tier general-interest journals lower rankings relative to top general-interest journals than they would have were they compared only with the latter, and to give field and specialty journals lower rankings relative to top general-interest journals than they would have were they compared only with comparable journals.

It is worth noting, then, that compared to rankings of journals by subclassification, under the current scheme scholars publishing in both field/specialty and second-tier general-interest journals, whose publications are weighted in salary and promotion decisions by the rank values of the journals in which they appear, would have good grounds for arguing that their research productivity is being systematically undervalued. By the same token, the research of individuals publishing in the top general-interest journals, which Laband and Piette note have been remarkably successful in maintaining market dominance over the twenty years considered, could be said to be relatively overvalued through comparison with lower ranked journals. Relatedly, since the rankings of departments across universities and colleges are often tied to scholarly productivity as measured by the quality of department members' journal outlets (Graves, Marchand, and Thompson, 1982; Hogan, 1984; Laband, 1985; Bairam, 1994; Conroy et al., 1995), departments with more field/specialty and second-tier general-interest publications could well argue that their faculties have been systematically under-ranked, and that the scholarly output of departments with more publications in top general-interest journals has been consistently over-ranked.(2)

Of course there are also problems with ranking journals and scholarly productivity by subclassification, since some fields or specialties are difficult to define, and since papers published in many journals draw from different major JEL classifications. More research into the industrial organization of economics journals would thus be necessary, perhaps building on or modifying JEL classifications with characterizations of journal groups in terms of high cross-citation rates. An implication of any such effort is that, just as industry classifications change over time, so would journal classifications. It should not be thought, then, that a comprehensive system of subclassifications of journals for ranking purposes would be easy to construct. The argument here is merely that there are important problems in the existing system of economics journal rankings that urge caution in using those rankings in the evaluation of scholarly productivity, especially where significant economic and developmental implications are involved. In essence, then, just as economics journals are broken out of the full list of social science journals for which the SSCI produces citation data, it seems sensible to develop rankings that compare journals by subsets - apples with apples and oranges with oranges, rather than apples with oranges, as currently appears to be done with single rankings of journals.

III. Problem Two: SSCI "all other" Citations

The "all other" problem concerns the method of ranking journals by impact adjusted citations used in both the JEL studies discussed above. The SSCI citation data as published in the "Journal Citation Reports" is presented in a format in which under each journal entry the citations to that journal are listed according to the journals in which they appeared, beginning with the citing journal with the most citations to the cited journal, proceeding to the citing journal with the next highest number of citations to the cited journal, and so on. However, not all of any journal's citations are identified by citing journal. In every case the list is truncated, and some portion of a journal's citations are simply entered as "all other." Thus the total number of citations to a journal includes those identified according to citing journal and those that cannot be so identified. The "all other" citations seem not to have been identified by citing journal by the people compiling the SSCI data on account of the time and cost of doing so.

This presents a difficulty for the calculation of the impact-adjusted journal rankings used by both Liebowitz and Palmer and Laband and Piette. Both studies use an iterative procedure to create weights for citations to journals, where a citation from a high ranked journal carries greater weight than a citation from a low ranked journal. The rationale for doing so is clear: simply ranking journals by total citations fails to allow for the quality of citations, and would permit journals with many citations from low ranked journals to rank higher than journals with fewer citations from high ranked journals. However, since producing impact-adjusted rankings requires one to identify the citing journal, all of a cited journal's citations that fall in the "all other" category must be ignored, that is, given a weight of 0 in the iterative procedure. Thus the impact-adjusted ranking method works with only a portion of the total number of citations to each journal, namely, those identified by citing journal in the short list under the entry for the cited journal.
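In terms of the earlier sketch, one weighting step then looks as follows (weights and counts hypothetical); only the attributed citations contribute to the score:

    weights = {"AER": 1.00, "JPE": 0.85}  # hypothetical current-iteration weights
    citations_to_j = {"AER": 20, "JPE": 15, "all other": 40}

    score = sum(weights.get(source, 0.0) * n  # unattributed source -> weight 0
                for source, n in citations_to_j.items())
    print(score)  # 32.75, built from only 35 of the journal's 75 citations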

Unfortunately, the percentage division of citations between those identified by citing journal and those listed as "all other" varies randomly across the list of journals. Thus one journal may have, say, forty percent of its citations left unidentified as "all other," while another journal has only twenty percent so listed. Equivalently, the first journal has sixty percent of its citations available for ranking, whereas the second has eighty percent. In a simple comparison of 1985-89 journal rankings by total citations including "all other" citations and by total citations excluding them, it was found that though the first nine journals are identical in rank in both lists, the remaining journals move up and down in rank, in some instances by a considerable number of places. For example, the Journal of Economic Literature is ranked twentieth when the "all other" citations are included, but falls eighty percent in rank, to thirty-sixth, when they are excluded. Economic Inquiry is ranked forty-seventh when the "all other" citations are included, but falls thirty-four percent, to sixty-third, when they are excluded. Some journals outside the top nine admittedly hardly change in rank at all, but this must be accidental, considering the variability of their neighbors. In general, since the impact-adjusted ranking is based on citations excluding the "all other" category, journals with a large (small) percentage of their total citations in that dropped category tend to do worse (better) in the final impact-adjusted rankings than they would were their "all other" citations identified and included.
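A small Python sketch shows how such reshuffling arises even among journals with similar totals (all counts and "all other" shares invented for illustration):

    journals = {  # name: (total citations, fraction listed as "all other")
        "J1": (1000, 0.10),
        "J2": (950, 0.40),
        "J3": (900, 0.15),
    }
    by_total = sorted(journals, key=lambda j: -journals[j][0])
    by_identified = sorted(journals,
                           key=lambda j: -journals[j][0] * (1 - journals[j][1]))
    print(by_total)       # ['J1', 'J2', 'J3']
    print(by_identified)  # ['J1', 'J3', 'J2'] - J2 falls once its 40% is dropped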

The "all other" problem, then, creates a pair of unattractive options regarding the use of SSCI information. On the one hand, as the two JEL ranking studies correctly argue, one ought not rank journals by total citations only (even adjusting for such things as characters per page as both the major studies cited here do), since doing so ignores citation quality. On the other hand. because the SSCI "Journal Citation Reports" allow the percentage of "all other" citations to vary, the impact-adjusted procedure designed to take quality of citation into account cannot rank most economics journals reliably.

IV. Concluding Remarks

These points imply that impact-adjusted rankings based on SSCI data should be applied only with considerable caution when evaluating the scholarly productivity of individuals and departments. One might yet conclude that scholarly productivity can still be evaluated, if more crudely and somewhat arbitrarily, in terms of the number of a scholar's top-journal publications. It seems, however, that the apples-and-oranges difficulties in identifying the boundaries delimiting economics journals from noneconomics journals raise serious questions about this strategy. Authors whose research approaches the discipline's boundaries are generally less likely to publish in top journals than authors whose research is clearly distinguished from noneconomics research. But it seems inappropriate to evaluate scholarly productivity in terms of orientation within the discipline rather than in terms of quality of contribution, and it also seems clear that high quality papers appear in field and interdisciplinary journals that are not top ranked.

It was noted at the outset that this paper does not question the industrial organization interpretation the two impact-adjusted ranking papers develop, and that indeed we hope to reinforce some of the conclusions reached by Laband and Piette regarding the current industrial organization of economics journals. One conclusion to be drawn from the Alston et al. (1992) study of the opinions of US economists regarding the current state of economics is that there may be incentives for economists to differentiate their products from one another. A major finding of Laband and Piette is that recent decades have seen a proliferation of specialty journals, where the "rapid entry by and success of field journals surely reflects the advantages of specialization" (p. 657). No doubt most economists would regard this development as healthy. It also seems to indicate where the value of the SSCI economics journal rankings may lie: not in providing a means of evaluating scholarly productivity, but in providing a broad-brush picture of the overall development of the discipline.

Notes

1. Diamond (1989) developed a list of 'core' economics journals which has generated considerable controversy (Hodgson, 1993; Burton and Phimister, 1995). Also see Conroy et al (1995).

2. A further implication is that SSCI-driven journal and department rankings may also distort graduate and undergraduate education, through research biases that the rankings perpetuate and that are passed on to students - across colleges and universities, across types of economics departments within institutions, and within department faculties.

References

Alston, R., Kearl, J. R. and Vaughan, M. B. "Is There a Consensus Among Economists in the 1990s?" American Economic Review, May 1992, 82(2), pp. 203-209.

Bairam, E. I. "Communication: Institutional Affiliation of Contributors to Top Economic Journals, 1985-1990," Journal of Economic Literature, June 1994, 32(2), pp. 674-679.

Burton, M. P. and Phimister, E. "Core Journals: A Reappraisal of the Diamond List," Economic Journal, March 1995, 105(2), pp. 361-73.

Conroy, M. E. and Dusansky, R., with Drukker, D. and Kildegaard, A. "The Productivity of Economics Departments in the U.S.: Publications in the Core Journals," Journal of Economic Literature, December 1995, 33(4), pp. 1966-1971.

Diamond, A. M. "The Core Journals in Economics," Current Contents, January 1989, 21, pp. 4-11.

Graves, P. E., Marchand, J. R., and Thompson, R. "Economics Departmental Rankings: Research Incentives, Constraints, and Efficiency," American Economic Review, December 1982, 72(5), pp. 1131-41.

Hodgson, G. "The Ranking of Heterodox Economics Journals in the UK Research Funding Exercise," European Association for Evolutionary Political Economy, mimeo, 1993.

Hogan, T. "Economics Departmental Rankings: Comment," American Economic Review, September 1984, 74(4), pp. 827-33.

Institute for Scientific Information. Social Sciences Citation Index. Philadelphia, PA: 1990.

Laband, D. N. "An Evaluation of 50 'Ranked' Economics Departments by Quantity and Quality of Faculty Publications and Graduate Student Placement and Research Success," Southern Economic Journal, July 1985, 52(1), pp. 216-40.

Laband, D. N. and Piette, M. J. "Favoritism Versus Search for Good Papers: Empirical Evidence On the Behavior of Journal Editors," Journal of Political Economy, February 1994, 102(1), pp. 194-203.

Laband, D. N. and Piette, M. J. "The Relative Impacts of Economics Journals: 1970-1990," Journal of Economic Literature, June 1994, 32(2), pp. 640-666.

Liebowitz, S. J. and Palmer, J. C. "Assessing the Relative Impacts of Economics Journals," Journal of Economic Literature, March 1984, 22(1), pp. 77-88.

John B. Davis, Associate Professor of Economics, Marquette University, and Editor, Review of Social Economy. The author gratefully acknowledges helpful comments from Philip Arestis, Robert Drago, Marianne Ferber, Daniel Hamermesh, Lee Hansen, Geoff Hodgson, David Laband, Eric Nilsson, Robert Toutkoushian, and David VanHoose, and the research assistance of A. J. Charri.