Academic journal article Journal of Sociology

Why Are the Most Influential Books in Australian Sociology Not Necessarily the Most Highly Cited Ones?

Article excerpt

Evaluating research is difficult. Evaluating research in the social sciences, arts and humanities is extremely difficult. Evaluating research in the social sciences, arts and humanities by applying bibliometric (publication- or citation-based) methods seems to be impossible. These fields produce and communicate knowledge in a specific way that differs from that of the natural sciences. (1) They may therefore apply their own concept of scientific quality, which would require specific bibliometric indicators, a specific interpretation of existing indicators, or even the complete abandonment of bibliometric indicators. Differences between modes of knowledge production also limit the usefulness of publication databases in the social sciences, arts and humanities. Following the Science Citation Index (SCI), the Social Science Citation Index (SSCI) covers only journals of international importance. This means that books, which are a major output in the social sciences, arts and humanities, are not included in that database. Nor are many journals devoted to nationally specific subjects, because these journals usually receive less international attention (Nederhof et al., 1989; Nederhof and Zwaan, 1991).

These are only two of the problems facing any bibliometric evaluation in the social sciences, arts and humanities. The current standard practice is to mention these and other limitations and then to provide some sort of evaluation nevertheless (Phelan, 2000; Najman and Hewitt, 2003). This practice of presenting the author as methodologically considerate while offering questionable evaluations is regrettable. Once an evaluation is published, damage is done in any case, because the results will stick regardless of the methodological caveats.

Given the strong political demand for quantitative evaluation, the methodological problems call for inquiries into what can and should be done with bibliometric indicators, and what should be avoided. (2) A particularly good occasion for exploring the applicability of bibliometric indicators is any instance of peer review, because in a peer review the scientific community's concept of quality is applied. An even better occasion is a peer review that is not burdened by funding decisions and the inevitable political considerations accompanying them.

Thus, the vote for the Most Influential Book in Australian Sociology (MIBAS) appeared to be a good opportunity to examine the concept of influence applied by Australian sociologists and its relation to citation-based indicators, which are supposed to measure influence. Comparing several indicators to the ranking produced by the peer review voting process will shed light on the applicability of bibliometric indicators in evaluations of sociological work. It may also contribute to our knowledge about the role books play in the knowledge production of the social sciences. The aim of this article is therefore to explore the concept of influence by comparing the MIBAS vote to several bibliometric indicators that can be constructed to measure different aspects of influence quantitatively.

Methodology, data and methods

Comparing peer reviews to bibliometric evaluations--state of the art

Since bibliometric indicators were introduced in the 1960s, their validity has been a point of concern for both the sociology of science and the emerging field of bibliometrics. It became apparent that this validity will never be conclusively determined, because there is no yardstick for the underlying concepts of scientific productivity, influence or quality. In order to assess the validity of bibliometric indicators, they have been compared to various quality assessments by peer review. The very first empirical study that applied bibliometric indicators analysed the research of 120 US-American physicists (Cole and Cole, 1967). To validate their bibliometric indicators, Cole and Cole used an existing ranking of US-American university departments, Academy memberships and academic prizes. …
