Academic journal article: Partnership: The Canadian Journal of Library and Information Practice and Research

An Examination of Bibliometrics in Calls for Major Canadian Research Awards

Article excerpt

Introduction

The academic librarian's role in bibliometrics appears to be expanding in both nature and scope. A symposium held in late 2016 at the National Institutes of Health in Bethesda, Maryland showcased the vast array of information professionals charged with some form of bibliometric analysis across Canada and the United States. Whether in the form of reports or visualizations, librarians are taking steps to play a more prominent role in this aspect of research assessment.

Evidence of this can also be seen in LIS literature. For example, in his OCLC report "Research Assessment and the Role of the Library", MacColl (2010) calls on libraries to acknowledge disciplinary differences in research products, to use library usage statistics as an additional form of research output assessment data, and to liaise with faculty on the topic of bibliometrics. Bladek (2014) seconds this notion and points to a suite of reasons why librarians are well suited to assume this role, perhaps most importantly their long history of frequent "work with scholarly databases and indexes that track citations and other metrics" (p. 332).

Aside from learning the ins and outs of common bibliometric indicators (MacColl, 2010) and exploring new bibliometric analysis software and products on the market (Bladek, 2014), librarians also stand to benefit from exploring bibliometrics in greater depth. If asked to assist with award nomination preparation at their institutions, Canadian academic and special librarians may be well served by an understanding of how metrics fit into the Canadian award context and what factors they need to consider. Accordingly, this paper examines fifteen major Canadian award calls to determine whether bibliometrics are requested as part of the application process. This research also aims to determine whether there are indications that including bibliometric values in an award application could strengthen that application.

Context

Broadus (1987) defines bibliometrics as the quantification of "physical units of publications, bibliographic citations, and surrogates for them" (p. 377). Citation counts and the h-index are examples of metrics often mentioned in discussions of research impact. Citation counts measure the number of times a researcher's work appears in the reference lists of other scholarly works. The h-index takes into account both the number of documents a scholar has produced and the number of citations that work has received; Hirsch (2005) defined it as:

"A scientist has index h if h of his or her N_p papers have at least h citations each and the other (N_p - h) papers have ≤ h citations each" (p. 16569).
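To make Hirsch's definition concrete, the h-index can be computed directly from a list of per-paper citation counts. The sketch below is illustrative (the function name and sample counts are not from the article):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        # The paper at position `rank` (1-indexed, sorted descending) must
        # itself have at least `rank` citations for h to reach that rank.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher with five papers: four papers have at
# least 4 citations each, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Sorting in descending order means the h-index is simply the last rank at which the citation count still meets or exceeds the rank.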

Haustein and Larivière (2015) contrast these "basic or simple bibliometric indicators" (p. 128) with normalized values, which account for differing citation rates across disciplines and document types, as well as for the amount of time that has passed since publication. Normalization is done by taking the expected citation count (calculated from the set of all papers in the same subject area, of the same document type, and published at the same time) and comparing the citation count achieved by a given document against it (Haustein & Larivière, 2015).
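The normalization described above reduces to a ratio of observed to expected citations. A minimal sketch, assuming the expected value has already been computed for the matching subject area, document type, and publication year (the function name and figures are illustrative, not from the article):

```python
def normalized_citation_score(observed, expected):
    """Field-normalized citation score: observed citations divided by the
    expected (average) citations for papers of the same subject area,
    document type, and publication year."""
    if expected <= 0:
        raise ValueError("expected citation count must be positive")
    return observed / expected

# A paper cited 30 times, in a field/year/document-type set whose
# comparable papers average 15 citations, scores twice the baseline.
print(normalized_citation_score(30, 15))  # prints 2.0
```

A score above 1.0 indicates the document is cited more than comparable papers; below 1.0, less.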

Regardless of whether bibliometric values have been normalized, their use in research assessment is a point of contention, especially at the level of the individual researcher, given "the large fluctuations of the numbers at such a microscale" (Gingras, 2016, p. 9). Although bibliometrics were once viewed as one way to reduce the academic community's reliance on the subjective nature of peer review, Gingras (2016) points to the necessity of human judgement when interpreting bibliometric values, since numbers alone cannot tell the entire story of a particular researcher's work. …
