Measuring Canadian Business School Research Output and Impact
Erhan Erkut, Canadian Journal of Administrative Sciences
The goal of this paper is to quantify both the output and the impact of the past decade's scholarly research carried out by those academics currently employed by Canadian business schools, using journal paper counts and citation analysis. We find that the per capita paper output in Canadian business schools is relatively low and is declining. We also find that there are significant differences across Canadian business schools, and that the paper and citation credits are highly variable, with a few "stars" producing most of the impact.
In this article we attempt to measure the output and impact of academic research in Canadian business schools during the past decade, using counts of published articles and of citations in a set of specialized journals. We find that article production by Canadian schools is relatively low and that the trend is worsening. We also find that there are significant differences among Canadian schools in terms of academic output, with a few researchers responsible for the majority of total production.
The goal of the study described in this article is to produce a quantitative snapshot of the management research produced by Canadian business schools during the past decade. The research questions of interest in this study include: how many papers have been published by Canadian business academics; where have these papers been published; how many times have these articles been cited; how do Canadian business schools compare to one another in aggregate statistics; and how are the output and impact measures distributed among members of a school? The results of our study may be of interest to stakeholders such as potential students, employers of graduates, university administrators, potential donors, funding agencies, and governments. This paper discusses our findings, while an accompanying web site (http://www.bus.ualberta.ca/citationstudy2/) contains the data we used and a full tabulation of the results.
The measurement of various characteristics of universities, faculties, and programs is a fairly common practice. Many academic papers compare the research outputs of departments or faculties (e.g. Doyle & Arthurs, 1995; Im, Kim, & Kim, 1998; Klemkosky & Tuttle, 1977; Niemi, 1988; Trieschmann, Dennis, Northcraft, & Niemi, 2000). Perhaps the most widely recognized measurements are those produced by newspapers and magazines in the form of annual rankings (Business Week, U.S. News & World Report, Financial Times, Maclean's, Canadian Business). While some might argue that these rankings are less than perfect in their design and execution, there is evidence that schools take them seriously. For example, shortly after the release of the Financial Times rankings in February 2002, both Ivey and Toronto took out full-page ads in a national newspaper, one emphasizing its current ranking and the other emphasizing its trend in the rankings.
These rankings are usually generated from a weighted combination of scores on criteria such as incoming student marks, salaries of graduates, satisfaction ratings from alumni, and opinions of corporate recruiters. Recently, Business Week's ranking of MBA programs included a new criterion called "intellectual capital," defined as "scholarship and the ability to influence thinking in the business world" (Business Week, 2000). The magazine generated a short list of 12 prestigious and influential business journals, via a poll of business school deans, and credited schools for publications appearing in these journals during the last five years. Likewise, Financial Times uses research output, defined as the number of publications in 35 selected journals during the most recent three years, in its annual MBA program rankings. Although we believe that research impact is at least as important as research output, we know of no popular ranking exercise that attempts to measure this criterion. …
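The weighted-combination scheme described above can be sketched in a few lines of code. The criteria names, weights, and school scores below are purely illustrative assumptions for exposition; they do not reproduce the methodology of any actual ranking.

```python
# Illustrative sketch of a magazine-style ranking: each school gets a
# score per criterion (normalized here to a 0-100 scale), and the final
# score is a weighted sum. Weights and criteria are hypothetical.

CRITERIA_WEIGHTS = {
    "incoming_student_marks": 0.25,
    "graduate_salary": 0.30,
    "alumni_satisfaction": 0.25,
    "recruiter_opinion": 0.20,
}

def ranking_score(school_scores: dict) -> float:
    """Combine per-criterion scores into one weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * school_scores[c]
               for c in CRITERIA_WEIGHTS)

# Hypothetical data for two schools.
schools = {
    "School A": {"incoming_student_marks": 85, "graduate_salary": 70,
                 "alumni_satisfaction": 90, "recruiter_opinion": 60},
    "School B": {"incoming_student_marks": 75, "graduate_salary": 95,
                 "alumni_satisfaction": 80, "recruiter_opinion": 85},
}

# Rank schools from highest to lowest weighted score.
ranked = sorted(schools, key=lambda s: ranking_score(schools[s]),
                reverse=True)
for s in ranked:
    print(s, round(ranking_score(schools[s]), 2))
```

Note that the choice of weights drives the outcome: here School B wins on graduate salary, the most heavily weighted criterion, which is one reason such rankings are sensitive to design decisions.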