New Decision Tool to Evaluate Award Selection Process (Applied Research)

Journal of Research Administration

Article excerpt

Introduction

Established by the Government of Alberta in 1979, the Alberta Heritage Foundation for Medical Research (AHFMR) supports health research at Alberta universities and other research-related institutions. The foundation supports nearly 230 faculty-level researchers recruited from Alberta and around the world, and approximately 500 researchers-in-training (i.e., summer students, graduate students, and post-doctoral fellows, collectively known as trainees). The AHFMR's gross expenditure for fiscal year (FY) 2000-2001 was approximately $53 million, of which $6.7 million (12.6%) was committed to the funding of trainees. (1) This article describes the foundation's initiative to improve the peer review process for its competitive training awards.

Peer review is frequently used for both ex ante and ex post evaluation of the quality of the scientific enterprise (Geisler, 2000; Kostoff, 1992; Luukkonen-Gronow, 1987; United States General Accounting Office, 1997). Ex ante evaluation assesses quality in advance of performance, as in the case of applications for research funding. Conversely, ex post evaluation assesses quality retrospectively, as in the case of papers submitted to scientific journals. The case described here entails ex ante review of applications for funding, to anticipate the future performance of research trainees.

The AHFMR's original review process for training award applications considered three general criteria: (a) the quality of the candidate, (b) the appropriateness of the proposed research environment, and (c) the merit of the proposed research project. Following a multiple-step committee process, applications were rated on a scale of 0 to 5, the single score representing an aggregation of performance against all criteria; a score of 0 denoted an unacceptable application, whereas a score of 5 denoted an outstanding one. The foundation used this approach to review applications for its training awards until the end of FY2000, when it piloted the new process described here.
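To make the aggregation concrete, the sketch below shows one way per-criterion ratings could be combined into a single 0-to-5 score. The article does not specify the foundation's actual formula; the criterion weights and the aggregate_score function here are illustrative assumptions only.

```python
# Hypothetical sketch of aggregating per-criterion ratings into a single
# 0-5 score. The article does not publish the AHFMR's aggregation formula;
# the weights below are illustrative assumptions.

CRITERIA_WEIGHTS = {
    "candidate_quality": 0.5,       # assumed weight
    "research_environment": 0.25,   # assumed weight
    "project_merit": 0.25,          # assumed weight
}

def aggregate_score(ratings: dict[str, float]) -> float:
    """Combine per-criterion ratings (each 0-5) into one 0-5 score."""
    if set(ratings) != set(CRITERIA_WEIGHTS):
        raise ValueError("ratings must cover exactly the three criteria")
    for value in ratings.values():
        if not 0 <= value <= 5:
            raise ValueError("each rating must fall on the 0-5 scale")
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# Example: a strong candidate in a good environment with a modest project.
print(aggregate_score({
    "candidate_quality": 4.5,
    "research_environment": 4.0,
    "project_merit": 3.0,
}))  # -> 4.0
```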

Geisler (2000) suggested that peer review should be well-defined, rational, fair, timely, cost-effective, anonymous, and responsive. While the AHFMR's original review process for its training awards reflected most of these general characteristics, several specific issues gave the foundation an incentive to improve the process.

First, the number of proposals submitted was increasing, and they needed to be evaluated more efficiently. In FY1997, the AHFMR received 182 applications for full-time studentships, compared with 276 in FY2000 and 307 in FY2001. More reviewers were therefore needed, most of whom reported having less and less time to devote to such activities. The increase in proposals also meant that committees had to either extend the duration of their meetings or spend less time reviewing each application, neither of which was considered desirable.

This issue was compounded by increased turnover on the foundation's review committees. This turnover may have reflected reviewer fatigue, a recent and widespread phenomenon in the research funding sector resulting from a proliferation of requests for individuals to sit on review panels (Brzustowski, 2000a; Brzustowski, 2000b; Cunningham, Boden, Glynn, & Hills, 2001; Smith, 2001). There was a sense that turnover reduced the consistency with which criteria were applied within and between competitions, and increased the administrative burden of recruiting and training committee members.

Two trends relating to the scores awarded to applications also influenced the AHFMR's decision to redesign its review process. In theory, the overall score awarded to each application represented an integration of all parts of the application; in practice, however, each reviewer's interpretation resulted in variable weighting of the different criteria. …
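To illustrate why variable weighting matters, the following minimal sketch (with hypothetical reviewers and weights, not data from the article) shows how two reviewers who agree on every per-criterion rating can still report different overall scores:

```python
# Hypothetical illustration: identical per-criterion ratings, different
# implicit weights, divergent overall scores. All values are assumptions.

ratings = {"candidate": 5.0, "environment": 3.0, "project": 2.0}

reviewer_weights = {
    "Reviewer A": {"candidate": 0.6, "environment": 0.2, "project": 0.2},
    "Reviewer B": {"candidate": 0.2, "environment": 0.2, "project": 0.6},
}

for reviewer, weights in reviewer_weights.items():
    overall = sum(weights[c] * ratings[c] for c in ratings)
    print(f"{reviewer}: {overall:.1f}")
# Reviewer A: 4.0  (implicitly weights the candidate heavily)
# Reviewer B: 2.8  (implicitly weights the project heavily)
```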
