Journal of Research Administration

Evaluating Research Administration: Methods and Utility

Introduction

Metrics are "a means of representing a quantitative or qualitative measurable [emphasis in original] aspect of an issue in a condensed form" (Horvath, 2003, as cited in Kreimeyer & Lindemann, 2001, p. 75). Consequently, performance metrics represent "[m]easures used to evaluate and improve the efficiency and effectiveness of business process" (Cole, 2010, p. 14). Examples of quantitative metrics used in the field of research administration include success rate (the proportion of submitted proposals accepted for funding), dollar amount of funding applied for and received, and number of applications submitted. Customer feedback on research administration services is an example of a qualitative metric. The benefits of developing and implementing metrics for research administration offices include defining and monitoring business processes and their impact, defining responsibilities, managing expectations, improving decision making and prioritization, motivating teams, and evaluating staff performance (Haines, 2012). These benefits can be condensed into three areas: changing behavior, driving performance, and supporting investments in research administration (Taylor, Lee, & Smith, 2014, slide 5).

Current use of metrics in evaluating research administration

Analyzing metrics in relation to sponsored funding and measuring research productivity is a well-established business practice among academic institutions with a research mission or focus. The University of Minnesota, for example, tracks data related to expenditures; publications and indicators of faculty reputations; proposals and grant awards; invitations and collaborations; indirect cost recovery; student engagement in research; space allocations; and other "common research metrics" (University of Minnesota, 2008, p. 10). Some institutions have incorporated metrics into their daily operations. The University of Iowa posts weekly "Homepage Metrics" on its Division of Sponsored Programs' website (http://dsp.research.uiowa.edu). These metrics consist of the numbers of routing forms received, proposals submitted, contracts completed, non-monetary agreements and subawards executed, and awards processed, reported both for the week and for the fiscal year to date.

Those institutions that do not already use metrics to guide and evaluate their work are now outside the norm. A recent informal survey of research administrators for the Society of Research Administrators (SRA) International's electronic newsletter, Catalyst, found that most research administration offices (78% of those who responded) conduct some kind of evaluation of their services (Davis-Hamilton, 2014). The most commonly reported evaluation methods include collecting informal feedback from customers, examining existing management reports and data, and comparing current internal operational data with those from prior periods.

Pitfalls of current metrics used to evaluate research administration

While the metrics discussed above can be useful and informative assessment tools, some scholars argue that metrics based on financial or other quantitative measures "do not sufficiently capture the quality of the level of service demands" placed on research administration (Cole, 2010, p. viii). By "reducing the complexity of the representation of an issue," quantitative metrics "tend to oversimplify or omit dependencies of an issue, thus making the representation incomplete" (Kaplan & Norton, 1992, as cited in Kreimeyer & Lindemann, 2001, p. 87).

Furthermore, the external environment influences traditional quantitative metrics, like success rates, making it difficult to evaluate the merit of the activities internal to the institution. This can be illustrated by looking at success rates from the perspective of the PESTEL framework, a tool used to identify the external opportunities and threats that may impact an institution's operation. The PESTEL framework organizes these external "forces" into six major categories: Political, Economic, Socio-cultural, Technological, Ecological, and Legal (Rothaermel, 2013, pp. …
