Public Administration Quarterly

The Measurement, Evaluation, and Publication of Performance in Higher Education: An Analysis of the CHE Research Ranking of Business Schools in Germany from an Accounting Perspective

RESEARCH ISSUE AND LITERATURE GAP

As university managements implement accounting methods and principles, the possibilities and limitations of measuring university performance are frequently discussed. Whereas costs, e.g. in terms of staff assignment or expenses, can be recorded fairly easily, assessing a university's benefits is more challenging because essentially no exogenous market prices for such benefits exist. In practice, performance measurement is usually carried out by external institutions rather than by the universities themselves. These institutions acquire and aggregate the necessary data under their own name before publishing their findings. Following a specific method of evaluation, they aggregate predefined performance criteria to create ratings or rankings of whole universities or of individual departments.

There are numerous rating and ranking systems for evaluating and comparing universities, either as a whole or by academic discipline. For instance, the US News and World Report publishes rankings that assess US colleges and primarily focus on the quality of education, using indicators based on facts and surveys. The National Research Council in the USA, in contrast, evaluates universities' doctoral programs by academic discipline, likewise using various fact- and survey-based indicators. Worldwide attention is paid to the QS World University Ranking and the Academic Ranking of World Universities (Shanghai Ranking), the former evaluating diverse aspects of performance and the latter primarily addressing research performance.

The stated objective of the evaluating institutions is usually to create transparency about a university's performance and thereby ensure comparability. These institutions perceive themselves as pure information intermediaries for interested stakeholders, including prospective students and government bodies as the main financiers. Even though it is not their primary objective, these evaluations do create pressure on the analyzed universities to perform well according to the criteria used and thus to achieve favorable assessments in subsequent evaluations. Hence, universities and/or departments compete to achieve their objectives. In this context, such university assessments or rankings become vitally important for public recognition and, ultimately, a university's reputation. A high university-specific or department-specific reputation is certainly useful, especially when the competitive situation makes it necessary to attract qualified students and outstanding researchers, and even to retain governmental financial support in an era of decreasing state contributions.

Performance and improvement incentives emerging from evaluations are not a problem per se. However, this presupposes the disclosure of the respective performance criteria and of the procedure used to acquire and aggregate the data into the overall evaluation. It must also be ensured that the evaluated universities or departments form peer groups of rivals in comparable competitive situations and are therefore comparable with regard to their services and objectives. Such clarity, however, is lacking in Germany's higher education sector.

Due to this lack of clarity, such rankings are the subject of controversy within the academic community in general and among the evaluated scientists in particular. Key performance indicator-based measurements of academic research performance especially attract critique (e.g. Frey, 2007; Jarwal, Brion, & King, 2009; Kieser, 2012). In addition, several studies compare the quality of university ranking procedures (e.g. Tavenas, 2004; Usher & Savino, 2006; Stolz, Hendel, & Horn, 2010). However, since these studies lack a theoretical foundation, they have a rather practical character, resembling benchmarking exercises or product tests. …
