Academic journal article Academy of Educational Leadership Journal

The Misuse of Student Evaluations of Teaching: Implications, Suggestions and Alternatives

Article excerpt

INTRODUCTION

A 2005 essay by Fleet and Peterson highlighted an issue that has troubled academics for more than 80 years. Its premise was that while teaching is as important an element of academics' responsibilities as research, no universal platform for evaluating that activity exists. While Fleet and Peterson acknowledge that student evaluations of teaching (SETs) are currently the primary means of evaluating teaching performance, there is little consensus that they should be accepted unquestioningly as the most appropriate means of doing so. In fact, many conclude that they are not (e.g., Galbraith et al., 2012). Some have even argued that the way universities use SETs is changing the focus of college teaching, casting the faculty member as a salesperson and the student as a customer.

d'Apollonia and Abrami (1997) point out that the vast majority of post-secondary schools use student evaluations as one of the measures of teaching effectiveness, and often the most important one. A Carnegie Foundation study determined that approximately 98 percent of universities administered some form of SETs (Magner, 1997). Comm and Mathaisel (1998) reported that business schools exceed even that level, with over 99 percent using SETs. Indeed, Anderson and Shao (2002) found that business school administrators considered SETs the second most important component in evaluating teaching performance, eclipsed only by currency in the field. This usage has been attributed to two major forces: a significant increase in the accountability of public institutions to state governments driven by public pressure, and the increased emphasis of accrediting bodies such as The Association to Advance Collegiate Schools of Business (Simpson & Siguaw, 2001; Ballantyne, Borthwick & Packer, 2000).

Historically, when using the results of SETs, academic units have treated the ratings as following either a normal distribution or a bimodal distribution. Performance was seen as lying on a continuum, with individual performance falling either below or above a standard, typically the mean. However, analysis of five years of SETs for a business school at a southwestern university revealed that a normal distribution did not exist.
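As an illustration only (the data below are hypothetical, not from the study described above), a minimal sketch of how a departure from normality can be flagged: SET means tend to cluster near the top of the rating scale, which produces negative (left) skew rather than the symmetric bell shape that comparisons against a mean implicitly assume.

```python
import statistics

def sample_skewness(scores):
    """Fisher-Pearson skewness: 0 for symmetric data, negative when
    values cluster high with a long tail of low ratings."""
    n = len(scores)
    mean = statistics.fmean(scores)
    m2 = sum((x - mean) ** 2 for x in scores) / n   # second central moment
    m3 = sum((x - mean) ** 3 for x in scores) / n   # third central moment
    return m3 / m2 ** 1.5

# Hypothetical instructor-level SET means on a 5-point scale:
# most instructors rate near the top, a few rate much lower.
set_scores = [4.8, 4.7, 4.6, 4.6, 4.5, 4.5, 4.4, 4.3, 4.2, 4.0, 3.6, 2.9]

skew = sample_skewness(set_scores)
print(f"sample skewness = {skew:.2f}")  # negative => left-skewed, not normal
```

Under such a shape, labeling everyone below the mean as "below standard" penalizes instructors whose ratings are only marginally lower than a tightly clustered majority.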

BACKGROUND

More than 2,000 articles have been published on the use of SETs in assessing teaching effectiveness. Even after this plethora of research, no consensus exists within the academic community regarding their use. Regardless of this ongoing debate, there is little likelihood of SETs being replaced by another assessment tool. In fact, over 18 percent of schools (Comm & Mathaisel, 1998) limit their evaluation of teaching to student evaluations alone. Given the likelihood of continued (and possibly increased) emphasis on their usage, it is critical to explore improved administration of these instruments. It is also important that academics and administrators understand as much as possible about the criticisms, as well as the positive attributes, often applied to SETs.

SETs Statistical Analysis

Statistical reports of SETs have been both positive and negative. Marsh (1987) stated "student ratings are clearly multi-dimensional, quite reliable, reasonably valid, relatively uncontaminated by many variables often seen as sources of potential bias, and are seen as useful by students, faculty and administrators" (p. 369). This lack of contamination was further confirmed by Lersch and Greek (2001), who examined instructor attributes and found no statistical evidence of specific attributes affecting the results of SETs. And some research suggests SETs are reasonably reliable and stable when a sufficient sample size is used (Centra, 1979; Overall & Marsh, 1980; Sixbury & Cashin, 1995).

Ross (2005) disputes this claim and contends that there have been few if any statistical procedures used to verify reliability or validity of SETs. …
