The media has recently exposed grade inflation as a concern for higher education in North America. Grade inflation may be due to consumerism by universities that now compete for students. Keeping students happy (and paying) may have been emphasized more than learning. We review the literature on faculty evaluation and present a model that incorporates students' individual differences and grade inflation as sources of bias in teaching evaluations. To improve teaching effectiveness, and to avoid consumerism in higher education, faculty evaluations must begin to focus on students and on the reciprocal role of grade inflation in teaching evaluation.
Today, faculty are being held accountable for how well they serve the U.S. student population, and it has become common practice in universities and colleges for students to "grade" the professors who grade them. Grade inflation has become an issue in higher education; students' grades have been steadily increasing since the 1960s (Astin, 1998). In June 2001, a record 91 percent of Harvard seniors graduated with honors, and 48.5 percent of grades were A's and A-minuses (Boston Globe, 2001). Grade inflation has come under scrutiny, and there is a need to address its accelerating pace (Berube, 2004). Several studies have linked grade inflation with students' ratings of faculty (Greenwald, 1997; Stumpf & Freedman, 1979). According to Pfeffer and Fong (2002): "Grade inflation is pervasive in American higher education, and business schools are no exception" (p. 83).
Students' ratings of management faculty now serve dual purposes. First, they provide faculty with feedback on teaching effectiveness. Second, they are used in faculty reappointment, promotion, and pay increase decisions (Jackson, Teal, Raines, Nansel, Force, & Burdsal, 1999). Yet Scriven (1995) identified several construct validity problems with student ratings of instruction, one of them being student consumerism. Consumerism introduces bias from information that is irrelevant to teaching competency but important to students, such as textbook cost, attendance policy, and the amount of homework. Because of the impact on tenure and career, faculty may try to influence student evaluations, a phenomenon referred to as "marketing education," or even seduction (Simpson & Siguaw, 2000). Some have become alienated from the process of teaching evaluation entirely. Professors who have become hostile to evaluations (Davis, 1995) often do not use the feedback they receive in constructive ways (l'Hommedieu, Menges & Brinko, 1997).
Faculty Evaluations as Performance Appraisals
Since student ratings of faculty teaching effectiveness are used as one component of faculty evaluation, it seems reasonable to consider these instruments as performance ratings. As such, they are subject to a number of possible biases, as has been shown in the literature on rating accuracy in Industrial and Organizational Psychology (Campbell, 1990; Murphy & Cleveland, 1995). A number of studies have indicated problems with the reliability of performance ratings (Christensen, 1974; Wohlers & London, 1989). As noted by Viswesvaran, Ones & Schmidt (1996), "... for a measure to have any research or administrative use, it must have some reliability. Low reliability results in the systematic reduction in the magnitude of observed relationships ..." (p. 557). The accuracy of performance evaluation ratings has been challenged as well (Murphy, 1991). This research has led to recommendations for improving rating accuracy. For example, Murphy, Garcia, Kerkar, Martin and Balzer (1982) reported that the accuracy of performance ratings improves when ratings are done more frequently. However, faculty evaluations are, in most cases, administered only at the end of the course, leaving greater possibility for error. Other research has reported problems due to individual differences such as leniency or stringency (Bernardin, 1987; Borman, 1979; Borman & Hallam, 1991). …