Higher Learning Research Communications

Professor Gender, Age, and "Hotness" in Influencing College Students' Generation and Interpretation of Professor Ratings

Introduction

University instructors have long been evaluated by their own students. In recent decades, opportunities for college students to express their opinions of their professors have expanded beyond formal university-administered group evaluations to posts on informal websites like RateMyProfessors.com (RMP; Johnson & Crews, 2013). RMP is a website on which university students may post anonymous evaluations of their professors. Since its inception in 1999, the service has become immensely popular throughout the world; users have created over 17 million ratings for 1.6 million professors in Canada, the United Kingdom, and the United States (RMP, 2016a). Sites similar to RMP operate in other countries; for example, Rate My Teachers (ratemyteachers.com) serves the Republic of Ireland, Australia, and New Zealand. In addition to writing narrative comments about their professors, students use RMP to rate their professors on helpfulness, instructional clarity, and course easiness on a scale of 1 (low) to 5 (high). Scores from 3.5 to 5 are considered "good," scores from 2.5 to 3.4 are considered "average," and scores from 1 to 2.4 are considered "poor" (RMP, 2016b). Helpfulness and instructional clarity ratings are averaged to produce an overall quality score (RMP, 2016b). Students also rate professors as "hot" or "not hot"; a chili pepper icon appears on the RMP profile of professors whose aggregate hot ratings exceed their not-hot ratings (RMP, 2016c). While RMP does not explicitly state that the chili pepper icon represents a professor's physical attractiveness, many assume this to be true (Landry, Kurkul, & Poirier, 2010).
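The scoring rules described above reduce to simple arithmetic, and a minimal sketch may make them concrete. The Python below illustrates the averaging, the published score bands, and the hot/not-hot comparison; the function names and signatures are ours for illustration, not RMP's implementation, and the handling of a tie in hot votes is an assumption.

```python
def overall_quality(helpfulness: float, clarity: float) -> float:
    """Overall quality is the average of helpfulness and clarity (RMP, 2016b)."""
    return (helpfulness + clarity) / 2


def quality_band(score: float) -> str:
    """Map a 1-5 score to RMP's published bands (RMP, 2016b)."""
    if score >= 3.5:
        return "good"      # 3.5 to 5
    elif score >= 2.5:
        return "average"   # 2.5 to 3.4
    else:
        return "poor"      # 1 to 2.4


def has_chili_pepper(hot_votes: int, not_hot_votes: int) -> bool:
    """The chili pepper icon appears when aggregate hot ratings exceed
    not-hot ratings (RMP, 2016c). Assumption: a tie yields no icon."""
    return hot_votes > not_hot_votes


# Example: helpfulness 4.2 and clarity 3.0 average to 3.6, a "good" rating.
print(quality_band(overall_quality(4.2, 3.0)))  # -> "good"
```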

The impetus for RMP and similar websites is to provide students with a forum to exchange information about their professors and courses (RMP, 2016a). Given that the results of university-administered teaching evaluations are typically inaccessible to students (Kindred & Mohammed, 2005), RMP offers students a publicly available platform for sharing course and professor information. Because RMP provides potential students with otherwise inaccessible ratings from former students (Kindred & Mohammed, 2005), those considering a professor's course may use the ratings to inform their enrollment choices in the hope of receiving a higher quality college education (Davison & Price, 2009; Johnson & Crews, 2013). Not surprisingly, concerns have been voiced about potential bias in online ratings, and many professors doubt the utility of sites like RMP for students truly seeking a higher quality education (Boswell, 2016; Davison & Price, 2009).

Professors cite several sources of concern regarding the validity of RMP ratings (e.g., Davison & Price, 2009; Hartman & Hunt, 2013; Sonntag, Bassett, & Snyder, 2009). First, there is no guarantee that ratings have actually been posted by former students of the professor (Johnson & Crews, 2013; Montell, 2006; Otto, Sanford, & Ross, 2008; Timmerman, 2008). For example, the first and second authors have each been rated at least once on RMP for classes they have never taught, possibly because students misremembered the names of their actual instructors. While such instances may seem amusing and fairly harmless, more alarming cases have been documented in which negative postings were made by rivals or disgruntled colleagues rather than by students (see Carnevale, 2006). Second, even when postings are crafted by actual students, concerns remain about their validity as reflections of teaching quality or as windows into what potential students may expect from taking a class with a particular professor (Legg & Wilson, 2012). For example, students who self-select to post on RMP may harbor deeply felt or extreme views and may not represent a professor's general student body (Boswell, 2015; Legg & Wilson, 2012). Further concerns exist about possible biasing factors that shape how online professor ratings are both created and interpreted. …
