Student evaluations of the courses they have taken are used extensively across higher education institutions. At the end of a semester, a questionnaire is given to each student. Students are free to offer feedback on the usefulness of the course, the clarity of the material presented, and the coherence of instruction. They can suggest improvements and rate their overall level of satisfaction.
Although the content and design of student evaluations vary significantly across universities, certain topics are common to most. Typically, there are questions about the lecturer's communication skills, interaction with students, preparation for classes, attitude toward students, availability after class, knowledge of the subject, and clarity of presentation. Given the diversity of courses offered, the range of student demographics, and individual preferences, creating a universal questionnaire and interpreting its results fairly is clearly a serious challenge.
Student evaluations usually serve two main purposes. The first is summative evaluation, which assesses the lecturer's general teaching ability and overall effectiveness. This type of evaluation is useful to the administration, as it provides a judgment on the teacher's performance as a whole. The second type, formative evaluation, offers the opportunity to analyze the teacher's strengths and weaknesses and, consequently, to improve the quality of the course.
The two types of evaluation present different difficulties of interpretation. Students' general assessments of a teacher's performance may vary dramatically. Research shows that various factors can influence student evaluations, such as the time of day the class takes place, the student's anticipated mark, whether the course is required or elective, the class size, and the student's willingness to take the course. As far as formative evaluation is concerned, the main issue is whether students are able to identify and discriminate between different teaching methods and rate them adequately as positive or negative.
Fairness of evaluation calls for well-formulated, precise questions that leave students little room for subjectivity and bias. Very often, general open-ended questions yield responses so varied that interpreting the results becomes a daunting, if not impossible, task. A collection of articles, Techniques and Strategies for Interpreting Student Evaluations, edited by Karren G. Lewis of the University of Texas at Austin, provides useful guidelines for writing questions.
In his article, Writing Teaching Assessment Questions for Precision and Reflection, William L. Rando suggests that questions should be designed according to the specific subject, the teaching method, and the aims of the course. For example, questions about presentation clarity would be of little use for a teacher who relies on class discussion as a method. Rando defines four types of questions: self-report questions, which ask students to evaluate aspects of their learning experience ("In what ways does the class discussion help you achieve your goals for this course?"); direct assessments of student learning ("Today I talked about two theories of decision making. Think about a decision you've made recently and describe it in terms of each theory"); open-ended questions; and closed-ended questions. These types should be combined within a questionnaire so as to cover all aspects of teaching: open-ended self-report questions reveal students' experiences; closed-ended self-report questions identify specific aspects of the learning process; open-ended direct assessment questions show what students are learning in general; and closed-ended direct assessment questions pinpoint specific areas of understanding or confusion.
When interpreting the results, it is important to view them not as absolutes but in terms of peer comparison and classroom context. Furthermore, research indicates that student evaluations of teachers' behavior are not inherently synonymous with student learning; rather, they indicate student satisfaction. In an article published in the Journal of Social Work Education in 2003, Terry Wolfer and Miriam McNown Johnson argue that student evaluation is actually a measure of client satisfaction rather than objective feedback on teaching practices. For example, a survey carried out at Ohio State University shows that students give higher evaluations to lecturers who give them higher grades. Statistically, they also tend to rate women teachers and foreign instructors lower. Critics of student evaluations do not disapprove of students giving feedback in general, but because many faculty administrators use the results as a basis for decisions concerning promotion, retention, and tenure, many recommend more cautious use of such instruments.