Academic journal article Australian Journal of Education

Uses and Misuses of Student Opinion Surveys in Eight Australian Universities


Article excerpt

Student opinion surveys (SOSs) are commonly used in universities to measure student perceptions of teaching performance. Ostensibly their prime purpose is to improve teaching quality. This paper critically examines SOSs in eight Australian universities where survey design and analysis were examined and compared with literature recommendations. We find that current SOSs are neither designed nor structured according to sound questionnaire technique and that, as part of a teaching evaluation system, they are seriously flawed. Deficiencies include: their use as the sole measure of teaching effectiveness, the tendency for universities to rely on unmoderated student opinion without tempering the results with contextual factors, and a lack of testing for reliability and validity that renders the data of unknown precision. We argue that, at present, SOSs expose teachers to unreliable, invalid opinions that influence teacher career advancement and job security.


Increased competition between universities in attracting student fee revenue, greater requirements for accountability, and reduced government funding have led to the prolific use of student opinion surveys (SOSs) in Australian tertiary institutions. In these surveys, students 'evaluate' teachers and course units (referred to as subjects) by completing survey instruments. The surveys contain questions about the teacher's perceived performance and the perceived worthiness of aspects of the subject. In Australia, the surveys became widely used in universities in synchrony with total quality management initiatives in the late 1980s. A decade later, total quality management receives diminished emphasis whereas SOSs are firmly entrenched; some university students are surveyed as many as eight times each year.

The surveys provide a convenient vehicle for staff appraisal, often making direct input to hiring, promotion and tenure decisions. Lecturers might take careful note of survey results and modify their teaching accordingly, perhaps in both style and content, to improve their 'ratings'. Such modification could be desirable if teaching is indeed improved, but undesirable if pleasing students comes at the expense of objectives such as learning.

For professional purposes, surveys that have not been checked for reliability or validity must be regarded as unusable. For example, Marsh (1987) advises that 'criterion measures that lack reliability or validity should not be used as indicators of effective teaching for research, policy formation, feedback to faculty or administrative decision making' (p. 286). People who design, collect, analyse, report and use student opinion surveys should be aware of the validity and reliability of the data.

The aim of this study is to compare present survey practices of Australian universities with the current state of knowledge available in the academic literature. We commence with considerations of how one can be sure that the survey data are meaningful, and proceed with a brief summary of whether or not SOSs might be beneficial.

Survey reliability and validity

If an instrument is reliable and valid, the observed score on some aspect X (denoted X_0) will equal the underlying, or 'true', score (denoted X_T). However, random (X_R) and systematic (X_S) errors distort the measurement:

X_0 = X_T + X_R + X_S

It is the researcher's job to reduce or eliminate error, and to be aware of the likely magnitude of the error. Both random and systematic errors must be small in comparison with X_T if meaningful observations are to be obtained (Churchill, 1979). Further, the absence of one source of error does not guarantee the absence of the other.
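The decomposition above can be illustrated with a small simulation (not from the article; the true score, bias and noise level are invented for illustration). Averaging many observations shrinks the zero-mean random component X_R, but the systematic component X_S survives averaging, which is why the absence of random error says nothing about systematic error:

```python
import random

def observed_scores(x_true, bias, noise_sd, n):
    """Simulate n observed scores X_0 = X_T + X_R + X_S, where X_R is
    zero-mean random error and X_S is a constant systematic bias."""
    return [x_true + random.gauss(0, noise_sd) + bias for _ in range(n)]

random.seed(1)
x_true = 4.0  # hypothetical 'true' teaching score on a 5-point scale
scores = observed_scores(x_true, bias=0.5, noise_sd=0.8, n=10_000)

mean_score = sum(scores) / len(scores)
# The random component averages towards zero, but the mean still
# sits roughly 0.5 above the true score: the systematic error remains.
print(round(mean_score - x_true, 2))
```

Averaging over many respondents is exactly what SOS summary statistics do, so a survey with a built-in bias will report that bias no matter how large the class is.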

X_R results from spurious uncontrolled factors, so that the same instrument administered under nearly identical circumstances yields different results. …
