Biases in Marking Students'
Written Work: Quality?
Neil D. Fleming
We build a huge edifice on the grades and marks given by teachers in higher education, yet there is serious doubt about the validity and reliability of those marks. Recently, higher education has devoted a great deal of rhetoric, discussion and energy to the measurement of quality. While a number of initiatives in the 1990s have focused on the meaning of 'quality' and how it might be enhanced, audited or assessed, the business of marking student scripts remains the most significant quality event in the lives of students and academics. It is at this early stage that the system must have integrity; if it does not, everything built on that fragile base of a set of marks will collapse. If we as academics cannot hold our heads up and say that in marking student scripts we have been thoroughly professional, then the statements made by deans, provosts and vice-chancellors about quality are in error. Clark (1993), in a study of blind marking, states that,
Given that students are subject to the assessment process, it is a fundamental requirement that the assessment process be reliable and valid, and perhaps above all else, equitable. Notwithstanding these laudable ideals, educators should be aware that the assessment process is not as scientific as it may sound. In fact, the assessment process is subject to error and bias.
This chapter identifies the sources of bias in the marking of students' scripts, examines some of the research literature, and suggests ways of lessening the effects of bias where they exist. Where this is not possible, biases are better acknowledged than hidden, and academic staff should have opportunities to discuss the biases that will not 'go away'. The word 'marking' is used throughout this chapter synonymously with 'grading'.