Academic journal article: International Education Studies

How Unstable Are 'School Effects' Assessed by a Value-Added Technique?

Article excerpt

Abstract

This paper reconsiders the widespread use of value-added (VA) approaches to estimate school 'effects', and shows the results to be very unstable over time. The paper uses as an example the contextualised value-added scores of all secondary schools in England. The study asks how many schools with at least 99% of their pupils included in the VA calculations, and with data for all years, had VA measures that were clearly positive for five years. The answer is: none. Whatever it is that VA is measuring, if it is measuring anything at all, it is not a consistent characteristic of schools. Finding no schools with five successive years of positive VA means that parents could not use it as a way of judging how well their primary-age children would do at age 16 in their future secondary school. Contextualised value-added (CVA) is used here for the calculations because there are good data covering five years, allowing judgement of its consistency as a purported school characteristic. However, what is true of CVA is almost certainly true of VA approaches more generally, whether for schools, colleges, departments or individual teachers, in England and everywhere else. Until their problems have been resolved by further development to handle missing and erroneous data, value-added models should not be used in practice. Commentators, policy-makers, educators and families need to be warned. If value-added scores are as meaningless as they appear to be, there is a serious ethical issue wherever they have been or continue to be used to reward and punish schools or to make policy decisions.

Keywords: value-added, school effectiveness, England, secondary schools

1. Introduction

1.1 Why Value-added is Used

Governments worldwide, education leaders, teachers and families would all like to be able to judge the performance of schools and teachers in terms of pupil attainment (Barber and Mourshed 2007). They want to know how much schools and teachers contribute to pupil attainment, how well schools overcome differences in the socio-economic backgrounds of their intakes, and whether some schools are more effective than others with equivalent pupils. It is clear that the performance of schools and teachers cannot be accurately assessed in terms of the raw-score attainment of their pupils, since this may reflect merely the quality of the intake to the school. For example, grammar schools in England select those pupils at age 11 who are most likely to do well in formal qualifications at ages 14, 16 and beyond. If at age 16 the pupils in a grammar school get better qualifications than pupils in a nearby school that takes only those pupils not accepted for the grammar school, then this is evidence that the grammar school selected its pupils well. It is not evidence, of course, that the grammar school itself and its teachers performed better than those in the other school. It is very possible that if the pupils had somehow been swapped between the schools at the start, while the schools and teachers remained the same, then the pupils now in the grammar school would have done worse. This is not to say that particular schools do not make a difference, but that a great deal of the difference between school outcomes is directly attributable to pupil intakes. And what is true of the pupil intakes to grammar and secondary-modern schools is also possible for all other schools. Where a school is sited, its specialism, organisation and precise methods of allocating places to pupils mean that there is considerable variation in school intakes, in terms of prior pupil learning and indicators of possible disadvantage (Gorard and Cheng 2011).

The value-added approach to judging school performance, which has grown in popularity and importance since the 1980s, was therefore a good idea (Rutter et al. 1979). Here, schools are judged by the progress that their pupils make during attendance at the school, not on their absolute levels of attainment (e. …
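
To make the excerpted idea concrete, the following is a minimal sketch of a simple, non-contextualised value-added calculation; the notation is illustrative only and is not the authors' CVA specification. Assume y_{ij} denotes the age-16 outcome of pupil i in school j and x_{ij} the same pupil's prior attainment at age 11. A single regression is fitted across all pupils, and each school's VA score is then the mean residual of its own pupils:

\hat{y}_{ij} = \hat{\alpha} + \hat{\beta}\, x_{ij}

\mathrm{VA}_j = \frac{1}{n_j} \sum_{i=1}^{n_j} \left( y_{ij} - \hat{y}_{ij} \right)

A positive \mathrm{VA}_j suggests that school j's pupils attained more at age 16, on average, than predicted from their prior attainment alone; contextualised VA (CVA) typically extends the prediction with pupil background variables such as sex, ethnicity and free-school-meal eligibility.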
