With the nation about to embark on an ambitious program of high-stakes testing of every public school student, we should review our experience with similar testing efforts over the past few decades so that we can benefit from the lessons learned and apply them to the coming generation of tests. The first large-scale commitment to accountability for results in return for government financial assistance came in the 1960s, with the beginning of the Title I program of federal aid to schools with low-income students. The fear then was that minority students, who had long been neglected in the schools, would also be shortchanged in this program. The tests were meant to ensure that poor and minority students were receiving measurable benefits from the program. Since that time, large-scale survey tests have continued to be used, providing a good source of data for determining program effects and trends in educational achievement.
Critics of testing often argue that test scores can provide an inaccurate measure of student progress and that the growing importance of the tests has led teachers to distort the curriculum by "teaching to the test." In trying to evaluate these claims, we need to look at the types of data that are available and their reliability — in other words, at what we know and how we know it. For example, when people claim that there is curriculum distortion, they are often relying on surveys of teachers' perceptions. These data are useful but are not the best form of evidence if policymakers believe that teachers are resisting efforts to hold them accountable. More compelling evidence about the effects of testing on teaching can be obtained by looking directly for independent confirmation of student achievement under conditions of high-stakes accountability. Early studies quickly revealed that the use of low-level tests produced low-level outcomes. When students were evaluated only on simple skills, teachers did not devote time to helping them develop higher-order thinking skills. This was confirmed in the well-known A Nation at Risk report in the early 1980s and, about a decade later, in a report from the congressional Office of Technology Assessment.
In 1991, I worked with several colleagues on a validity study to investigate more specifically whether increases in test scores reflected real improvements in student achievement. In a large urban school system in a state with high-stakes accountability, random subsamples of students were given independent tests to see whether they could perform as well as they had on the familiar standardized test. The alternative, independent tests included a parallel form of the commercial standardized test used for high-stakes purposes; a different standardized test that had been used by the district in the past; and a new test constructed objective-by-objective to match the content of the high-stakes test but using different question formats. In addition to content matching, the new test was statistically equated to the high-stakes standardized test, using students in Colorado for whom both tests were equally unfamiliar. When student scores on the independent tests were compared with results on the high-stakes accountability test, there was an 8-month drop in mathematics on the alternative standardized test and a 7-month drop on the specially constructed test. In reading, there was a 3-month drop on both the alternative standardized test and the specially constructed test. Our conclusion was that "performance on a conventional high-stakes test does not generalize well to other tests for which students have not been specifically prepared."
At the same time that researchers were addressing the validity of test score gains, studies were also being done to examine the effect of high-stakes accountability pressure on curriculum and instructional practices. …