There has been much debate about the usefulness of state assessment as an instrument for reforming educational practice. Opponents of measurement-driven reform assert that high-stakes assessment creates negative side effects such as dumbing down the curriculum, de-skilling teachers, pushing students out of school, and generally inciting fear and anxiety among both students and educators (Darling-Hammond & Wise, 1985; Gilman & Reynolds, 1991; Jones & Whitford, 1997; Madaus, 1988a, 1988b; Shepard, 1989). According to these opponents, such side effects outweigh any possible benefits.
Proponents of measurement-driven reform have argued that "if you test it, they will teach it" and that assessment can guide the educational system to be more productive and effective (Popham, 1987). These proponents add that the recent development of performance-based assessment offers a technology for assessing higher-order skills and deeper understanding of content, an improvement over the early, and often maligned, minimum competency tests that used only multiple-choice items (Baron & Wolf, 1996; Bracey, 1987a, 1987b; Rothman, 1995).
Past research has emphasized the negative consequences of attaching high stakes to test results. In the 1980s and early 1990s, such stakes created pressures that encouraged teachers to place unprecedented emphasis on drill-based instruction, narrowing of content, and the regurgitation of facts (Corbett & Wilson, 1991; Smith, 1991). In addition, substantial teaching time was lost to test preparation activities--i.e., learning the test formats rather than additional content. However, the tests studied by educational researchers were multiple-choice, basic-skills-oriented tests, not the newer performance-based assessments (Firestone, Mayrowetz & Fairman, 1998).
Performance-based assessments test student knowledge differently from multiple-choice, basic-skills tests. Whereas multiple-choice tests require students only to fill in an oval, performance-based assessments require students to demonstrate their knowledge by constructing a response--i.e., writing an essay or showing how they solve a mathematical problem. This form of testing assesses higher-level thinking skills, whereas the other tests memorization (Rothman, 1995).
Although the test format has changed from all multiple-choice questions to some or all constructed-response questions, the use of stakes as a way to exert significant influence on classroom learning and instructional practices has remained constant. These stakes have included incentives such as cash awards to schools or individual teachers who demonstrate high levels of student performance. They also have included consequences for schools, individual teachers, and students, including public reporting of test results, prevention of grade-to-grade promotion and high school graduation, and possible takeover of schools that continue to demonstrate low levels of student performance. These incentives and consequences are all based on one thing--the test score. But do these test results influence instructional practices? If so, in what manner? The point of this study is to answer these questions.
PURPOSE OF THE STUDY
The purpose of this study was to determine whether the public release of student results on high-stakes, state-mandated performance assessments influences instructional practices, and if so, in what manner. This study was designed to answer the following questions:
In what manner do the student results from high-stakes, state-mandated performance assessments influence instructional practices?
Additional guiding questions include the following:
1. Have teachers changed their instructional practices since the public release of high-stakes, state-mandated student performance assessment scores? …