Academic journal article Education

Teacher Leaders at Work: Analyzing Standardized Achievement Data to Improve Instruction

Article excerpt

Introduction and Literature Review

A half century ago, standardized achievement test scores were used primarily for (a) informing teachers and parents about students' achievement relative to their peers, (b) helping place students in appropriate programs, and (c) justifying the allocation of supplemental resources. However, advances in the technology of standardized testing, combined with the popular belief that testing improves student achievement, have led to uses of standardized test results that were not originally intended. Concerned educators have warned that some of these uses, e.g., evaluating schools and teachers or requiring passing scores for grade promotion, are invalid and can have a negative impact on student learning (Popham, 2001a, 2001b).

Consonant with mounting public pressure on schools to raise student achievement has been the growing use of standardized achievement test scores to inform instruction and curriculum. For example, test makers (e.g., Hoover et al., 2003) suggest that by comparing student, classroom, or building scores with local and national norms, teachers can identify individual or group strengths and weaknesses for the purpose of adjusting the curriculum. The efficacy of this approach has been disputed by Popham (2001a, 2001b), who argues (a) that the descriptions of knowledge and skills on standardized tests are not clear enough to provide a focus for improving instruction and (b) that classroom assessments are the best source of data for informing instruction.

Despite these caveats, there has been a proliferation of articles and texts addressing the analysis of student achievement data for the purpose of improving learning (e.g., Anderson et al., 2004; Killion, 2002; Johnson, 2002; Streifer, 2002; Bernhardt, 1998). All of these authors offer alternatives to the approaches historically associated with educational research, e.g., random samples and control groups. The first and most common approach is analyzing trends, i.e., determining whether an instructional intervention has positively influenced student achievement over time (Streifer, 2002; Johnson, 2002; Bernhardt, 1998). A second approach is to disaggregate the data, i.e., group achievement scores by the students' ethnicity, gender, SES, or performance and then make comparisons. For example, a quartile analysis can compare subgroups by examining the percentages of students falling into the bottom, the two intermediate, and the top quartiles. This analysis can compare the progress of different student subgroups and could indicate whether instructional practices favor a particular group or groups of students (Johnson, 2002; Streifer, 2002). A third approach is to examine the relationship between student achievement scores and other indicators of student performance, e.g., grades, attendance, or discipline interventions (Johnson, 2002; Streifer, 2002).
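To make the second approach concrete, the quartile analysis described above can be sketched in a few lines of Python. The cut scores and student records below are hypothetical illustrations, not data from the article: in practice the cut points would come from the test publisher's national norms, and the records from a district's score file.

```python
# A minimal sketch of a quartile analysis for disaggregated achievement data.
# Each student is bucketed into a quartile using three cut scores (Q1, Q2, Q3),
# then the percentage of each subgroup falling in each quartile is reported
# so subgroups can be compared.

from collections import defaultdict

def quartile_of(score, cuts):
    """Return the quartile (1-4) for a score, given cut points (q1, q2, q3)."""
    for quartile, cut in enumerate(cuts, start=1):
        if score <= cut:
            return quartile
    return 4

def quartile_breakdown(records, cuts):
    """records: iterable of (subgroup, score) pairs.
    Returns {subgroup: [pct in Q1, pct in Q2, pct in Q3, pct in Q4]}."""
    counts = defaultdict(lambda: [0, 0, 0, 0])
    for group, score in records:
        counts[group][quartile_of(score, cuts) - 1] += 1
    return {group: [round(100 * c / sum(q), 1) for c in q]
            for group, q in counts.items()}

# Hypothetical scores for two subgroups, with hypothetical norm cut scores.
records = [("A", 35), ("A", 60), ("A", 82),
           ("B", 20), ("B", 48), ("B", 51)]
cuts = (25, 50, 75)
print(quartile_breakdown(records, cuts))
```

A real analysis would use the publisher's normed percentile cuts and far larger samples; the point of the sketch is only that once scores are bucketed, a table of per-subgroup quartile percentages makes differential progress visible at a glance.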

These school-based approaches to data analysis have not been widely taught in university-based research courses, in part because faculty have been trained in research methods more commonly employed in a university context. Unfortunately, this can mean the data analysis skills needed to lead a district through the school improvement process may not be included in graduate programs. The emphasis is often on research designs and analytical procedures that are not relevant to the environment of the practitioner:

    ... In all too many instances, statistics are taught in a theoretically rarefied atmosphere replete with hard-to-understand formulas and too few examples relevant to the daily life of education practitioners (Bracey, 1997, p. 2).

Shifting to more practice-based instruction has been increasingly recommended by national organizations, advisory groups, task forces, and accreditation agencies (Shakeshaft, 1999; Murphy & Forsyth, 1999). …
