Academic journal article
By Tashlik, Phyllis
Phi Delta Kappan , Vol. 91, No. 6
"The sharp separation often seen in the literature between qualitative and quantitative methods is a spurious one."
"The conventions of standardized testing have become so widely accepted that many evaluators cannot think of assessment based on project-based individualized education."
--Robert E. Stake, director, Center for Instructional Research and Curriculum Evaluation, University of Illinois, Urbana-Champaign
Given the narrowed focus of the current conversation about assessment, it's hard to conceive how that conversation could possibly change. From Secretary of Education Arne Duncan to state commissioners, chancellors, mayors, and the press, the language of quantitative measures has dominated the "conversation." "Assessment" has come to mean only one thing: the numerical results of standardized tests.
Social scientists have long cited the critical shortcomings of quantitative indicators. As Donald T. Campbell wrote, in what has now become a truism for those involved with public policy, "The more any quantitative social indicator is used for social decision making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor" (1976).
Despite Campbell's warning and the varieties of corrupting influences that have been documented across the states, more and more school systems rely on an ever-expanding series of standardized exams to determine an ever-widening range of education decisions: promotion and graduation requirements for students, bonuses for principals, tenure and salaries for teachers, ratings for schools, and school closings. We run the risk of "valuing what we measure [rather than] measuring what we value" (Biesta 2009). The possibilities for other options--performance-based, qualitative, or project-based, whatever they may be called--have become severely restricted if not totally eliminated.
But one group of New York public high schools has managed to defy the odds. The staffs of these schools have survived the era of "one size fits all" by creating and sustaining an entire performance-based assessment system. Perhaps the conversation can still be changed.
A group of 30 public schools in New York, working together as the New York Performance Standards Consortium since 1998, has been successfully graduating students using a performance-based assessment option (in addition to the New York State Regents exam in English Language Arts). These schools have been at the forefront of changing the conversation about assessment, redefining "data" and how best to use it to engage both students and teachers in the complex tasks of teaching and learning.
Researchers have shown that consortium graduates succeed in college, achieving a higher GPA than the national norm, placing into credit-bearing courses (thus avoiding expensive but uncredited remedial classes), and outpacing national rates for returning to college as second-year students. This achievement is all the more impressive considering that they entered high school with lower scores on state English and math exams, a higher percentage of students in special education, and higher rates of poverty than the overall New York City school population (Foote 2007).
Consortium schools include urban public high schools in New York City, Rochester, and Ithaca. Their assessment system meets the New York State Standards for Learning. Instead of exit exams, student assessments are based on specific performance-based assessment tasks (PBATs) that grow out of the schools' curricula. With students expected to demonstrate college-level skills, the curriculum must be challenging as well as engaging. Too often, in systems governed solely by standardized exit exams, the curriculum becomes overwhelmingly test-prep, a series of interim assessments and assignments that mirror the standardized exam. …