Academic journal article Phi Delta Kappan

Performance Assessment and the New Standards Project - A Story of Serendipitous Success

Article excerpt

In trying to invent a system of "tests worth teaching to," the New Standards Project achieved serendipitous success by developing and propagating a model for helping teachers form communities of learners and inquirers, Ms. Spalding points out.

HISTORY offers countless examples of explorers like Christopher Columbus and Ponce de Leon who set out with one destination in mind but arrived at another. Many important scientific and technological advances have also resulted from apparent failures. In 1856, for example, William Perkin's failure to produce synthetic quinine resulted in his discovery of aniline purple, which gave birth to the modern dye industry. Nylon looked like an unpromising product until some DuPont chemists fooling around in the laboratory stretched it out and discovered a process that made it one of the most successful synthetic fibers ever. Spencer Silver, a 3M scientist, failed in his attempt to create a permanent adhesive. The substance he developed adhered only temporarily and became the "sticky" in today's ubiquitous "sticky notes."1 The history of exploration and discovery teaches us that failure and success are not absolutes and must be judged in context.

In a special section on performance assessment in the May 1999 issue of the Kappan, Edward Haertel claimed that the movement to develop large-scale, performance-based assessments has failed to bring about "sweeping education reform," as some of its proponents hoped and perhaps rashly promised.2 But is it possible that this movement, while failing to achieve one goal, has achieved others that are equally important, if not more so? To answer this question, it may be helpful to look closely at the case of one organization that aspired to develop "tests worth teaching to" - the New Standards Project.

Of all the standards and assessment projects of the 1980s and 1990s, the New Standards Project (NSP) was unquestionably the most ambitious. A national coalition of approximately 17 states and seven urban school districts, co-directed by Lauren Resnick of the Learning Research and Development Center of the University of Pittsburgh and Marc Tucker of the National Center on Education and the Economy in Washington, D.C., the New Standards Project explicitly aimed to "create tests worth taking."3

Throughout the first half of the 1990s, the NSP sponsored hundreds of meetings involving thousands of educators in the crusade to build assessments toward which teachers would want to teach.4 Thousands of students in grades 4, 8, and 10 completed New Standards performance assessment tasks and compiled portfolios to demonstrate their ability to meet performance standards that were being developed simultaneously.

New Standards research and development units were housed at various sites across the country, including the University of Pittsburgh and the University of California, Berkeley. The Literacy Unit eventually came to be housed at the National Council of Teachers of English (NCTE) in Urbana, Illinois. Its co-directors were Miles Myers, then executive director of NCTE, and David Pearson, then dean of the College of Education at the University of Illinois, Urbana-Champaign. From 1991 to 1996, I served as onsite coordinator of the Literacy Unit of the New Standards Project.5 Today, New Standards has scaled back its vision considerably. It devotes most of its resources to developing and marketing "reference examinations," which are more efficient to administer and score than its earlier products. Large national meetings are rare. By many measures, the New Standards Project looks like a classic case of an education reform that failed.

In fact, the experiences of New Standards and several of its partner states and districts serve as the primary sources for some of the major critiques of large-scale performance assessment presented by Haertel and other authors in the May 1999 Kappan. These critiques include: 1. …
