The Problem with Standardized Assessment: There Are Other, Better Ways Than High-Stakes Testing to Hold Institutions Accountable for Making Good on the Promises of Higher Education

BY NOW WE'VE ALL HEARD that the Commission on the Future of Higher Education may recommend standardized tests as a way to compare and rank institutions. Such tests would likely attempt to measure general reasoning and communication skills. The commission's intention is undoubtedly good, but can such an endeavor be successful?

To create a valid test, one has to know what questions it will answer. Perhaps we want to measure critical thinking and effective writing, since those are essential job skills. But can you really evaluate the many types of critical thinking with a single instrument? Is writing a computer program sufficiently similar to analyzing Homer that both can be measured in 30 minutes with a number-two pencil?

At Coker College (S.C.), we addressed this problem with a Faculty Assessment of Core Skills, which evaluates analytical and creative thinking, effective speaking, and writing. Assessing how well a student thinks is complex and subjective, so we aggregate the judgments of all of a student's course instructors. Validity is checked against grades, portfolios, and even library circulation history.

While this method works well for a small liberal arts college, it would not be a candidate for a national metric of achievement. It does, however, show that a single external definition of a skill like critical thinking is undesirable and unnecessary. It's proper for the government to define "legally drunk," but not "legally intelligent."

Aside from the problematic specifics of a national test, its very existence would pose problems. Suppose this hypothetical institutional report card were used to allocate or restrict federal aid in the name of accountability. Institutions would try to maximize their scores by whatever clever means they could devise--with huge incentives for test preparation and outright cheating. As for students, the test would wind up measuring enthusiasm for a "not-for-a-grade-while-I-could-be-playing-my-Xbox" test.

Small colleges like Coker would have to figure out quickly how best to prepare and motivate students. We would be at a competitive disadvantage; richer institutions could simply buy the "solutions" that the testing industry would eagerly offer. …