Money Improves Test Scores - Even State-Level SATs

Test scores offer endless opportunities for misinterpretation. When SAT scores rose in 1983, former Secretary of Education Terrel Bell ascribed the rise to the impact of A Nation at Risk, the report he had commissioned, which appeared in April 1983. Then someone pointed out that the scores released in the fall of 1983 actually came from tests taken in 1982, meaning that A Nation at Risk would have had to act backward in time - something considered possible only by some theoretical physicists.

Sometimes the misinterpretations are deliberate - and deliberately amusing. Looking at test scores a number of years ago, Sen. Daniel Patrick Moynihan (D-N.Y.) concluded that the strongest factor in predicting high test scores was proximity to Canada. States along the border tended to score very well. In a similar vein, Harold Howe II expressed his surprise when SAT scores rose in 1983. He said that he thought he had detected a strong relationship between declining SAT scores and the expansion of McDonald's restaurants. Clearly, eating greasy hamburgers and tallow-laden French fries impaired thinking. Then Howe recalled that in 1982 McDonald's had introduced the fish sandwich - and we all know that fish is brain food.

More startling conclusions have been drawn from looking at state-level SAT scores, and these conclusions have produced some very misleading policy statements. In 1981 an enterprising reporter called every state department of education, obtained each state's average SAT score, and then ranked the states from 1 to 50. When the results were published, all hell broke loose. It was one thing to see Iowa and Minnesota among the top-ranked states. It was quite another to see Mississippi and Alabama up there near the top. Much mischief ensued, and the College Board reluctantly began to put out state-level data itself.

A couple of years later, Brian Powell of Indiana University and Lala Carr Steelman of the University of South Carolina took to the pages of the Harvard Educational Review to show that the culprit producing the strange rankings was the percentage of seniors taking the SAT in each state. Mississippi and Alabama ranked high because only a tiny elite took the SAT, while in some states in the East the SAT administration was an outing for most of the senior class.

Powell and Steelman are back again in the spring 1996 issue of Harvard Educational Review - advising us that the mischief persists and trying to do something about it. No fewer than three former secretaries of education - William Bennett, Lauro Cavazos, and the late Terrel Bell - along with Ralph Reed of the Christian Coalition and Washington Post pundit George Will are all discovered to have used state-level SAT scores in recent years to tout the notion that money for schools doesn't matter.

This is a notion that I tried to demolish in articles in the November 1995 issue of Educational Leadership and the May 1996 issue of The School Administrator. In those articles I noted that Bennett's plumping for this idea constituted a monumental act of dishonesty. When he was secretary of education, his office continued to put out the "wall charts" initiated by Bell. But even those useless charts divided the states into two categories: states in which the SAT was dominant and states in which the ACT was the test of choice. Thus Bennett must know better than most people that the SAT is not a valid measure of anything for all states.

Some commentators have even fallen into the trap of arguing that increasing money for schools decreases achievement. In their 1996 article, Powell and Steelman try to set things straight with a new analysis that looks at more factors. As with their earlier study, they again find a strong relationship between the percentage of high school seniors who take the SAT and the state ranking: the higher the percentage, the lower the ranking. The proportion of students taking the SAT accounts for 85% of the variation in average scores among the states. …
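
To see where a figure like that 85% comes from: it is the kind of number produced by the R-squared of a simple regression of a state's average SAT score on the percentage of its seniors who take the test. The Python sketch below shows the arithmetic only; the state figures in it are hypothetical placeholders, not Powell and Steelman's data, and the calculation is an illustration of the general technique rather than their actual analysis.

# Illustrative sketch only: how a "participation explains ~85% of the
# between-state differences" figure could be computed. The numbers below
# are hypothetical placeholders, not real state data.
import numpy as np

# Hypothetical state-level data: percent of seniors taking the SAT,
# and the state's average combined SAT score.
pct_taking = np.array([4.0, 5.0, 10.0, 18.0, 42.0, 57.0, 64.0, 70.0, 74.0, 81.0])
avg_score = np.array([1030, 1025, 1000, 980, 930, 905, 900, 895, 890, 885])

# Fit a one-predictor linear regression: score = a + b * pct_taking.
b, a = np.polyfit(pct_taking, avg_score, deg=1)
predicted = a + b * pct_taking

# R-squared: the share of between-state score variance the predictor accounts for.
ss_res = np.sum((avg_score - predicted) ** 2)
ss_tot = np.sum((avg_score - avg_score.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope = {b:.2f} points per percentage point, R^2 = {r_squared:.2f}")

The point of the exercise is simply that once participation rates are this strongly tied to average scores, ranking states by raw SAT averages mostly ranks them by how selective a slice of their seniors sat for the test.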