Way back in 1989, James Q. Wilson defined "coping organizations" as those in which managers can neither observe the activities of frontline workers nor measure their results. Police departments were perfect examples, as supervisors could not watch cops on patrol or easily gauge their crime-fighting effectiveness. As a result, agencies had to enforce rigid policies and procedures as the only way to manage their staff.
Then, in the 1990s, New York City introduced CompStat, and this equation changed forever. The NYPD compiled and continuously updated reams of crime data, which were used to identify hot spots and problem areas. In weekly meetings, precinct commanders were held accountable for quickly addressing crime spikes. Suddenly "management by results" became possible--not just in the Big Apple, but in police departments nationwide.
But something else also happened in the '90s: video cameras were installed in thousands of patrol cars all across the country. The rationale was simple: people who got pulled over could be told that they were under surveillance, making dangerous behavior during traffic stops less likely. Moreover, if cops knew that they, too, were being observed, they would be less likely to engage in brutality or unjust searches. Maybe their supervisors couldn't ride along with them, but video cameras could serve as partial surrogates.
Wilson also pointed to schools as prime examples of coping organizations. "A school administrator," he wrote, "cannot watch teachers teach (except through classroom visits that momentarily may change the teacher's behavior) and cannot tell how much students have learned (except by standardized tests that do not clearly differentiate between what the teacher has imparted and what the student has acquired otherwise)."
As with police, education reformers have spent the last two decades trying to change these assumptions. On the "management by results" side, there has been the big battle over the use of test data for accountability purposes (CompStat for schools), culminating in the fight over value-added measurement of teacher performance. Perhaps now we can finally "differentiate between what the teacher has imparted and what the student has acquired otherwise." Yet even advocates acknowledge the imperfections of this approach. What if a teacher gets great results in student learning, but does it by "teaching to the test" or, worse, cheating? What if she ignores important parts of the curriculum that aren't easily assessed? Or, on the flip side, what if her value-added scores show lackluster student progress, but that's due to factors completely outside her control?
Understandably, teachers and their unions don't want test scores to count for everything; classroom observations are key, too. …