The public sector has come a long way in measuring performance and "managing for results," but much remains to be done. Even a modest foray into the history of public management reveals that performance measurement and related management initiatives have been encouraged for many years. In that respect, it has been a long haul. The seemingly slow pace, however, should not blind us to the progress that has been made.
Consider changes that have occurred over the last 25 years in attitudes toward government performance or even the possibility of creditable performance by the public sector. In the 1970s a lot of people considered government productivity to be an oxymoron, about as illogical as word pairs like "alone together," "peace offensive," or "fresh prunes." Some wisecracking pundits even likened government productivity to the Loch Ness monster. There were occasional sightings reported, they said, but nothing confirmed.
In the 1970s and early 1980s, it was difficult to generate much interest in comparing the performance of two or more government units. "We are unique!" government officials said of their organization and its environment. "Our conditions are different; our service demands are different" (usually meaning greater than the demands faced by any counterpart); "comparison would be meaningless."
Great as the resistance to intergovernmental performance comparison was, resistance to adapting private sector practices was greater still. "Impractical!" proponents of adaptation were told. "Naive!" they were called.
As the century draws to a close, optimism is evident among recent converts to the measure-monitor-and-improve school of public service as well as among veterans of that movement. More than 700 people from all levels of government gathered in Austin, Texas, in 1998 for the third in a series of conferences dedicated to celebrating the successes of results-oriented management in the public sector, even as they challenged each other to do more.
The Loch Ness comparisons have faded. The sightings of government productivity are more frequent--and they are confirmed. Government agencies have established performance standards, many of which are directed toward meeting the expectations of service recipients, and service has improved in documentable ways.
Governments themselves are more receptive to performance comparisons, provided they are done properly. The reasons for this newfound receptivity are not certain, but two explanations are plausible. One possibility is that government officials eventually resigned themselves to the inevitability of cross-unit comparisons. News reporters love per capita expenditure comparisons--crude as such comparisons are and devoid as they are of any sensitivity to differences among governments in the scope or quality of services, much less any differences in cost-accounting systems. "If comparisons are inevitable," some government officials may have said, "let's see if we can do them properly." By declaring their intention to "do them properly," these officials announce their resolve to report differences in the quality of service among governments and to rectify accounting disparities in calculating unit costs of service delivery.
A second possibility is that government officials had their eyes opened by the corporate experience in benchmarking. Among the pioneers in the benchmarking movement were Motorola, IBM, AT&T, Alcoa, DEC, and Milliken; but none of these pioneers enjoys a more prominent role in that history than does the Xerox Corporation. In perhaps the most repeated story in benchmarking lore, Xerox confronted its own unsatisfactory performance in product warehousing and distribution. It did so not by employing the then-conventional methods of process revision or redesign, but instead by identifying the organization it considered to be the very best at warehousing and distribution, in hopes that "best practices" could be adapted from the exemplar's model. …