Benchmarking and performance measurement are increasingly "hot" subjects among public administrators. A brief historical summary of events illustrates this point. In 1992 the Urban Institute and the International City/County Management Association (ICMA) jointly published the book How Effective Are Your Community Services? Procedures for Promoting Results. Also in 1992, the American Society for Public Administration (ASPA) promoted performance measurement by approving a Resolution Encouraging the Use of Performance Measurement and Reporting for Government Organizations. At the federal level, in 1993 Congress adopted the Government Performance and Results Act, requiring all federal agencies to develop one-year performance plans and five-year strategic plans. In 1994 the Governmental Accounting Standards Board (GASB) circulated for comment a proposal that local governments include measures of service effort and accomplishment in their external reporting.
Among research-based efforts sparked by the performance measurement drive are two projects that attempt to develop uniform measures so that managers can compare the performance of different city and county governments. In 1993, the Large City Executive Forum, an association of city managers in jurisdictions with populations greater than 200,000, joined with ICMA to form the Comparative Performance Measurement Consortium. Initially comprising 34 jurisdictions, the group eventually grew to 44 cities and counties. The managers in the consortium initially decided to compare performance in four service areas: fire, police, neighborhood services, and support services.
In contrast to ICMA's national project, the second project is limited to a single state, North Carolina. In 1994 the budget director of Winston-Salem, concerned about the inaccuracy of intergovernmental service comparisons being made among the large North Carolina cities, proposed to the North Carolina Local Government Budget Association that interested members of the association undertake a performance measurement project. These governments, working with the staff of the Institute of Government (IOG) at the University of North Carolina at Chapel Hill, began an extensive effort in 1995.
This article compares the methodology used in the ICMA and IOG projects. The results from each project strongly suggest that benchmarking performance results among local governments may be more difficult than theorized. Successfully comparing costs (inputs) requires the creation of a comprehensive cost accounting system. The IOG project established such a system, collecting direct costs, indirect costs, equipment costs, and facilities costs. In contrast, the ICMA project collected only direct costs. Likewise, measuring outcomes is surprisingly difficult because definitions differ widely. Both benchmarking efforts suggest ways of overcoming these difficulties. This article begins with a brief overview of each benchmarking effort. It then examines the often unexpected obstacles the projects encountered and how they attempted to surmount them. It concludes with some broader lessons that these experiences suggest for local government benchmarking.
The IOG Project's Methodology
After the IOG project was proposed in North Carolina, representatives from several large cities and counties and staff from the Institute of Government, the North Carolina League of Municipalities, and the North Carolina Association of County Commissioners formed a steering committee to prepare a project proposal. In 1995 the IOG hired a project coordinator. Seven large cities, seven large counties, and 14 medium-sized and small cities and counties agreed to participate in the project (the participation fee was $3,000). The steering committee divided the project into three phases. Phase 1, which included the seven large cities, was completed in September 1997; Phases 2 and 3, which included the large counties and smaller governments, ended in 1998. …