Statistical Methods for Profiling Providers of Medical Care: Issues and Applications

Profiling medical care providers on the basis of quality of care and utilization of resources is rapidly becoming a widely used form of analysis in health care policy and research (Epstein 1995; Green and Wintfeld 1995; Hannan et al. 1994; Kassirer 1994; Landon et al. 1996; McNeil, Pedersen, and Gatsonis 1992; Salem-Schatz et al. 1994). Although comparative performance measures of health care were proposed as early as 1916 (Codman 1916), their use became widespread only recently. The results of profiling analyses often have far-reaching implications. They are used to generate feedback for health care providers, to design educational and regulatory interventions by institutions and government agencies, to design marketing campaigns by hospitals and managed care organizations, and, ultimately, to select health care providers by individuals and managed care groups. The recent trend of compiling and making available "report cards" for hospitals and individual health care practitioners has brought unprecedented public scrutiny to the practice of medicine. The effects of such scrutiny are undoubtedly complex and will unfold over time. However, the methodology for generating the reports needs more immediate attention (Epstein 1995; Localio et al. 1995).

Profiling is the process of comparing quality of care, use of services, and cost with normative or community standards. For example, hospital readmission rates within two weeks of discharge may be compared to a norm based on national rates. The profiling process normally includes a risk-adjustment step intended to account for possible differences in patient case mix (Iezzoni 1994; Landon et al. 1996; Salem-Schatz et al. 1994). In addition to a large body of work in medical research, the methodological aspects of risk adjustment have been extensively discussed in the literature on observational studies (see Rosenbaum 1995 and references therein). But the essence of profiling analysis lies in developing and implementing performance indices to evaluate medical care providers, such as physicians, hospitals, and care-providing networks. In this article we propose a class of measures for provider performance based on the posterior probability that a provider's patients have an unusually high frequency of adverse events. Our measures are derived from the fit of hierarchical regression models.
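To give a concrete flavor of a posterior-probability performance index, the following sketch uses a simplified conjugate beta-binomial model in place of the hierarchical regression models developed in the article. All data, prior parameters, and the exceedance threshold below are invented for illustration; the index computed is the posterior probability that a provider's underlying adverse-event rate exceeds a normative level.

```python
import numpy as np
from scipy import stats

# Hypothetical data: adverse events y out of n discharges per provider.
y = np.array([12, 4, 30, 7])
n = np.array([200, 150, 250, 90])

# Simplified conjugate stand-in for a hierarchical model: each provider's
# rate theta_i has a shared Beta(a, b) prior, chosen here to center on a
# community norm of about 5% adverse events (illustrative values).
a, b = 5.0, 95.0

# Conjugacy gives the posterior for each theta_i in closed form.
post_a, post_b = a + y, b + n - y

# Performance index: posterior probability that theta_i exceeds a
# normative threshold, here 1.5x the assumed community rate.
threshold = 0.075
p_excess = stats.beta.sf(threshold, post_a, post_b)

for i, p in enumerate(p_excess):
    print(f"provider {i}: Pr(theta > {threshold}) = {p:.3f}")
```

Note how the index automatically reflects sample size: a provider with 30 events in 250 discharges yields a posterior concentrated well above the threshold, while a small provider with a similar raw rate would produce a more diffuse posterior and a less extreme probability.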

A major initiative to evaluate hospital performance in the United States was launched by the Health Care Financing Administration (HCFA) in 1987 with the annual release of hospital-specific data comprising observed and expected mortality rates for Medicare patients. Hospitals observed to have higher-than-expected mortality rates were flagged as institutions with potential quality problems. HCFA derived mortality rates by estimating a patient-level model of mortality for disease-based cohorts using administrative data. The expected hospital-specific mortality rates were calculated by averaging the model-based estimated probabilities of mortality within each hospital over the hospital's patient population. HCFA's approach is typical of many published profiling analyses, and it is mainly for this reason that we discuss it in some detail in this article.
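The HCFA calculation described above can be sketched as follows. The covariates, coefficients, and simulated data are invented stand-ins (HCFA's actual models were disease-specific and fit to administrative data); the point is only the mechanics: average the model-based predicted probabilities over each hospital's patients to get an expected rate, then compare it to the observed rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical patient-level data: two risk factors and a hospital ID.
n_patients = 3000
hospital = rng.integers(0, 5, size=n_patients)
age = rng.normal(size=n_patients)       # standardized age
severity = rng.normal(size=n_patients)  # standardized severity score

# Coefficients stand in for an already-fitted patient-level logistic
# model of mortality (illustrative values, not HCFA's).
beta0, beta_age, beta_sev = -2.0, 0.5, 0.8
logit = beta0 + beta_age * age + beta_sev * severity
p_death = 1.0 / (1.0 + np.exp(-logit))
died = rng.random(n_patients) < p_death

# Expected hospital-specific rate: mean predicted probability over the
# hospital's own patients. Observed rate: raw mortality fraction.
expected = np.array([p_death[hospital == h].mean() for h in range(5)])
observed = np.array([died[hospital == h].mean() for h in range(5)])

# Flag hospitals whose observed rate exceeds the expected rate, with no
# allowance for sampling variability -- a feature of this style of
# comparison that the article goes on to question.
flagged = observed > expected
for h in range(5):
    print(f"hospital {h}: observed={observed[h]:.3f} "
          f"expected={expected[h]:.3f} flagged={flagged[h]}")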

The public release of hospital-specific performance data was suspended in 1994, primarily because HCFA's administrative databases could not supply the detail needed for case-mix adjustment and contained no information on patient compliance (Berwick 1990; Kassirer 1994). To remedy the problem, HCFA began a new initiative to carry out streamlined, in-depth data collection on several disease-specific patient cohorts. A subset of this newly collected information forms the dataset analyzed in this article. But our approach is designed to address several methodological concerns about HCFA's approach to profiling beyond the inadequacy of case-mix adjustment. First, because of differences in hospital sample size, the precision of the hospital-specific estimates may vary greatly. …