Academic journal article Health Care Financing Review

Financial Gains and Risks in Pay-for-Performance Bonus Algorithms

Article excerpt


The burgeoning research on the sizable geographic variation in surgery rates (Wennberg et al., 1999; Weinstein et al., 2004; 2006), the prevalence of medical errors, and the generally unacceptable quality of care in a variety of settings (Chassin et al., 1998; Institute of Medicine, 2001) has motivated both public and private health insurers to incorporate financial incentives for improving quality into their payment arrangements with care organizations. Both reward and risk (i.e., carrot and stick) approaches are being used (Bokhour et al., 2006; Epstein, 2006; Trude, Au, and Christianson, 2006; Williams, 2006; Fisher, 2006; Rosenthal and Dudley, 2007; Center for Health Care Strategies, 2007). Payors may simply provide an add-on or allow higher updates to a provider's fees, or they may pay an extra amount whenever a desired service is performed (e.g., a $10 payment for a mammogram). These are part of a reward (carrot) strategy. Alternatively, payors may reduce payments or constrain fee updates for unacceptable quality performance--the risk (stick) strategy. A hybrid of the two approaches involves self-financing quality bonuses. Under a self-financing scheme, as with Michigan Medicaid's Health Plan Bonus/Withhold system (Center for Health Care Strategies, 2007), payors pay for quality improvements out of demonstrated savings generated by providers or managed care organizations.

P4P arrangements use financial incentives to engender changes in patient care processes that, in turn, are expected to lead to improved health outcomes. Evidence-based patient care studies have produced a list of care processes that lead to better outcomes (National Committee for Quality Assurance, 2006; Agency for Healthcare Research and Quality, 2006; National Quality Forum, 2006; Institute of Medicine, 2006). Much less attention has been given to the payout algorithms themselves. Yet how the incentives are structured may be as important as, or more important than, the quality indicators (QIs) in encouraging quality improvements.

In this article, we first present several possible P4P payment models and their key parameters. As part of this exercise, we highlight the effects of the number of indicators on bonus levels, how they are weighted, and how targets are set. We then simulate actual quality performance against a pre-set target and test the sensitivity of a plan's expected bonus and degree of financial risk to different bonus algorithms and key parameters. Finally, we conclude by suggesting steps that payors should follow in designing P4P incentive programs.
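The simulation exercise described above can be sketched in miniature. The code below is an illustrative Monte Carlo toy, not the authors' actual model: it assumes a simple threshold bonus algorithm (full bonus paid only if a plan's measured rate on one indicator meets or exceeds a pre-set target) and shows how sampling variation alone puts a plan's expected bonus at risk. All names and parameter values are hypothetical.

```python
# Hypothetical Monte Carlo sketch of a threshold bonus algorithm.
# A plan with true performance rate `true_rate` is measured on a panel
# of `n_patients`; the bonus is paid only when the observed rate
# meets or exceeds the target. Repeated trials estimate the expected payout.
import random

def simulate_expected_bonus(true_rate, target, n_patients, bonus,
                            n_trials=10_000, seed=42):
    rng = random.Random(seed)
    times_paid = 0
    for _ in range(n_trials):
        # Each patient independently receives the indicated service
        # with probability true_rate (a binomial draw).
        successes = sum(rng.random() < true_rate for _ in range(n_patients))
        if successes / n_patients >= target:
            times_paid += 1
    return bonus * times_paid / n_trials  # expected payout per period

# A plan whose true rate exactly equals the target still misses the
# bonus in a large share of trials purely from sampling noise.
print(simulate_expected_bonus(true_rate=0.70, target=0.70,
                              n_patients=200, bonus=100_000))
```

Even this toy version makes the article's point concrete: under an all-or-nothing threshold, a plan performing exactly at target expects to collect only roughly half the nominal bonus, so the algorithm's structure, not just the indicator, drives the financial risk.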


Many private and State Medicaid P4P programs use a simple payment scheme that pays a fixed amount for providing a quality-enhancing service (e.g., mammograms, a primary care visit). Service-specific P4P payment is narrow, however, and is not adequate to encourage higher quality in managing the chronically ill. One likely risk model underlying an insurer's expected bonus payout across several P4P indicators is based on an organization's actual performance relative to a target rate. (In some P4P models, organizations must pay back up-front case management fees if quality targets are not met. The modeling and results are easily recast in such a penalty framework.) In most cases, a target rate, t, is determined as an improvement over the local baseline rate, λ_base, i.e.,

(1) t_ip = λ_base,ip (1 + α_ip)

where α_ip is the required rate of improvement over baseline for the i-th indicator in the p-th plan. Using a local population baseline rate serves as a control for varying risk factors. The rate of improvement might be set unilaterally by the payor or negotiated with the plan. The patient care organization or disease management plan is assumed to have formed its own expected level of performance, E[λ_ip], based on a likely rate of quality improvement, E[ρ …
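Equation (1) is simple enough to express directly. The sketch below is illustrative; the function name and example values are hypothetical, not from the source.

```python
# Equation (1): the target rate is the local baseline rate scaled up
# by the required rate of improvement, t_ip = lambda_base,ip * (1 + alpha_ip).

def target_rate(baseline_rate: float, improvement: float) -> float:
    """Return the target rate for one indicator in one plan."""
    return baseline_rate * (1.0 + improvement)

# e.g., a 65 percent baseline mammography rate with a required
# 10 percent improvement yields a target of 71.5 percent.
print(round(target_rate(0.65, 0.10), 4))  # 0.715
```

Because the target is anchored to the local baseline, plans in areas with different underlying risk profiles face proportionally comparable improvement requirements rather than a single national threshold.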
