In the Literature
Jul 2013

Testing the Incentive Power of Pay for Performance

Ali Irshad, MD, Matthew Janko, and Jacob M. Koshy
Virtual Mentor. 2013;15(7):587-591. doi: 10.1001/virtualmentor.2013.15.7.jdsc1-1307.


Jha AK, Joynt KE, Orav EJ, Epstein AM. The long-term effect of premier pay for performance on patient outcomes. N Engl J Med. 2012;366(17):1606-1615.

Instead of payment that asks, “How much did you do?” the Affordable Care Act clearly moves us toward payment that asks, “How well did you do?” and more importantly, “How well did the patient do?” - Dr. Donald Berwick, April 2011

In their 2012 paper, Jha et al. [1] describe the results of a 6-year pay-for-performance quality improvement initiative called the Premier Hospital Quality Incentive Demonstration (PHQID) project and discuss its implications for improving health care outcomes. In the PHQID project, hospitals were rewarded for delivering superior care, based on process measures, such as timing of antibiotic dosing, and outcome measures, such as survival at 30 days. Hospitals in the top 10 percent or 20 percent of performance and improvement measurements received annual bonuses of 2 percent or 1 percent, respectively, of Medicare payments.

The PHQID project closely approximates the Centers for Medicare and Medicaid Services’ value-based purchasing program (VBP), which began providing financial incentives to more than 3,500 hospitals for performance improvement in October 2012. Thus, results from the PHQID may be predictive of the VBP’s success and instructive for defining performance and achievement goals in the future.

In the Jha et al. study, the authors compared 30-day, risk-adjusted mortality rates at PHQID hospitals with rates at hospitals that reported outcomes without receiving financial incentives. The authors also performed subgroup analyses to determine whether the PHQID had a greater effect on hospitals with greater incentive to improve quality (i.e., hospitals that were poor performers at baseline) or greater capability to improve quality (i.e., hospitals with better financial standing).

The authors collected and analyzed Medicare Part A data for more than 6 million patients discharged from hospitals from 2003 through 2009. Patients from the 252 hospitals participating in the PHQID program were compared with those from 3,363 hospitals that participated in the concurrent Medicare Hospital Compare program, which entailed public reporting of outcomes without incentive payments. The study examined 30-day mortality among patients with a discharge diagnosis of acute myocardial infarction (AMI), congestive heart failure (CHF), or pneumonia, or who underwent coronary-artery bypass grafting (CABG).

Jha et al. assessed 30-day, risk-adjusted mortality for each of the four diagnoses and for all four conditions combined. Each patient’s risk of death was adjusted using 29 patient comorbidities and hospital characteristics such as numbers of patients per hospital, teaching status, location (urban vs. rural), ownership (public vs. private, nonprofit vs. for-profit), region, financial margin, and the proportion of patients receiving Medicare. The analyses also evaluated three additional covariates of interest, including a calculation to reveal whether hospitals with higher proportions of Medicare patients would show greater improvements.

The authors report no significant difference in the 30-day, risk-adjusted mortality rate at PHQID and non-PHQID hospitals for all diagnoses combined or for each individual diagnosis considered separately [2]. At the start of the study, the mortality rate for all study conditions combined was approximately 12 percent in both groups, and it declined by approximately 0.04 percent in both groups each quarter during the study period [2]. At the end of the study period, CABG mortality was higher at Premier hospitals (4.12 percent) than at non-Premier hospitals (3.34 percent) [2]. Changes in mortality rates among hospitals that were poor performers at baseline did not differ significantly from changes among top-performing hospitals in either group [2].

Ultimately, Jha et al. conclude that pay for performance had no statistically significant effect on 30-day mortality for AMI, CHF, CABG, or pneumonia, based on comparison of data from PHQID and non-PHQID hospitals [2]. But this conclusion must be considered in light of the limitations of the study. Jha et al. acknowledge that, since the hospitals participating in the Premier HQID were “self-selected,” they are “potentially different from control hospitals” [3]. For example, 90 percent of PHQID hospitals were private nonprofit institutions, compared with 61 percent of non-PHQID hospitals.

In the discussion section, the authors state, “Expectations of improvement outcomes from programs modeled on the Premier HQID should therefore remain modest” [4], a conclusion consistent with recent literature. Ryan found no evidence that the PHQID affected 30-day mortality rates through mid-2006 [5], a finding confirmed by Glickman and colleagues for Premier hospitals participating in a disease registry for acute myocardial infarction [6]. By 2006, approximately 80 percent of HMO-purchaser contracts, covering over 100,000 hospitals nationwide, had included performance bonuses or penalties since 2004 [7]. It is thus unclear what percentage of PHQID and non-Premier reporting hospitals had process- or care-improvement programs in place before the study period began in 2003, and readers are left to wonder whether improvement had already been at least partially realized within each group.

Given the conclusions of these recent publications, the present study encourages us to ask, “Are economic incentives the best motivation available to hospital systems for improving performance?” Biller-Andorno and Lee have suggested that outcome transparency and nonfinancial incentive schemes, such as performance ranking, may be sufficient to improve outcomes [8]. Kavanagh has also raised an interesting point: the savings an institution realizes when a sick patient dies before using costly services might more than offset the mortality penalty imposed by pay-for-performance programs. But, he notes, few institutions wish to have it known that they have a higher-than-expected rate of patient deaths [9].

However, the effectiveness of the economic incentive model may simply have gone untested by the PHQID project. The Quality and Outcomes Framework (QOF), a nationwide initiative in the United Kingdom that started in 2004, offers one possible alternative economic incentive model [10]. In this effort by the National Health Service (NHS), general practitioners agreed to tie increases in their income to performance as measured by 146 quality indicators covering clinical care for 10 chronic diseases, organization of care, and patient experience. The NHS, in turn, agreed to increase funding for primary care by 20 percent over previous levels, allowing practices to invest in additional staff and technology.

The initial examination of performance data for the QOF initiative demonstrated that substantially increasing physicians’ pay based on their success in meeting performance measures was effective in improving quality of care. The 8,000 family practitioners in the study earned an average of £25,000 more by collecting nearly 97 percent of the points available [10]. The new GP contract as a whole cost £1.76 billion more than the NHS intended, but substantial improvements have been noted, particularly in the maintenance of disease registries and the screening of risk factors for older patients with cardiovascular disease in the community [11]. This focus on rewarding primary care efforts contrasts with the PHQID methodology: as Jha et al. show, the PHQID tracked 33 parameters, of which only 4 outcomes were compared with those of non-rewarded hospitals, and the program placed minimal focus on primary care. Lindenauer et al. offer further evidence that the PHQID may need to be re-evaluated as an incentive model; their study found that early gains in process quality had mostly dissipated after 5 years under the PHQID [12].

The U.K. example and the Lindenauer et al. results suggest that it behooves pay-for-performance proponents in the U.S. to seek out additional models and identify an approach that, at the very least, improves mortality outcomes by extending incentives to broader and earlier points of care (e.g., primary care).

Jha et al. demonstrate an overall decrease in 30-day, risk-adjusted mortality regardless of incentive, which may simply be the result of tracking and reporting outcomes. It is also possible that these findings indicate that economic incentives large enough to truly motivate change have yet to be offered in the U.S. Alternatively, this pattern may demonstrate a hospital culture dedicated to improving care for patients, not for monetary reward, but to satisfy a professional obligation to serve the community. What Jha et al. regard as sobering findings for proponents of incentive-based health care improvements is possibly a propitious demonstration of the integrity of physicians and hospital care in this country.


  1. Jha AK, Joynt KE, Orav EJ, Epstein AM. The long-term effect of premier pay for performance on patient outcomes. N Engl J Med. 2012;366(17):1606-1615.
  2. Jha et al., 1610.
  3. Jha et al., 1613.
  4. Jha et al., 1614.
  5. Ryan AM. Effects of the Premier Hospital Quality Incentive Demonstration on Medicare patient mortality and cost. Health Serv Res. 2009;44(3):821-842.
  6. Glickman SW, Ou FS, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA. 2007;297(21):2373-2380.
  7. Rodriguez S, Rafelson W, Rajput V. Premier pay for performance and patient outcomes. N Engl J Med. 2012;367(4):381.
  8. Biller-Andorno N, Lee TH. Ethical physician incentives--from carrots and sticks to shared purpose. N Engl J Med. 2013;368(11):980-982.
  9. Kavanagh KT. Premier pay for performance and patient outcomes. N Engl J Med. 2012;367(4):381-382; author reply 382-383.
  10. National Health Service. Primary Medical Services: Quality and Outcomes Framework. Edinburgh: NHS Quality Improvement Scotland; 2007. Accessed June 18, 2013.
  11. Timmins N. Do GPs deserve their recent pay rise? BMJ. 2005;331(7520):800.
  12. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356(5):486-496.





The viewpoints expressed in this article are those of the author(s) and do not necessarily reflect the views and policies of the AMA.