Policy Forum
Apr 2011

How Comparative Effectiveness Research Feeds into Cost-Benefit Analyses

Kevin D. Frick, PhD
Virtual Mentor. 2011;13(4):248-250. doi: 10.1001/virtualmentor.2011.13.4.pfor2-1104.

 

The Patient Protection and Affordable Care Act emphasized comparative effectiveness rather than cost-effectiveness or cost-benefit analysis. To understand both the distinction and the influence comparative effectiveness research (CER) may have on cost-benefit analysis, it is critical to recognize how comparative effectiveness is defined by the United States Department of Health and Human Services. Here are the definition and statement of purpose from the Federal Coordinating Council for Comparative Effectiveness Research:

Comparative effectiveness research is the conduct and synthesis of research comparing the benefits and harms of different interventions and strategies to prevent, diagnose, treat and monitor health conditions in “real world” settings. The purpose of this research is to improve health outcomes by developing and disseminating evidence-based information to patients, clinicians, and other decision-makers, responding to their expressed needs, about which interventions are most effective for which patients under specific circumstances.

  • To provide this information, comparative effectiveness research must assess a comprehensive array of health-related outcomes for diverse patient populations and sub­groups.
  • Defined interventions compared may include medications, procedures, medical and assistive devices and technologies, diagnostic testing, behavioral change, and delivery system strategies.
  • This research necessitates the development, expansion, and use of a variety of data sources and methods to assess comparative effectiveness and actively disseminate the result [1].

Several aspects of this definition and statement of purpose make comparative effectiveness research more adaptable to cost-benefit analysis than typical randomized controlled trials (RCTs). First, the definition emphasizes that the results should be applicable in "real world" settings, which randomized trials do not simulate. The degree to which RCTs follow protocol distinguishes them from typical clinical practice, and trial inclusion and exclusion criteria are often quite stringent, particularly with respect to comorbidities, so that participants in a randomized trial are often not representative of clinical patients, many of whom have comorbidities that make treatment of the condition more complex. Moreover, the costs a randomized trial incurs through strict adherence to protocol can be difficult to separate from the costs of the treatment or intervention itself. Because CER is conducted in real-world settings, its findings will be more applicable to cost-effectiveness or cost-benefit studies than RCT results and of greater use to the insurers, employers, governments, and individuals who pay for treatment in clinical settings.

The CER statement of purpose emphasizes that the results are intended to help patients, clinicians, and others make better decisions by knowing which interventions and treatments are most effective for whom. "For whom" refers to the diverse patient populations that will be included in the research. This information will provide insights about the incentives, resources, and responses to incentives that pertain to specific demographic subgroups.

While effectiveness is important, decision makers realize that the resources (both time and money) needed to engage in a treatment regimen or intervention have to come from somewhere and are ultimately limited. Some interventions and treatments may be efficacious but are simply too expensive to obtain.

Finally, the CER definition states that the research will draw upon multiple sources of data, e.g., claims, surveys, and studies with different designs, to inform end users about a range of outcomes. These results will also provide data for economic evaluation. RCTs, in some cases, focus on a single outcome, which may be of interest to clinicians but which does not necessarily reflect patients' interests or allow them to decide whether a proposed treatment is "worth it."

Thus, the methods and outcomes of comparative effectiveness research seem to feed quite well into economic evaluations that other parties might find useful, even though the CER definition itself makes no mention of cost or cost-effectiveness.
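To make that connection concrete, economic evaluations commonly summarize the trade-off between two alternatives with an incremental cost-effectiveness ratio, which divides the difference in costs by the difference in effectiveness that CER is designed to estimate. The figures below are purely hypothetical and are offered only as an illustration:

incremental cost-effectiveness ratio = (cost of intervention A − cost of intervention B) / (effectiveness of A − effectiveness of B)

If, for example, comparative effectiveness research showed that intervention A yields 0.4 more quality-adjusted life-years per patient than intervention B, and cost data showed that A costs $12,000 more per patient, the ratio would be $30,000 per quality-adjusted life-year gained, a figure that payers and policymakers could weigh against other possible uses of the same resources.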

The omission appears to have been driven by the politics surrounding health reform. The choice was made after a flurry of accusations that health reform would spawn "death panels" whose members would decide whether patients received life-sustaining care. This was not a particularly accurate interpretation of end-of-life counseling, but it did publicize the fact that choices have to be made about the costs at which it makes sense to continue providing care to a variety of patients.

It is important to acknowledge not only that "rationing" is necessary (there may be costs at which it is not economically warranted or, more crucially, feasible to continue to provide care) but also that we must determine who decides what funding is warranted for different patients in different situations. What those who use the term "death panel" failed to acknowledge is that the allocation of (scarce) resources occurs all the time; at the moment, it simply happens in an ad hoc manner, without transparency, based mainly on the doctor's knowledge of the patient's prognosis and ability to pay (that is, in most cases, the patient's insurance coverage). Ultimately we have to face the question of whether to leave decisions about resource allocation to the market, to regulators, or to some combination of the two.

By avoiding mention of costs, the CER definition allowed legislators and regulators to sidestep the issue and escape immediate criticism from those who think a different distribution of authority between the market and regulation would be more appropriate. Unfortunately, that choice left decision makers without a structure that could have been useful to all concerned. Moreover, failing to require comparative effectiveness research to include costs allows researchers to avoid collecting data for, and analyzing, what is often the most challenging aspect of the research. And it enables researchers to present outcomes that may not actually be useful to end users who (whether they would like to admit it or not) need to know how to prioritize the many possible uses of limited health care resources.

Fortunately, many researchers who are proposing or conducting comparative effectiveness research have recognized this shortcoming in the definition and have planned to collect and present cost, or at least resource-use, data alongside the comparative effectiveness data. In some cases, researchers choose to omit cost-effectiveness results and instead present only cost or resource-utilization information alongside effectiveness data, but even that makes the results more useful.

References

  1. Federal Coordinating Council for Comparative Effectiveness Research. Report to the President and Congress. US Department of Health and Human Services; June 30, 2009. http://www.hhs.gov/recovery/programs/cer/cerannualrpt.pdf. Accessed March 14, 2011.

