AMA Journal of Ethics®

Illuminating the art of medicine

Virtual Mentor. December 2004, Volume 6, Number 12.

Clinical Cases

Confusion over Cholesterol Testing

Physicians must stay up-to-date on changes in clinical treatment guidelines.

Commentary by Kenneth W. Goodman, PhD, and Eta S. Berner, EdD

Mr. Fitzpatrick has made an appointment for a general physical exam. At 31 years of age, Mr. Fitzpatrick remains active in his work in information technology, but he has not seen Dr. Anderson since his early 20s. In other words, during today's visit the 2 have a lot to talk about.

A thorough discussion of Mr. Fitzpatrick's family history does not reveal any significant conditions or diseases occurring at abnormally young ages in his relatives (eg, heart disease or cancer). The physical exam is likewise unremarkable, except for slightly elevated blood pressure on 2 separate readings (138/89 mm Hg and 140/89 mm Hg). Dr. Anderson counsels Mr. Fitzpatrick on several lifestyle changes that could affect his blood pressure, including reducing his salt intake and increasing exercise. Dr. Anderson frequently sees patients with elevated cholesterol in his family practice clinic, so he is quite cognizant of the need to monitor HDL- and LDL-cholesterol levels as well as triglycerides. He asks Mr. Fitzpatrick to return for a fasting blood test to check his total cholesterol, HDL, and triglycerides. Though reluctant because of work demands, Mr. Fitzpatrick agrees.

Later that afternoon, Dr. Anderson relates his actions to Dr. Monde, one of his practice partners. She is surprised to learn that he routinely screens patients in this age range for elevated cholesterol. In response, Dr. Anderson cites guidelines published in 2001 by the National Heart, Lung, and Blood Institute's National Cholesterol Education Program (http://www.nhlbi.nih.gov/guidelines/cholesterol/index.htm). Showing her these guidelines online, he notes that they recommend screening with fasting lipid profiles every 5 years, beginning at age 20, for both men and women. Somewhat perplexed, Dr. Monde recalls that her last reading of the United States Preventive Services Task Force guidelines (issued in 2001) implied that screening for patients who, like Mr. Fitzpatrick, do not have any major risk factors for heart disease need not begin until 35 years of age for men or 45 years for women. She also wonders whether the test must be a fasting one.

To help settle their debate, they turn to the National Guideline Clearinghouse, which offers a synthesis of clinical guidelines regarding lipid screening in adults. Their disagreement is confirmed, but they are dismayed to find several other organizations offering their own, sometimes conflicting, guidelines. Both physicians wish that a better method existed to keep them up-to-date with these clinical guidelines—after all, both guidelines are now 3 years old—and, more important, to help them decide what to do in these complex, conflicting situations.

Unaware of any conflict, Mr. Fitzpatrick arrives the next morning for his fasting blood test.

Commentary 1

by Kenneth W. Goodman, PhD

There are few greater challenges than applying probabilistic data when the stakes are very high. Get it right and no one notices; get it wrong, and, it seems, everyone does. This has been true for millennia. It was true for the physicians of ancient Greece, and it is true for the most ably educated, experienced, and wired physicians of the 21st century. There is no escape from the need to make a judgment, a decision, under some amount of uncertainty.

Not all decisions raise ethical issues. High stakes at the racetrack matter much less than high stakes in the clinic or surgical suite. Because we value life and its handmaiden, health, there is an unequivocal moral duty to base medical decisions on the best available evidence. Surely this is uncontroversial; indeed, it would be perverse to suggest otherwise.

That was the easy part. Matters become difficult when we make 3 simple but crucial observations. First, there is quite a lot of evidence (and there has been for some time); second, some of this evidence is not very good; and, third, reasonable people often disagree about how best to distinguish the strong or good evidence from the other stuff.

One universally acknowledged source of good evidence is the randomized controlled trial, widely regarded as the gold standard for evidence to be applied by physicians and other health professionals. Such experiments are valuable precisely because of their design, which, when applied carefully, reduces confounding effects, identifies statistically significant correlations, and increases confidence in the reliability of findings. While the clinical trial as we know it is just over 50 years old, by the 1970s it was seen as so reliable a source of biomedical knowledge that it seemed scandalous that more physicians did not know about the studies that bore directly on the clinical challenges they faced. The British epidemiologist Archie Cochrane gave us the leitmotif that has accompanied the recent evolution of evidence-based medicine:

It is surely a great criticism of our profession that we have not organised a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomised controlled trials [1].

This is, at bottom, the very point of clinical practice guidelines.

Our work would be done—the problem solved—if biomedical science, human biology, and the other organisms, toxins, and sharp objects that cause it to come to grief were simple. None of this is simple. Randomized studies of the same phenomenon sometimes conflict, and their data are not always analyzed correctly or reported with adequate rigor. (The gold standard sometimes needs polishing.) Moreover, clinical trials often reveal subgroup variation or point to our ignorance about groups that were not included in the research in sufficient numbers. Research reduces uncertainty; it rarely eliminates it.

It should therefore not surprise us that practice guidelines are themselves works in progress. It would be nothing less than extraordinary if a guideline got it right for all patients and for all time. Why ever would we presume that the moral imperative to do systematic research and make study results easily available would eliminate a feature of all scientific inquiry, namely, that knowledge rarely accretes in a straight line, or that empirical "closure" is once and for all?

That Drs Anderson and Monde disagree over when to commence cholesterol screening may be taken to represent a lack of an unequivocal standard of care, a case of scientific uncertainty, and a challenge to individual physicians who want to "follow the evidence" but find out that matters are not as tidy as they would like. It would be a mistake, however, to believe that this is a new problem for clinicians, or that there is something about taking Professor Cochrane's advice that makes their lives more difficult than before the rise of evidence-based medicine. Medical science has always been probabilistic—and doctors have always sought ways to reduce the consequent uncertainty; what is new is that we now have a means for laying bare this uncertainty and for empowering physicians to apply reasoned judgment to sort out the conflicts [2].

While society, through governments and professional organizations, has a duty to synthesize the best evidence into easily accessible guidelines, this does not relieve clinicians of the duty to know what the guidelines recommend and how different guidelines might make different recommendations. This points to a further moral imperative—seek out continuing medical education. To suggest that conflict among guidelines is reason enough to ignore them all is a little like saying that differences among teams are a reason not to go to the game.

Moreover, the difficult task of managing this uncertainty is only in part a problem of deploying evidence in patient care. Just as unresolved differential diagnoses challenge physicians both cognitively and in communicating with patients about risks, benefits, and alternatives, conflicts among guidelines are a "physician-only" problem solely under paternalistic models of the doctor-patient relationship. That is, uncertainty is a shared or collective problem, and astute physicians know that shared decision making is the correct response. In the case of when to begin Mr. Fitzpatrick's lipid screening, Mr. Fitzpatrick himself needs to be in the loop of uncertainty: he needs to know that guidelines differ; he needs to know the risks (if they be such) of tardy screening and the waste (if it be that) of premature screening; and he needs to be asked about his preferences and his tolerance for risk.

This is partly what is meant by "the informed consent process." It is an ethical standard that helps resolve the clinical challenge of making a decision under uncertainty. Why ever should a clinician alone shoulder the burden of uncertainty when the rules for consent make clear that patients need to be able to weigh the risks and benefits of their treatments? In fact, sound evidence-based practice needs to include patient values at the outset:

Evidence-based medicine…is the integration of best research evidence with clinical expertise and patient values…. By patient values we mean the unique preferences, concerns and expectations each patient brings to a clinical encounter and which must be integrated into clinical decisions if they are to serve the patient [3].

Here, too, we are not eliminating uncertainty, but managing it. In interventions ranging from hangnail-ectomies to neurosurgery, the question, "What are the risks?" has always been answered with a list of probabilities spanning the spectrum from just more than 0 to just less than 1. The incorporation of practice guidelines, including conflicting practice guidelines, into clinical decision making surely cannot be insulated from the kinds of uncertainty that already shape clinical practice—and have shaped it since the beginning.

Moreover, the evidence-based project has itself evolved to include mechanisms for self-correction. For instance, concerns about the quality and reliability of published reports of randomized controlled trials have led to the CONSORT (Consolidated Standards of Reporting Trials) statement, developed by a group of leaders in medical publishing, clinical trial research, and other fields [4]. CONSORT should be seen as an uncertainty-reducing effort; it is thereby also a confidence-increasing initiative. Critical evaluation of medical journal reports remains a duty for clinicians, but efforts such as CONSORT can ease the burdens of meeting that obligation.

Mr. Fitzpatrick is in good hands. He would be somewhat better off, however, if his doctor were more attuned both to his preferences and to the reasons for guideline variation, and less inclined to dismay at the warp and woof of the process by which medical knowledge accumulates.



References

  1. Cochrane AL. 1931-1971: A critical review, with particular reference to the medical profession. In: Medicines for the Year 2000. London: Office of Health Economics; 1979:1-11.
  2. Goodman KW. Ethics and Evidence-Based Medicine: Fallibility and Responsibility in Clinical Science. Cambridge: Cambridge University Press; 2003.
  3. Sackett DL, Straus SE, Richardson WS, et al. Evidence-Based Medicine: How to Practice and Teach EBM. 2nd ed. Edinburgh: Churchill Livingstone; 2000:1.
  4. Moher D, Schulz KF, Altman D, for the CONSORT Group. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA. 2001;285:1987-1991.

Kenneth W. Goodman, PhD, is director of the University of Miami Bioethics Program.

Commentary 2

by Eta S. Berner, EdD

Clinical practice guidelines can be thought of as systems that support clinical decision making. The issues raised in this case by the use of traditional clinical guidelines could apply as well to the use of other, more novel clinical decision support systems (CDSS), such as computerized tools designed to assist clinician decision making. CDSS usually link patient-specific information with a knowledge base to provide alerts for dangerous orders, suggest diagnoses, recommend laboratory tests, or critique treatment plans. Some of these systems are integrated into an electronic medical record, and others are stand-alone systems that can be consulted by the physician.
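To make this linkage concrete, the following is a minimal sketch, in Python, of the rule-based pattern just described: patient-specific data are matched against a small knowledge base of rules, each of which may emit an alert. All of the names here (Patient, drug_allergy_rule, run_cdss) are invented for illustration and are not drawn from any actual CDSS product.

    # Illustrative sketch of a rule-based CDSS; not a real implementation.
    from dataclasses import dataclass, field
    from typing import Callable, List, Optional

    @dataclass
    class Patient:
        """A simplified patient record: the 'patient-specific information'."""
        age: int
        sex: str
        allergies: List[str] = field(default_factory=list)
        orders: List[str] = field(default_factory=list)

    # The 'knowledge base': each rule inspects the record and may return an alert.
    Rule = Callable[[Patient], Optional[str]]

    def drug_allergy_rule(patient: Patient) -> Optional[str]:
        """Flag any ordered drug that appears on the patient's allergy list."""
        conflicts = [drug for drug in patient.orders if drug in patient.allergies]
        if conflicts:
            return "Allergy alert: patient is allergic to " + ", ".join(conflicts)
        return None

    def run_cdss(patient: Patient, rules: List[Rule]) -> List[str]:
        """Apply every rule in the knowledge base; collect the alerts that fire."""
        return [alert for rule in rules if (alert := rule(patient)) is not None]

    patient = Patient(age=31, sex="M", allergies=["penicillin"], orders=["penicillin"])
    for alert in run_cdss(patient, [drug_allergy_rule]):
        print(alert)  # Allergy alert: patient is allergic to penicillin

Stand-alone systems and systems integrated into an electronic medical record differ enormously in scale, but the core cycle is the same: match patient data against a knowledge base and surface whatever fires.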

In this particular case, Dr. Anderson made the decision to have Mr. Fitzpatrick return for a fasting cholesterol test based on his recollection of a particular guideline. Should he have relied as he did on his "unaided memory" when there was an information source available? Does the physician have an ethical obligation to the patient to use such systems, especially if they could be accessed conveniently at the point of care? Conversely, are there risks to the patient in using these systems, especially, as in the example in the case, when there may be conflicting information about the appropriate choice of action?

Rationale for Using CDSS

Reviewing the guidelines after the patient had left the office, as Dr. Anderson did, was more effective than surfing the Web while the patient was in the office would have been. A good CDSS, however, might be able to display a synthesis of evidence in a format that would make it easier to apply, even in the short time available for an office visit. PDAs are gaining popularity among physicians and are easily used in an outpatient setting to access drug databases, clinical prediction rules, clinical guidelines, and other forms of decision support. While there is clear evidence that CDSS can reduce medical errors [1], at least in an inpatient setting, there are also studies showing that CDSS advice is frequently ignored. Eccles and his colleagues in England integrated evidence-based guidelines into physician office computer systems [2]. Their research showed no change in the quality of care after this effort, but they also found that the guidelines were rarely read, much less actively used. Hsieh et al studied the use of drug-allergy interaction alerts and found that 80 percent were overridden by the clinicians who received them [3].

Part of the reluctance to use available systems may lie in the dilemma faced by the physicians in this case—that the evidence is conflicting or that the unique clinical situation justifies overriding a generic guideline. But there are other reasons the advice is not followed—reasons that may not be as clinically or ethically justifiable. Tamblyn et al found that physicians were reluctant to follow CDSS advice when it recommended changing a medication prescribed by another physician, but they were more comfortable changing their own treatment plans on that advice [4]. As the evidence base for clinical decision making improves, and as the technology to make that evidence easily accessible during the clinical encounter becomes available, the standard of care is likely to come to require use of these systems. There is even a legal precedent in maritime law relating to the use of technology that, if found applicable to health care, could hold physicians liable for failure to use available technology, even if such use is not yet part of the standard of care [5].

Risks of Using CDSS

The evidence for the benefits of CDSS in reducing harm to patients has led the Leapfrog Group to make clinical decision support in physician order entry systems a cornerstone of its patient safety agenda [6]. Yet there are also risks in using this technology. If Drs Anderson and Monde decided to use automated guidelines, which guidelines should they use? When guidelines are fully or mostly integrated into an order entry system, the source of the recommendations may not be known to the user. The invisibility of the source may be particularly problematic when the data conflict, as in this case.
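To see why the source matters, consider a toy Python encoding (with invented function names) of the 2 screening rules described in the case: the 2001 NCEP guideline, which recommends fasting lipid screening every 5 years beginning at age 20, and the 2001 USPSTF guideline, which defers screening for patients without major risk factors until age 35 for men and 45 for women. This is a deliberate oversimplification of both guidelines, intended only to show how 2 embedded knowledge bases can return opposite recommendations for the same patient.

    # Toy rules; both guidelines are far richer than these one-line predicates.
    def ncep_recommends_screening(age: int, sex: str, has_risk_factors: bool) -> bool:
        # NCEP (2001): screen all adults beginning at age 20.
        return age >= 20

    def uspstf_recommends_screening(age: int, sex: str, has_risk_factors: bool) -> bool:
        # USPSTF (2001): without major risk factors, defer to 35 (men) or 45 (women).
        if has_risk_factors:
            return age >= 20
        return age >= (35 if sex == "M" else 45)

    # Mr. Fitzpatrick: a 31-year-old man with no major risk factors.
    print(ncep_recommends_screening(31, "M", False))    # True
    print(uspstf_recommends_screening(31, "M", False))  # False

If an order entry system silently embeds one of these rule sets, its suggestions will follow that body's recommendation, and neither Dr. Anderson nor Dr. Monde will know which organization they are, in effect, deferring to.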

Knowledge base maintenance may also be a problem in at least 2 respects. First, in this case, some of the guidelines were up to 4 years old, and it was not always clear whether a guideline had not been updated because the evidence behind it had not changed (and hence the guideline was still valid) or because the evidence had changed and the guideline had not been revised to reflect it. Second, while accurate physician documentation is an ethical and legal imperative, documentation in clinical charts is often far from complete and may not be accurate. When this incomplete information forms the input for clinical decision support systems, even a well-validated system may not produce accurate results, potentially putting patients at risk for harm. Hsieh et al found that many clinician overrides of CDSS drug-allergy alerts were attributable to inaccurate allergy lists in the paper chart as well as in the automated system [3]. A growing concern is over-reliance on a faulty system: the clinician may fail to recognize when the system is incorrect. Tsai et al found that residents interpreting electrocardiogram (ECG) data performed worse when an automated decision support system offered an incorrect interpretation than they did unaided [7]. Because, legally and ethically, the physician—and not the computer—is still the decision maker, clinicians must be able to identify appropriate sources of clinical decision support.

Guidelines for Use of CDSS

The Health on the Net Foundation has developed a code of ethical practices for Web sites that provide health information [8]. Many of these guidelines apply to clinical decision support systems as well: for example, the knowledge base should be credible, based on best evidence, and updated regularly, and the source of the data should be clear to the user. To prevent users from relying on obsolete information, at least one of the drug databases available for PDAs provides regular updates and disables itself if the user does not access them. Systems should also be well validated; the American Medical Informatics Association and other organizations have advocated local software review committees to monitor systems within each institution and assure that they perform properly in the local environment [9]. There should be a mechanism for physicians to override the system, but they should also be required to document the reason for the override so that the reasons can be reviewed, discussed if necessary, and ultimately used to improve the system itself.
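The last of these practices, documenting the reason for every override, is straightforward to sketch. The following hypothetical Python fragment (the names are invented) records who overrode which alert, why, and when, and refuses an override that carries no reason; the resulting log is exactly the raw material for the review and improvement described above.

    # Hypothetical override log; illustrative only.
    import datetime
    from typing import Dict, List

    override_log: List[Dict[str, str]] = []

    def override_alert(alert: str, clinician: str, reason: str) -> None:
        """Dismiss an alert, but only with a documented reason, kept for review."""
        if not reason.strip():
            raise ValueError("An override must include a documented reason.")
        override_log.append({
            "alert": alert,
            "clinician": clinician,
            "reason": reason,
            "timestamp": datetime.datetime.now().isoformat(),
        })

    override_alert(
        "Allergy alert: patient is allergic to penicillin",
        "Dr. Anderson",
        "Allergy list is outdated; patient tolerated penicillin in 2003.",
    )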

Finally, the CDSS must be integrated into the workflow, or even the best system will not get used. In this case, the Web site that compared the guidelines was useful for the doctors to study, but it was probably too cumbersome, with too much information, to consult at the point of decision making. CDSS fall out of routine use when entering data or reading and digesting their output takes too much time, and frequent alerts issued for inappropriate reasons can cause even the important suggestions to be ignored.

In summary, Drs Anderson and Monde can be optimistic that there are technologies that can help keep them up-to-date, but they will need to be aware of the strengths and limitations of these systems to properly avail themselves of these potentially life-saving sources of information.


References

  1. Bates DW, Gawande AA. Improving safety with information technology. N Engl J Med. 2003;348:2526-2534.
  2. Eccles M, McColl E, Steen N, et al. Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ. 2002;325:941-944.
  3. Hsieh TC, Kuperman GJ, Jaggi T, et al. Characteristics and consequences of drug-allergy alert overrides in a computerized physician order entry system. J Am Med Inform Assoc. 2004;11:482-491.
  4. Tamblyn R, Huang A, Perreault R, et al. The medical office of the 21st century (MOXXI): effectiveness of computerized decision-making support in reducing inappropriate prescribing in primary care. CMAJ. 2003;169:549-556.
  5. The T.J. Hooper, 60 F2d 737 (2d Cir 1932).
  6. The Leapfrog Group. Leapfrog Fact Sheet. Available at: http://www.leapfroggroup.org/about_us/leapfrog-factsheet. Accessed October 18, 2004.
  7. Tsai TL, Fridsma DB, Gatti G. Computer decision support as a source of interpretation error: the case of electrocardiograms. J Am Med Inform Assoc. 2003;10:478-483.
  8. Health on the Net. HON Code of Conduct (HONcode) for medical and health Web sites. Available at: http://www.hon.ch/HONcode/Conduct.html. Accessed October 18, 2004.
  9. Miller RA, Gardner RM. Summary recommendations for responsible monitoring and regulation of clinical software systems. Ann Intern Med. 1997;127:842-845.

Eta S. Berner, EdD, is professor in the Health Informatics Program in the Department of Health Services Administration in the School of Health Related Professions at the University of Alabama at Birmingham. She is a former vice chair of the AMIA Ethical, Legal and Social Issues Working Group.

The people and events in this case are fictional. Resemblance to real events or to names of people, living or dead, is entirely coincidental. The viewpoints expressed on this site are those of the authors and do not necessarily reflect the views and policies of the AMA.