State of the Art and Science
Apr 2009

Shared Decision Making Requires Statistical Literacy

Chandra Y. Osborn, PhD, MPH
Virtual Mentor. 2009;11(4):301-305. doi: 10.1001/virtualmentor.2009.11.4.cprl1-0904.


The movement toward evidence-based medicine has emphasized the integration of clinical expertise, patient values, and the best evidence (clinical research based on sound methodology) in the decision-making process for patient care [1, 2]. Identifying the best evidence requires physicians to have new skills, including the ability to search the literature efficiently, apply formal rules to evaluate research, and understand health statistics.

Gigerenzer et al. have coined the term “statistical illiteracy” to describe the widespread difficulty in understanding, interpreting, and communicating health statistics [1]. Shared decision making is a cornerstone of evidence-based medicine that requires statistical literacy on the part of physicians, who bear an increased responsibility to communicate numerical information effectively to patients. An example makes this clear; consider prostate cancer.

Prostate cancer is the most common cancer in American men, with an estimated 186,320 new cases and 28,660 deaths in 2008 [3]. About 1 man in 6 will be diagnosed with prostate cancer during his lifetime, but only 1 in 35 will die from the disease [3]. Screening for prostate cancer remains controversial because the evidence is insufficient to recommend or oppose it [4, 5]. Although many medical and professional organizations agree that patients should be involved in the decision to undergo screening, studies show that, prior to screening, physicians often give patients little or no information about the test and its implications [2, 3, 5-12]. One reason may be that few physicians are prepared to explain the test’s positive predictive value to patients.

A panel of national experts and patients has developed a list of 10 facts men should know before giving consent to PSA screening [13]. One of these facts is that false-positive PSA results can occur (the PSA level is elevated, but no cancer is present). Sheridan et al. found that 24 percent of patients were unaware of the potential for inaccurate test results [14]. Before engaging patients in a shared decision-making discussion, urologists should know a man’s chance of actually having prostate cancer if his PSA test is positive.

Although one might assume that every physician knows the answer, Hoffrage et al. suggest that many experts, including physicians, have difficulty making sense of health statistics [15]. Faculty, staff, and students at Harvard Medical School were asked to estimate the probability of a disease given the following information: if a test to detect a disease whose prevalence is 1/1,000 has a false-positive rate of 5 percent, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person’s symptoms or signs [15, 16]? The estimates varied widely: the most frequent answer was 95 percent (given by 27 of 60 participants), while only 11 of 60 gave the correct answer of 2 percent [15, 16]. A separate study showed that physicians confuse the sensitivity of a test (the proportion of positive test results among individuals with the disease) with its positive predictive value (the proportion of individuals with the disease among those who receive a positive test result) [15].
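
For readers who want to verify the correct answer, recasting the problem in counts makes it transparent (the problem leaves the test’s sensitivity unstated; assume it is close to 100 percent): out of 1,000 people, 1 has the disease and tests positive, while about 50 of the remaining 999 healthy people also test positive (5 percent of 999). The chance that a positive result reflects actual disease is therefore

    1 / (1 + 50) ≈ 0.02, or about 2 percent.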

Gigerenzer et al. illustrate the widespread problem of statistical illiteracy using various examples, one of which has been modified here [1]. Assume you want to perform a PSA screening test on a patient who lives in a specific region of the country. You know the following information about men in this region:

  • The probability that a man has prostate cancer is 1 percent (prevalence).
  • If a man has prostate cancer, the probability that he tests positive is 90 percent (sensitivity).
  • If a man does not have prostate cancer, the probability that he nevertheless tests positive is 9 percent (false-positive rate).

During the pre-screening discussion, the patient asks what his chances are of having prostate cancer if the test comes back positive. What is the best answer?

A. The probability that he has prostate cancer is about 81 percent.

B. Out of 10 men with a positive PSA test, about 9 have prostate cancer.

C. Out of 10 men with a positive PSA test, about 1 has prostate cancer.

D. The probability that he has prostate cancer is about 1 percent.

The best answer is C: out of every 10 men who test positive in screening, about 1 actually has prostate cancer; the other 9 are false alarms [1]. The answer can be derived from the health statistics provided, as shown below.
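
For readers who prefer the algebra, the derivation below applies Bayes’ rule to the three figures given above (the worked derivation is added here for illustration and does not appear in the original quiz):

    Pr(cancer | positive test) = (0.01 × 0.90) / (0.01 × 0.90 + 0.99 × 0.09)
                               = 0.009 / 0.0981
                               ≈ 0.09

That is, roughly 1 man in 10 with a positive test actually has prostate cancer, which is answer C.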

Health statistics are commonly framed in a way that tends to cloud people’s minds [1]. The information is presented in terms of conditional probabilities, which include the sensitivity and the false-positive rate (1 − specificity) [1]. Presenting the information in terms of natural frequencies can foster greater insight [1, 15, 17, 18]. Here, following Gigerenzer et al., is the same information from the above problem translated into natural frequencies [1]. Assume you want to perform a PSA screening test on a patient who lives in a particular area of the country. You know the following information about men in this region:

  • Ten out of every 1,000 men have prostate cancer.
  • Of these 10 men with prostate cancer, 9 test positive.
  • Of the 990 men without prostate cancer, about 89 nevertheless test positive.
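
With the counts laid out this way, the answer can be read off directly: 9 + 89 = 98 men test positive, and only 9 of them have prostate cancer, so the patient’s chance is 9/98, or about 1 in 10.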

How can this simple change in representation turn innumeracy into insight? Natural frequencies facilitate computation and represent the way humans encode information [1, 16]. Unlike relative frequencies and conditional probabilities, they are simple counts that are not normalized with respect to base rates [17, 19].

A fundamental problem in health care is that many physicians do not know the probability that a person has a disease given a positive screening test—that is, the positive predictive value [1]. Nor are they able to estimate it from the relevant health statistics when these are framed in terms of conditional probabilities, even when the test is in their own specialty [18]. Careful training in how to translate probabilities into natural frequencies is needed [15]. The following four steps have been proposed (a worked sketch follows the list) [15]:

  1. Select a population and use the base rate to determine how many individuals in the population have the disease.
  2. Take that result and use the test’s sensitivity to determine how many individuals have the disease and test positive.
  3. Take the remaining number of healthy individuals and use the test’s false-positive rate to determine how many individuals do not have the disease but still test positive.
  4. Compare the number obtained in step 2 with the sum of those obtained in steps 2 and 3 to determine how many individuals with a positive test actually have the disease.
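
To make the four steps concrete, here is a minimal Python sketch, applied to the PSA figures used earlier. It is an illustration added to this discussion, not code from the cited sources, and the function name is invented:

    def positive_predictive_value(population, prevalence, sensitivity, false_positive_rate):
        # Step 1: use the base rate to count individuals who have the disease
        with_disease = population * prevalence
        # Step 2: use the sensitivity to count those who have the disease and test positive
        true_positives = with_disease * sensitivity
        # Step 3: use the false-positive rate to count healthy individuals who test positive
        false_positives = (population - with_disease) * false_positive_rate
        # Step 4: individuals with the disease as a share of all who test positive
        return true_positives / (true_positives + false_positives)

    # PSA example from the text: prevalence 1 percent, sensitivity 90 percent,
    # false-positive rate 9 percent
    print(positive_predictive_value(1000, 0.01, 0.90, 0.09))  # prints ~0.0917, about 1 in 10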

Conclusion

Framing information in a way that is most readily understood by the human mind is the first step toward educating doctors, and ultimately patients, in risk literacy [1]. Prior to PSA screening, patients should know the risks and benefits associated with the test and the implications of a positive result. Physicians, in turn, have an ethical responsibility to be functionally literate in health statistics when delivering that information to patients. Given that false-positive results have been linked to increased cancer-related worry and problems with sexual function, an effective discussion of the possibility of inaccurate results is needed before screening [20].

References

  1. Gigerenzer G, Gaissmaier W, Kurz-Milcke E, Schwartz LM, Woloshin S. Helping doctors and patients make sense of health statistics. Psychol Sci Public Interest. 2008;8(2):53-96.
  2. Sackett DL, Haynes RB. The architecture of diagnostic research. BMJ. 2002;324(7336):539-541.
  3. American Cancer Society. American Cancer Society guidelines for the early detection of cancer. 2008. http://www.cancer.org/docroot/ped/content/ped_2_3x_acs_cancer_detection_guidelines_36.asp. Accessed March 4, 2009.
  4. Barry MJ. Clinical practice. Prostate-specific-antigen testing for early diagnosis of prostate cancer. N Engl J Med. 2001;344(18):1373-1377.
  5. Burack RC, Wood DP Jr. Screening for prostate cancer. The challenge of promoting informed decision making in the absence of definitive evidence of effectiveness. Med Clin North Am. 1999;83(6):1423-1442, vi.
  6. Hoffman RM, Helitzer DL. Moving towards shared decision making in prostate cancer screening. J Gen Intern Med. 2007;22(7):1056-1057.
  7. Katz A, Sisler JJ. What’s a doctor to do? Helping patients decide about prostate cancer screening. Can Fam Physician. 2004;50:14-16, 22-24.
  8. Purvis Cooper C, Merritt TL, Ross LE, John LV, Jorgensen CM. To screen or not to screen, when clinical guidelines disagree: primary care physicians’ use of the PSA test. Prev Med. 2004;38(2):182-191.
  9. Dunn AS, Shridharani KV, Lou W, Bernstein J, Horowitz CR. Physician-patient discussions of controversial cancer screening tests. Am J Prev Med. 2001;20(2):130-134.
  10. Diefenbach PN, Ganz PA, Pawlow AJ, Guthrie D. Screening by the prostate-specific antigen test: what do the patients know? J Cancer Educ. 1996;11(1):39-44.
  11. Federman DG, Goyal S, Kamina A, Peduzzi P, Concato J. Informed consent for PSA screening: does it happen? Eff Clin Pract. 1999;2(4):152-157.
  12. Gattellari M, Ward JE. Men’s reactions to disclosed and undisclosed opportunistic PSA screening for prostate cancer. Med J Aust. 2005;182(8):386-389.
  13. Chan EC, Sulmasy DP. What should men know about prostate-specific antigen screening before giving informed consent? Am J Med. 1998;105(4):266-274.
  14. Sheridan SL, Felix K, Pignone MP, Lewis CL. Information needs of men regarding prostate cancer screening and the effect of a brief decision aid. Patient Educ Couns. 2004;54(3):345-351.
  15. Hoffrage U, Lindsey S, Hertwig R, Gigerenzer G. Communicating statistical information. Science. 2000;290(5500):2261-2262.
  16. Casscells W, Schoenberger A, Graboys TB. Interpretation by physicians of clinical laboratory results. N Engl J Med. 1978;299(18):999-1001.
  17. Gigerenzer G, Hoffrage U. Overcoming difficulties in Bayesian reasoning: a reply to Lewis and Keren (1999) and Mellers and McGraw (1999). Psychol Rev. 1999;106(2):425-430.
  18. Hoffrage U, Gigerenzer G. Using natural frequencies to improve diagnostic inferences. Acad Med. 1998;73(5):538-540.
  19. Gigerenzer G, Hoffrage U. How to improve Bayesian reasoning without instruction: frequency formats. Psychol Rev. 1995;102(4):684-704.
  20. Katz DA, Jarrard DF, McHorney CA, Hillis SL, Wiebe DA, Fryback DG. Health perceptions in patients who undergo screening and workup for prostate cancer. Urology. 2007;69(2):215-220.

The viewpoints expressed in this article are those of the author(s) and do not necessarily reflect the views and policies of the AMA.