Virtual Mentor. April 2009, Volume 11, Number 4: 301-305.

Clinical Pearl

Shared Decision Making Requires Statistical Literacy
Chandra Y. Osborn, PhD, MPH

Physicians have an ethical responsibility to be functionally literate in health statistics and able to explain information such as a test's positive predictive value to their patients.

The movement toward evidence-based medicine has emphasized the integration of clinical expertise, patient values, and the best evidence (clinical research based on sound methodology) in the decision-making process for patient care [1, 2]. Identifying the best evidence requires physicians to have new skills, including the ability to search the literature efficiently, apply formal rules to evaluate research, and understand health statistics.

Learning Objective: Understand why physicians have an ethical responsibility to be functionally literate in health statistics and able to explain information such as a test's positive predictive value to their patients.
Gigerenzer et al. have coined the term "statistical illiteracy" to describe the widespread difficulty in understanding, interpreting, and communicating health statistics [1]. Shared decision making is a cornerstone of evidence-based medicine that requires a level of statistical literacy on the part of physicians, who have an increased responsibility to communicate numerical information effectively to patients. Let's take prostate cancer as a case in point. Prostate cancer is the most common cancer in American men, with an estimated 186,320 new cases and 28,660 deaths in 2008 [3]. About 1 man in 6 will be diagnosed with prostate cancer during his lifetime, but only 1 in 35 will die from the disease [3]. Screening for prostate cancer remains controversial, due to insufficient evidence to recommend or oppose screening [4, 5]. Although many medical and professional organizations agree that patients should be involved in the decision to undergo screening, studies show that, prior to screening, physicians often give patients little or no information about the test and its implications [2, 3, 5-12]. One reason for this is that few physicians are prepared to explain the test's positive predictive value to patients. A panel of national experts and patients has developed a list of 10 facts men should know before giving consent to PSA screening [13]. One of these facts is that false-positive PSA results can occur (when the PSA level is elevated, but there is no cancer). Sheridan et al. found that 24 percent of patients were unaware of the potential for inaccurate test results [14]. Prior to engaging patients in a shared decision-making discussion, urologists should know a man's chance of actually having prostate cancer if he tests positive on a PSA test. Although one might assume that every physician knows the answer, Hoffrage et al. suggest that many experts, including physicians, have difficulty making sense of health statistics [15].
Faculty, staff, and students at Harvard Medical School were asked to estimate the probability of a disease given the following information: if a test to detect a disease whose prevalence is 1/1,000 has a false-positive rate of 5 percent, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person’s symptoms or signs [15, 16]? The estimates varied wildly, ranging from the most frequent estimate, 95 percent (given by 27 out of 60 participants), to the correct answer, 2 percent (given by 11 out of 60 participants) [15, 16]. A separate study showed that physicians confuse the sensitivity of a test (the proportion of positive test results among individuals with the disease) with its positive predictive value (the proportion of individuals with the disease among those who receive a positive test result) [15]. Gigerenzer et al. illustrate the widespread problem of statistical illiteracy using various examples, one of which has been modified here [1]. Assume you want to perform a PSA screening test on a patient who lives in a specific region of the country. You know the following information about men in this region:
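The Harvard question can be checked directly by counting. As the classic problem is usually posed, the test is implicitly assumed to catch every true case (perfect sensitivity), which is how the 2 percent answer is reached; that assumption, and the variable names below, are illustrative rather than from the article. A minimal sketch in Python:

```python
# Natural-frequency check of the Harvard Medical School question:
# prevalence 1/1,000, false-positive rate 5 percent.
# Assumption (not stated in the problem): the test detects every true case.

population = 1000              # imagine 1,000 people
sick = 1                       # prevalence 1/1,000 -> 1 person has the disease
healthy = population - sick    # 999 people do not

true_positives = sick * 1.0           # assumed perfect sensitivity
false_positives = healthy * 0.05      # 5% of 999 healthy people test positive anyway

ppv = true_positives / (true_positives + false_positives)
print(f"{ppv:.1%}")            # roughly 2 percent: the correct answer
```

Of roughly 51 positive results, only 1 reflects actual disease, so the widely chosen answer of 95 percent overstates the risk by a factor of nearly 50.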
During the pre-screening discussion with this patient, he asks you what the chances are of having prostate cancer if the test comes back positive. What is the best answer?

A. The probability that he has prostate cancer is about 81 percent.
B. Out of 10 men with a positive PSA test, about 9 have prostate cancer.
C. Out of 10 men with a positive PSA test, about 1 has prostate cancer.
D. The probability that he has prostate cancer is about 1 percent.

The best answer is "C": one out of every 10 men who test positive in screening actually has prostate cancer. The other nine are false alarms [1]. The answer can be derived from the health statistics provided. Health statistics are commonly framed in a way that tends to cloud people's minds [1]. The information is presented in terms of conditional probabilities, which include the sensitivity and the false-positive rate (or 1 − specificity) [1]. Presenting the information in terms of natural frequencies can foster greater insight [1, 15, 17, 18]. Here, following Gigerenzer et al., is the same information from the above problem translated into natural frequencies [1]. Assume you want to perform a PSA screening test on a patient who lives in a particular area of the country. You know the following information about men in this region:
How can this simple change in representation turn innumeracy into insight? Natural frequencies facilitate computation and represent the way humans encode information [1, 16]. Unlike relative frequencies and conditional probabilities, they are simple counts that are not normalized with respect to base rates [17, 19]. A fundamental problem in health care is that many physicians do not know the probability that a person has a disease given a positive screening test—that is, the positive predictive value [1]. Nor are they able to estimate it from the relevant health statistics when they are framed in terms of conditional probabilities, even when the test is in their area of specialty [18]. Careful training on how to translate probabilities into natural frequencies is needed [15]. The following four steps have been proposed [15]:
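The translation from conditional probabilities to natural frequencies can be sketched generically: pick a reference population, apply the base rate to get a count of affected people, apply the sensitivity and false-positive rate to get counts of true and false positives, and compare the true positives with all positives. The function below is a hypothetical illustration of that recipe (the name and signature are not from the article), shown here with the Harvard example's numbers and its implicit assumption of perfect sensitivity:

```python
def natural_frequencies(prevalence, sensitivity, fp_rate, population=1000):
    """Translate conditional probabilities into natural-frequency counts.

    Illustrative sketch of the probability-to-frequency translation;
    returns (true positives, false positives, positive predictive value).
    """
    sick = prevalence * population                 # base rate as a count of people
    true_pos = sick * sensitivity                  # how many sick people test positive
    false_pos = (population - sick) * fp_rate      # how many healthy people test positive
    ppv = true_pos / (true_pos + false_pos)        # positive predictive value
    return round(true_pos), round(false_pos), ppv

# Harvard example: prevalence 1/1,000, assumed perfect sensitivity, 5% false positives
tp, fp, ppv = natural_frequencies(1 / 1000, 1.0, 0.05)
```

Expressed this way, the answer is read off the counts (1 true positive among about 51 positives) rather than derived from Bayes' rule, which is precisely why the natural-frequency format fosters insight.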
Conclusion

Framing information in a way that is most readily understood by the human mind is the first step toward educating doctors, and ultimately patients, in risk literacy [1]. Prior to PSA screening, patients should know the risks and benefits associated with the test, and the implications of a positive result. Physicians, in turn, have an ethical responsibility to be functionally literate in health statistics when delivering that information to patients. Given that false-positive test results have been linked to increased cancer-related worry and problems with sexual function, effective discussion about inaccurate test results is needed prior to screening [20].
References
Chandra Y. Osborn, PhD, MPH, is an assistant professor of medicine and a research investigator at the Center for Health Services Research in the Division of General Internal Medicine & Public Health and the Eskind Diabetes Center at Vanderbilt University in Nashville, Tennessee. Dr. Osborn received an MA and PhD in social/health psychology and a graduate certificate in quantitative research methods from the University of Connecticut. She was a health-services research fellow at Northwestern University Feinberg School of Medicine, where she also completed a master's degree in public health. Her research focuses on understanding population-specific determinants of health-behavior change and on testing frameworks to inform the design and content of tailored health-promotion interventions for chronically ill patients.
The viewpoints expressed on this site are those of the authors and do not necessarily reflect the views and policies of the AMA.
© 2009 American Medical Association. All Rights Reserved.