Dr. Andreas nervously tapped her foot as she presented to her attentive specialty society audience. A practicing ob-gyn herself, Dr. Andreas was a doctoral student in health policy at an academic health center who, for the past several years, had been working on a study comparing vaginal births with c-sections in two neighboring towns. Now she was presenting the results of her soon-to-be-published work to the physicians whose patients had made up the study groups, and she was not sure how they would receive the implications of her findings.
“According to the data,” Dr. Andreas concluded, “there are no statistically significant medical, social, financial, or other demographic differences between the women in the two towns who took part in this study. Yet, after controlling for maternal age, parity, whether a c-section had preceded this pregnancy, and the clinical course of pregnancy and fetal development, women in the first town were three times more likely to undergo c-sections when delivering their children than women in the second town.”
“But then what explains the difference?” asked an audience member.
“Well, that’s the fascinating part,” said Dr. Andreas excitedly. “The difference must lie not in the patient characteristics or clinical indications for the sections, but somewhere else—perhaps in patient preferences, perhaps in physician training and choices.”
“It sounds like you’re questioning our judgment,” a physician from the first town interjected. “Why would we willingly expose our patients to a more invasive and risky treatment? There must be a difference between the two patient populations.”
“Data analysis finds no significant correlation between the incidence of c-section and any clinical or demogr—”
“Well then your analysis must be wrong,” someone interrupted.
Dr. Andreas was convinced this data had value for clinical decision making, not to mention for health care costs and policy, but she was not sure of the most effective way to present it to the physicians whom it affected.
We expect medical professionals to be empirically minded—their views shaped by evidence, not opinion. Clinical researchers are even supposed to approach experimental work with what ethicists call “equipoise”: genuine uncertainty about the outcome, testing the null hypothesis without bias toward either result.
But people are rarely totally open-minded or free of prior certainties. And being the bearer of bad tidings is never easy. People tend to be critical of results they don’t want to believe—while glossing over the most blatant lapses of rigor in studies that confirm their biases.
As Dr. Andreas found out, clinicians aren’t an exception to the tendency to see negative results as implied criticism. It’s best to go into any research or evaluative exercise with an eye to the worst-case scenario. Better to be overprepared and not need the precautions than to be ambushed as Dr. Andreas was.
Especially if you work in a controversial area, try to make sure you have colleagues the community trusts involved well before the end of the process. They will be invaluable if the going gets rough.
Presenting soon-to-be-published unwelcome findings to a specialty society on your own is walking into a lion’s den. If you find yourself in that situation, prepare your talk and any materials you bring with care. Because the work is about to appear in print, those implicated in your findings may well feel betrayed and cornered; it is better if they feel their concerns can still have an influence.
Go carefully through the methodology, make sure the audience knows what you have in common with them, and try to put yourself in their shoes. They have a lot at stake—not as much as the patients in their care, but a lot. You may be excited about your findings, but if the news is bad for your listeners, speaking excitedly about your data isn’t going to make them feel as though their reputations are in safe hands. Demonstrate your concern by choosing your words carefully with their sensitivities in mind.
I’ve upset a lot of people with the results of some of my research. And I suspect that, even if you’ve done everything right, there is still going to be serious rough and tumble. It can take us time to understand and come to terms with our own unexpected findings, and those whom our findings affect more directly will certainly need time for that. Understanding that, and exercising as much patience as you can muster, can help.
Convincing everyone isn’t generally a realistic goal when presenting findings others may not be happy to hear. Achievable goals for this kind of encounter may be to ensure some people really grasp the research, to gain at least one influential ally, and to keep communication channels open.