Viewpoint
Dec 2015

How Publish or Perish Promotes Inaccuracy in Science—and Journalism

Ivan Oransky, MD
AMA J Ethics. 2015;17(12):1172-1175. doi: 10.1001/journalofethics.2015.17.12.sect1-1512.


The brilliant website Kill or Cure? is a catalog of “the Daily Mail’s ongoing effort to classify every inanimate object into those that cause cancer and those that prevent it” [1]. Berries prevent cancer, biscuits cause it, and beer—well, beer causes it as well as prevents it, a conclusion that may drive some to drink. And those are just the kills or cures that start with “B.”

The hilarious website is, of course, a sendup of what has been referred to as “single study syndrome” [2], journalists’ penchant for overstating findings from medical studies. The weaknesses of this approach can range from applying conclusions from small populations to the world at large, to bestowing cause-effect status on observed correlations, to omitting the fact that a scientific “breakthrough” happened in mice, not humans. This all adds up to what Gary Schwitzer—founder of HealthNewsReview.org, which has rated the content of medical reporting for nearly a decade—has called “an unhealthy steady diet of news stories” [3]. And it happens in most news outlets, not just the Daily Mail.

That syndrome, however, is the natural sequela of academia’s “publish or perish” reward system, aided and abetted by journals’ use of embargoes to control the flow of scientific information. Researchers love to blame journalists for this mess, but journalists often turn the tables and blame scientists for being boosterish (or opaque). And there’s some evidence that medical journal press releases aren’t helping [4]. But to really understand how we arrived at the news environment we inhabit today, it’s necessary to look more deeply at how these problems began.

You’ve Been Ingelfingered

Newspapers have long covered science, but it wasn’t until the 1920s that scientific institutions began trying to nurture more interest in the subject by using embargoes. Journalists were given access to studies and announcements before they appeared in print, so that they could digest the material, report on it, and publish simultaneously on an agreed-upon date. The Journal of the American Medical Association (JAMA) may have been the first medical journal to embrace this embargo system in the middle of the last century, when then-editor Morris Fishbein, MD, let Associated Press reporter Howard Blakeslee read page proofs of upcoming issues at AMA headquarters. (For more on this history, see Vincent Kiernan’s Embargoed Science [5].)

As the media’s interest in science grew, alongside medical advances and the race to explore outer space in the 1960s, the editor of the New England Journal of Medicine (NEJM), Franz Ingelfinger, MD, became concerned that some researchers were sharing their data with the press before it had been peer-reviewed [6]. This was an understandable concern, since unringing the bell of a public frenzy for the latest professed cancer cure was nigh impossible.

So, in 1969, Ingelfinger wrote an editorial that contained what would eventually be called the Ingelfinger Rule. “The understanding is that material submitted to the Journal has not been offered to any book, journal or newspaper,” he wrote. “If an author willingly and actively has contributed the same material to any other publication—whether as text to a standard medical journal, or as a ‘letter to the editor,’ or as a feature in a lay magazine—that understanding has been disregarded” [7]. Although the rule has changed somewhat over time, the Ingelfinger Rule has been reaffirmed by editors at NEJM and other major medical journals [8].

An unintended consequence of the Ingelfinger Rule, however, has been to make some scientists afraid of speaking with reporters lest they risk losing the opportunity to publish in top journals [9]. Even journals’ attempts to clarify the Ingelfinger Rule—by explaining that typical scientific communication of unpublished findings (for example, at conferences) does not violate the rule, so long as researchers do not actively seek press attention [9]—do not completely mitigate this chilling effect. I’ve heard many stories about scientists presenting data in posters or talks at meetings—sometimes with immediate implications for public health or safety—only to beg reporters not to publish a story out of fear that no journals will accept their future submissions.

The fact is that publishing papers in peer-reviewed journals is about the only thing that matters to grant reviewers and tenure and promotion committees. And “publish or perish” is essentially true for journalists: when you have an editor breathing down your neck for the day’s—or in this day and age, the hour’s—story, you need to produce something quickly to earn those page views. It’d be helpful to pull together trends from meetings on emerging topics like new pandemics, or to pick the brains of researchers working in those areas to write a thoughtful, thoroughly reported piece that covers many aspects of a hot area, but too many scientists tend to clam up. So, instead, journalists wait for each study to be published and promoted in dozens of press releases, scan those that flood their email inboxes daily and find a nugget; then they might quickly write and publish something of suboptimal quality just to try to beat their competition. The concepts of “publish or perish,” “least publishable unit,” and “salami slicing” data are as real in journalism as they are in scientific publishing [10, 11].

In a nutshell, numerous incentives contribute to the dysfunctional medical science news reporting system we have today. We’re left with stories and television segments that strip scientific findings of their nuance and distort the public’s understanding of how science actually works.

Never Mind Ethics, Serve Your Readers

Of course, if ethical arguments don’t sway researchers, journal editors, public relations staffers, and reporters, perhaps evidence that readers and viewers may not want short, simplistic news reports will. In one recent study of how readers in Taiwan view news stories about contradictory health findings, a researcher concluded that “overrepresenting findings with dramatized characteristics has negative implications not only for the target news but also for the scientific community in general” like “loss of interest or trust in science” [12]. And the authors of another study, who found that frank discussions of uncertainty in stories about research didn’t undermine public trust in science, said that if their findings held up, they would “suggest that science communication should incorporate scientific uncertainties in media reports whenever it is required by the current state of research” [13].

In other words, there is no inanimate object that, strictly speaking, definitely causes or prevents cancer, despite what we learn from the Daily Mail. (And don’t dismiss the Daily Mail as a caricature that no one reads; it has the largest audience of any English-language newspaper website in the world [14].) Sure, I can blame fellow journalists for rushing to print—or pixel. But just as most biological phenomena aren’t explained by a single factor, this is a nuanced problem. Scientists, publishers, and granting agencies need to take some responsibility, too, for creating incentives for researchers and their employers to exaggerate the significance of preliminary and isolated results. Fixing misleading journalism will, as the saying goes, take a village.

References

  1. Battley P. Kill or Cure? website. http://kill-or-cure.herokuapp.com/. Accessed October 2, 2015.

  2. Beresini E. Beware the single-study story. Outside. October 9, 2014. http://www.outsideonline.com/1926341/beware-single-study-story. Accessed October 2, 2015.

  3. Schwitzer G. Covering Medical Research: A Guide for Reporting on Studies. Columbia, MO: Center for Excellence in Health Care Journalism; 2010:43.

  4. Schwartz LM, Woloshin S, Andrews A, Stukel TA. Influence of medical journal press releases on the quality of associated newspaper coverage: retrospective cohort study. BMJ. 2012;344:d8164.

  5. Kiernan V. Embargoed Science. Chicago, IL: University of Illinois Press; 2006.

  6. Definition of “sole contribution.” N Engl J Med. 1969;281(12):676-677.

  7. Definition of “sole contribution,” 676.

  8. Angell M, Kassirer JP. The Ingelfinger rule revisited. N Engl J Med. 1991;325(19):1371-1373.

  9. Oransky I. Astronomy meeting reveals the real problem with the Ingelfinger Rule. Embargo Watch. October 12, 2011. https://embargowatch.wordpress.com/2011/10/12/astronomy-meeting-reveals-the-real-problem-with-the-ingelfinger-rule/. Accessed October 2, 2015.

  10. Campbell P. Nature embargo policy. Nature. http://www.nature.com/nature/authors/policy/embargo.html. Accessed October 2, 2015.

  11. Owen WJ. In defense of the least publishable unit. Chronicle of Higher Education. February 9, 2004. http://chronicle.com/article/In-Defense-of-the-Least/44761/. Accessed October 2, 2015.

  12. Chang C. Motivated processing: how people perceive news covering novel or contradictory health research findings. Sci Commun. 2015;37(5):629.

  13. Retzbach A, Maier M. Communicating scientific uncertainty: media effects on public engagement with science. Communic Res. 2015;42(3):449.

  14. Sweney M. The Guardian overtakes New York Times in comScore traffic figures. Guardian. October 21, 2014. http://www.theguardian.com/media/2014/oct/21/the-guardian-overtakes-new-york-times-in-comscore-traffic-figures. Accessed October 2, 2015.

Citation

AMA J Ethics. 2015;17(12):1172-1175.

DOI

10.1001/journalofethics.2015.17.12.sect1-1512.

Acknowledgements

I would like to acknowledge Alison McCook’s helpful edits and the assistance of Alison Abritis in preparing this manuscript.

The viewpoints expressed in this article are those of the author(s) and do not necessarily reflect the views and policies of the AMA.