Monday, July 14, 2014

Do we need better health literacy, or better health journalism?

"Scientists have discovered a new miracle drug for breast cancer!"
"A new study suggests that chocolate is good for your heart!"
"Active video games offer promise in the fight against obesity!"

These are just a few examples of news headlines that could be brought to you by your science and health journalism professionals. While I've just made these up on the spot, you've probably read or heard headlines similar to these (or exactly like these) before.

Today's post comes in response to a terrific article by Adriana Barton in the Globe and Mail. In Health literacy 101: The science of how to read the science, Ms. Barton provides some useful information for those of us (which is most of us!) who are consumers of health-related media. In our information-age, tech-savvy world it is easier than ever to access information on the latest scientific studies. Journalists have taken advantage of this new age of reporting and publish secondary articles on scientific reports daily, even hourly. How validly and honestly that scientific information gets interpreted varies tremendously. Enter the article mentioned above on health literacy, which in my view is really science literacy - something we are struggling to uphold, as the number of students eligible for, or obtaining, STEM degrees in Canada is low compared to our peer countries (a topic for another day...).

While I applaud Ms. Barton's efforts to provide some support for citizens consuming news about scientific studies, I think the article was brief to a fault. So, as someone in the sciences rather than journalism, let me offer several other ways you can improve your "health literacy".

1) Be skeptical: Ms. Barton noted that many media articles on scientific studies did not disclose any conflicts of interest. While a perceived conflict of interest does not imply an actual one, it is a crucial piece of information that gives the reader an additional lens through which to critique the article. Likewise, just because a study is funded by industry doesn't necessarily mean the article is garbage. There are plenty of scientific articles with no industry funding that are garbage too!

2) Read carefully. Try to tease apart the numbers when journalists don't provide any. Often this can be a red flag that something is up, or that the writer is bending some findings to suit their purpose. In the article above, Ms. Barton quotes a JAMA Internal Medicine study, and writes: "Of 1,889 media reports on health research published from 2006 to 2013, Schwitzer and a team of 38 physicians and science writers found that half relied on a single source for the article, or failed to disclose the conflict of interest of sources." Now, read that again. Half of the media reports relied on a single source OR failed to disclose conflicts of interest. These are two different faux pas with different meanings, and the quote doesn't say how many of the reports fall into which category. It could mean that 49% used a single source while only 1% failed to disclose, or the two could be evenly split, or most articles could have committed both writing offenses while only a handful committed either one alone. It turns out that the original article doesn't differentiate between the two in its criteria, so this isn't Ms. Barton's fault at all, but it is a good example of how numbers can be buried to make a point one way or another.
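To make that concrete, here's a minimal sketch - the splits are entirely made up, since the study doesn't report them - showing how three very different scenarios all land on the same "half committed at least one offense" statistic:

```python
# Hypothetical decompositions of "half relied on a single source OR
# failed to disclose conflicts". All three invented scenarios produce
# exactly 50%, yet describe very different patterns of reporting.

def at_least_one(single_source, no_disclosure, both):
    """Share of reports with at least one offense (inclusion-exclusion)."""
    return single_source + no_disclosure - both

scenarios = [
    ("mostly single-source", 0.49, 0.02, 0.01),
    ("an even split",        0.27, 0.27, 0.04),
    ("mostly both offenses", 0.48, 0.48, 0.46),
]

for label, single, undisclosed, both in scenarios:
    rate = at_least_one(single, undisclosed, both)
    print(f"{label}: {rate:.0%} committed at least one offense")
```

The point isn't the exact numbers (again, those are invented); it's that a single "or" statistic is compatible with all of them.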

3) Greed is universal. Ms. Barton sheds light on an important aspect of publishing in the medical and scientific world. Publishing is academic currency - simply put, it's an efficient way for researchers to advance their careers. As with any field, greed can lead people to do some strange things. In many ways, a lot of research is held only to its researchers' own ethics... interpret this how you will, but what it means is that there will always be variability in the integrity of published work. Most of the time the published work is probably good, but sometimes it's crap. What Ms. Barton didn't mention is that greed is universal. Journalists also live in a world governed by the publish-or-perish mantra and must publish their work to advance in their careers. Those who have earned their way to full-time reporting positions are often required to publish every week, or even every day! There are countless examples of journalists twisting the words of researchers or misrepresenting the findings of scientific articles - see this PLoS ONE article on ADHD in the media (and the corresponding news article that goes with it). There's even a prize for being the best at it: the Orwellian Prize for Journalistic Misrepresentation. Now, to be fair, it is a two-way street - the quality of a press release is related to the quality of a news report - it's just that on one side we have articles that are peer reviewed for their scientific rigor, and on the other side we have articles that are peer reviewed for their "wow" factor, or their ability to enhance or maintain readership. I'll leave it up to you to decide which side to trust more.

4) Beware the percent! Simply put, look past articles that state findings in relative terms only. Ms. Barton did a great job above by telling us how many media reports there were in the study - kudos! As she alludes to in her article (credit where credit is due, I always say), a 50% reduction in risk means something very different for a risk of 1 out of 100 000 than it does for 1 out of 10. Be wary of articles that report only relative terms, and in my view, especially those that convert absolute terms to relative terms.
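If you want to see why this matters, here's a quick sketch using the two baseline risks from the example above:

```python
# A "50% relative risk reduction" applied to two very different baseline
# risks (1 in 100,000 vs. 1 in 10). The headline is identical; the
# absolute benefit differs by four orders of magnitude.

def absolute_risk_reduction(baseline_risk, relative_reduction):
    """Absolute change in risk implied by a relative reduction."""
    return baseline_risk * relative_reduction

for baseline in (1 / 100_000, 1 / 10):
    arr = absolute_risk_reduction(baseline, 0.50)
    print(f"baseline risk {baseline:g}: absolute reduction {arr:g}, "
          f"i.e. 1 fewer case per {1 / arr:,.0f} people")
```

Same 50% headline in both cases - but in one you'd spare 1 case per 200,000 people, and in the other 1 case per 20.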

Truth be told, I don't believe that all of the media spin put on scientific articles is accidental. I've seen absolute terms in a scientific article translated into relative terms in a media article just to increase the "wow" factor. Errors in reporting may be due to external factors, such as incorrect findings in the first place (I've noticed a lot of John Ioannidis citations in researching this post) or selection bias in the types of articles that are passed along (such as those that elicit an emotional response). But in a field where 'spin' is not only acknowledged but actively used as a tactic, I just don't buy the picture of the accidentally misinformed reporter. This article from the CBC does a lot of externalizing in discussing inaccuracies in the media. Of course, there is blame all around - for everyone - although we can argue about who holds more of it.

To the consumers of health media, I wish you good luck in sifting through the good and the bad. Science isn't easy to understand, and science writing is hard enough to do, let alone read. The media offer an efficient way to translate this information for you, but it is increasingly important to be critical of what you read in the age of post-print media.

My advice to those writing in health media is to describe the study in one easy-to-read sentence up front, containing all of the details needed to make sense of the finding. Use spin carefully - it is a powerful tool, and with great power comes great responsibility. Use absolute terms and provide numbers where possible. Do your homework and understand what those numbers mean - a common trouble spot is media articles that present odds ratios as if they were straightforward likelihoods. And please, PLEASE, provide the link to the original article!
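On that last point about odds ratios: they are not probabilities, and they only approximate risk ratios when the outcome is rare. A minimal sketch, with made-up numbers:

```python
# Odds ratio vs. risk ratio. With a rare outcome the two nearly agree;
# with a common outcome, reporting the odds ratio as "X times as likely"
# overstates the actual change in risk. All numbers are invented.

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def risk_ratio(p_exposed, p_control):
    return p_exposed / p_control

def odds_ratio(p_exposed, p_control):
    return odds(p_exposed) / odds(p_control)

# Rare outcome: risk ratio 2.0, odds ratio ~2.0 - nearly interchangeable.
print(risk_ratio(0.002, 0.001), odds_ratio(0.002, 0.001))

# Common outcome: risk ratio 1.5, but odds ratio 2.25.
print(risk_ratio(0.60, 0.40), odds_ratio(0.60, 0.40))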