Archive for January, 2014

All the science that’s fit to print…


Between August 10 and November 5, 1978, a multi-union strike shut down the three major New York City newspapers, among them the New York Times. This blip in publishing history serves as a telling data point for the role the media plays in science literacy and science communication.

For those few months, no editions of The New York Times were printed, aside from a parody rag, “Not the New York Times,” a prank alternative handed out in big cities around the country and filled with news stories imagined by the liberal comedy elite of the time.

Internally, the New York Times continued to prepare an “edition of record” that was never distributed, recording all the news stories that would have been fit to print during the strike. The newspaper kept a list of the articles it intended to cover. And when you look at that list with the benefit of hindsight, what you see is how print media affects the citation of scientific articles: an effect we don’t often hear about, and one we tend to assume works in the other direction.

New England Journal of Medicine articles covered by the New York Times received 72.8% more citations in the year after publication than articles that were not covered. No such effect appeared for the articles the New York Times had intended to cover but couldn’t because of the strike.

It seems that media coverage encouraged and helped articles garner future citations, something that can’t be fully explained by the New York Times simply choosing to cover more influential articles.

Today, with science communication heavily dependent on the press release, we have to ask how much science reporting ultimately skews the playing field, cementing ‘not-so-good science’ not only in the eyes of the public but also in impact factors and citation counts.

My use of the term ‘not-so-good science’ is deliberate hyperbole. But recent research has shown that newspapers are more likely to cover observational studies and less likely to cover randomized trials. And when the media does cover observational studies, it selects articles of inferior quality.

And in case you didn’t know:

“The randomised controlled trial (RCT) is one of the greatest inventions of modern science — a tool that allows you, more reliably than any other, to compare two or more interventions and determine which is more effective for a given purpose.”

The research covers 75 clinically-oriented journal articles that received coverage in the top five newspapers (by circulation) and compares them against 75 clinically-oriented journal articles that appeared in the top five medical journals (by impact factor) over a similar timespan.

The investigations that received newspaper coverage were less likely to be randomized controlled trials and more likely to be observational studies. The observational studies covered by the media frequently used smaller sample sizes and were more likely to be cross-sectional.

The crux is that weak reporting, or rather reporting on weaker science, comes at the expense of the complex and discards nuance in favour of simplicity: the age-old debate over “dumbing down.” Science is hard in every sense of the word. The dazzling complexity of breaking everything down to its basic components, then putting it all back together to look at the grand scheme of things, cannot really be fully communicated to a lay audience.

Perhaps the better question is: how much of the science that reaches print and online media is an accurate reflection of science in its entirety?



