Is Science Broken?

June 16, 2015

By Sarah Boon, Ph.D.

Given the headlines lately, you’d be forgiven for thinking that the public doesn’t trust scientists, and that science ranks no higher than opinion in understanding the world. 

Journal article retractions are receiving widespread coverage: for example, a recent paper in Science claiming that people’s views on same-sex marriage could be changed after just 20 minutes of talking to a gay person (the same researcher apparently also falsified results in a study about media and ideology). In 2014, a high-profile Japanese project published in Nature was retracted when its claim to have created stem cells couldn’t be replicated, and in 2010 The Lancet retracted a 1998 UK paper claiming that MMR vaccines cause autism, citing research fraud and unethical methods. Concurrently, conflicting science advice is making people wonder about things as basic as their daily diet: should I drink wine every day? What about coffee? Is chocolate okay – or is it only dark chocolate?

A recent poll by the Pew Research Center suggests public faith in science is declining, with a drop in the number of people who think science has made life easier – and in the number who think it has had a positive effect on the quality of health care, food, and the environment. This has led some to suggest that science is broken.

So what’s causing these problems? There appear to be two sides to the story: public perceptions of science and how it works, and scientists behaving badly.

One of the strongest arguments explaining public antipathy towards science is that much of the public isn’t entirely clear on how science works. 

To begin with, science is presented as inherently objective and value-free, when in fact it is highly value-laden. “Science is a human system,” writes Harvard professor Sheila Jasanoff in her book States of Knowledge (quoted in this Vox article). Michael Nelson, a Michigan State University professor, puts it more concretely when he says, “scientists choose their topics of study (and choose against other topics of study), frame questions in a certain manner (and not in some other manner), [and] accept funding from certain sources (and not from other sources – though there’s certainly more accepting than not accepting).” This means that scientists are subject to the same personal flaws as the rest of us, such as bias or rationalizing, and it also means that – yes, it’s true! – scientists can make mistakes. While this doesn’t excuse outright research fraud, it does mean that well-meaning scientists will sometimes get things wrong.

Secondly, as Leonard Mlodinow argues, the complexity and difficulty of science is not made entirely clear when it’s described to the public. Popular culture often represents scientific findings as flashes of inspiration by individual scientists, when in reality they’re more likely the result of endless failed experiments that build on the foundational work of previous scientists. Thus the public may not realize that the studies they see reported in their local paper aren’t just “off the cuff”, as it were, but represent years of work and complex thinking.

Finally, as David Deutsch notes in an interview with Nautilus, the public is often unaware that science is a dynamic rather than static endeavour: it doesn’t “prove” anything, as ideas and theories are constantly evolving based on combinations of new and existing data. That study that concludes you should drink more red wine? The results could change when another researcher comes along and takes the research a bit further, or in a different direction. Scientific results become facts only as a critical mass of studies reach the same conclusion (an example being the efficacy of vaccines). It’s difficult to deal with this level of uncertainty, however, particularly when media hype suggests that even results from a single paper are highly certain.

Retractions and contradictory results between studies upend the public’s beliefs that science is value-free, relatively easy, and absolutely certain. As a result, people begin to feel that science is not really valid or reliable.

However, as University of Alberta professor Tim Caulfield writes in his latest book, public trust in science has also been diminished by scientists themselves behaving badly. He notes that the relative importance of scientific breakthroughs is often exaggerated, and that research results often contain errors due to distorted data or inadequate sample sizes. There’s also the issue of who funds scientific research: the public is highly sensitive to conflict-of-interest issues, such as universities partnering with industry to sponsor research programs.

In their recent Vox article, Julia Belluz and Steven Hoffman quote The Lancet editor Richard Horton who says that "Much of the scientific literature, perhaps half, may simply be untrue… Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness."

It appears that one of the big drivers of academic misconduct is the pressure on researchers to publish – and publish quickly – in high impact journals. Adam Marcus and Ivan Oransky write, “Scientists view high-profile journals as the pinnacle of success — and they’ll cut corners, or worse, for a shot at glory.” High impact publications mean more citations, which result in a higher research profile and likely greater funding success. And in a climate where research funding is becoming increasingly tight, the cycle is strongly self-perpetuating. But this pressure can make research careless and sloppy.

Given these problems, both in terms of how science is perceived, and in how science is done, what can we do to improve the situation? We’ll explore some options in the next post.

More reading on the topic:
I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here's How. By John Bohannon in io9. May 27, 2015.
Academics Seek a Big Splash. By Noam Scheiber in The New York Times. June 1, 2015.

Thanks to Lisa Willemse for providing feedback on a draft of this post.

About Sarah

Sarah Boon
has straddled the worlds of freelance writing/editing and academic science for the past 15 years. She blogs at Watershed Moments about nature and nature writing, science communication, and women in science. She is a member of the Canadian Science Writers’ Association and the Editors’ Association of Canada, and was elected a Fellow of the Royal Canadian Geographical Society in 2013. Sarah is also the Editorial Manager at Science Borealis. Find Sarah on Twitter: @SnowHydro

Filed Under: Scholarly Publishing, Science Communication, Sarah Boon
