Is Science Self-Correcting?
Swift
Written by Dr. Steve Novella   

In theory, yes. In practice, we can do better.

An article published in The Economist reviews what skeptics have been talking about for years. There is a lot of crappy research out there that is unreliable. This means that just because you can find some studies that appear to support your position, it does not mean your position is correct. You cannot know the answer to a question by cherry-picking the studies you want. You have to do a critical analysis of all the research.

The full article is worth a read, and regular readers of skeptical blogs will probably recognize many of the points and references, but will also likely learn some new details. Here is my own summary of the major areas of concern regarding the quality and reliability of published scientific research.

Most studies are of poor quality – Doing a large, rigorous scientific study is difficult and requires a great deal of resources, including time and money. Researchers therefore conduct many preliminary or exploratory studies, which are small and have only basic controls. Such studies are unreliable, and are useful only for determining whether further research is warranted. Preliminary studies, when positive, tend to be false positives (for reasons I will outline separately), and are much more reliable when negative.

Publish or perish – Researchers and institutions are under intense pressure to publish. This encourages a high volume of low-quality studies, and the practice of publishing the “least publishable unit” of ongoing research to maximize the number of papers derived from a single project.

Researcher bias – Researchers are people who want their ideas to be correct, and studies may be conducted by industry or by those with a vested interest in the outcome. Even within accepted methodology, researchers have many degrees of freedom, or wiggle room. This freedom can be exploited (consciously or subconsciously) to engineer a positive result, even out of completely negative data. Simmons et al. demonstrated that a p-value below 0.05 can be generated from null data 60% of the time just by exploiting researcher degrees of freedom. In surveys, a third of researchers admitted to engaging in questionable methods that exploit degrees of freedom to create positive results.
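The 60% figure is easier to believe after a quick simulation. Here is a minimal sketch (my own illustration, not Simmons et al.'s actual procedure): under a true null effect, measuring two outcomes and reporting whichever reaches p < 0.05 roughly doubles the nominal 5% false-positive rate. Stacking on more outcomes, optional stopping, and flexible covariates is what pushes the rate toward 60%.

```python
# Monte Carlo sketch of "researcher degrees of freedom":
# even when the null hypothesis is true, testing several outcome
# measures and reporting whichever one is significant inflates the
# false-positive rate well above the nominal 5%.
import math
import random

random.seed(0)

def z_test_p(a, b):
    """Two-sided p-value for a difference in means.
    Uses the normal approximation (reasonable for n >= 50)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    z = (ma - mb) / math.sqrt(va / n + vb / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_trial(n=50, n_outcomes=2):
    """One null experiment with several outcome measures;
    'succeed' if ANY outcome reaches p < 0.05."""
    for _ in range(n_outcomes):
        control = [random.gauss(0, 1) for _ in range(n)]
        treated = [random.gauss(0, 1) for _ in range(n)]  # no real effect
        if z_test_p(control, treated) < 0.05:
            return True
    return False

trials = 4000
rate = sum(run_trial() for _ in range(trials)) / trials
print(f"false-positive rate with 2 outcomes: {rate:.3f}")  # ~0.10, not 0.05
```

For simplicity the two outcomes here are independent; real outcome measures are usually correlated, which dampens the inflation somewhat, but the qualitative point survives.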

Publication bias – Journals, whether in print or online, have their own motivations for success. Subscription journals want to maximize their impact factor, and that means publishing exciting studies with new and surprising findings. Such articles, of course, are the very ones that are most likely to be false positives. Open-access journals, on the other hand, that charge researchers a fee are motivated to publish lots of studies, regardless of quality. A recent Science magazine article pointed out the pathetic quality control in this segment of the industry. In addition, researchers themselves are more likely to submit a paper that is positive rather than negative.

Lack of replication – Independent replication is the key to the self-correcting nature of science. The problem is that scientists do not perform replications often enough, and journals do not publish them often enough. There is the now-famous incident of the Journal of Personality and Social Psychology publishing terrible research by Daryl Bem claiming that subjects could “feel the future.” Richard Wiseman et al. did an exact replication of one of Bem’s studies, with negative results, and submitted it to the same journal. Their response? We don’t publish exact replications. Why not? Because they are not sexy enough to boost the journal’s impact factor.

A 2012 review of the last century of psychology research found that only about 1% of published studies were replications. This review, as far as I can tell, has never been replicated.

Mistakes – Researchers sometimes simply make errors, and reviewers sometimes do not pick them up. A 2011 study found that 50% of neuroscience papers reviewed committed a common statistical error – an error that would often turn negative results into positive results.
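One widespread error of this kind is concluding that two groups differ because an effect is statistically significant in one group but not in the other; the correct procedure is to test the difference directly. A minimal numerical sketch, with made-up summary statistics (the effect sizes and standard errors below are illustrative, not from any real study):

```python
# "Significant in group A, not significant in group B" does NOT imply
# that A and B differ. The difference itself must be tested.
import math

def two_sided_p(z):
    """Two-sided p-value for a z statistic under the normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Made-up effect estimates and standard errors for two groups:
effect_a, se_a = 2.0, 1.0   # group A: z = 2.0
effect_b, se_b = 1.0, 1.0   # group B: z = 1.0

p_a = two_sided_p(effect_a / se_a)   # ~0.046 -> "significant"
p_b = two_sided_p(effect_b / se_b)   # ~0.317 -> "not significant"

# The correct comparison: test the difference between the two effects.
se_diff = math.sqrt(se_a**2 + se_b**2)
p_diff = two_sided_p((effect_a - effect_b) / se_diff)  # ~0.48 -> no evidence A != B

print(f"p(A)={p_a:.3f}  p(B)={p_b:.3f}  p(A-B)={p_diff:.3f}")
```

Reporting only the first two p-values turns a null comparison into an apparent positive finding, which is exactly how this mistake converts negative results into positive ones.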

Fraud – While fraud garners the most headlines, this is actually probably a small contributor to the overall problem of false positives in published research. Still, it occurs, and further contaminates the literature.

The Good

It’s not all bad, and I do not want to paint an overly bleak picture. It is possible to look at all of the problems with scientific research and conclude that it is all hopelessly flawed. That would be nihilistic at best, and denialism at worst.

Rather, the point of highlighting all the potential problems with research is not to argue that it’s hopeless, just that it’s difficult. We can still reach reliable conclusions in science by carefully reviewing all the research, weeding out the bad, and relying mostly on the most rigorous studies that have been adequately replicated.

In other words – all of the above informs us about where to set the threshold for acceptance as scientifically proven. Skeptics tend to have a much better idea of where this threshold is than believers, who tend to use a ridiculously low threshold, at least for their preferred belief.

At its best, scientific research functions well. Researchers will carefully replicate a finding before committing their lab to furthering the research. No one wants to waste their research resources on someone else’s false positive. Early conflicting research will be vigorously debated, until a consensus protocol is agreed upon, and then all sides will listen to the results. As a result, for many important questions we have multiple replicated rigorous studies with clear results.

Also, it’s important to point out that we know about all of the above problems with scientific research because scientists are asking the hard meta-questions about the process of science itself. So not only is science self-correcting, the mechanisms of self-correction can themselves self-correct.

Solutions

All of the problems outlined above have solutions. These solutions are not challenging or expensive, but they may be slow to be adopted because they require a culture change within science. Here are some suggestions:

Better education of researchers – Most mistakes in science stem from naïveté rather than malice, and could be prevented by better education. More formal and thorough training in research methodology, and in the mistakes to avoid, would help. In short, all scientists need to become better skeptics, and this is an important role for the skeptical community.

Quality control at journals – Journals need to do a better job of systematically rooting out errors and poor-quality research. There are, of course, world-class science journals that do an excellent job of this overall, even though some bad studies slip through the cracks. The problem is that most journals are mediocre, and many are terrible. The journals themselves need to be better evaluated, and only those that meet a sufficiently high bar of quality should contribute to the official peer-reviewed literature. We have to close the back door that bad journals open for bad research.

Also, to enhance peer-review and editorial review, researchers should be required to submit all their raw data when they submit a paper for review.

Publish replications and negative studies – Journals need to set aside room for publishing negative studies and exact replications. It is selfish, in a way, for high-impact journals to take the cream of new exciting research off the top, and not do their fair share of publishing replications and negative studies. This creates a perverse incentive, where perhaps the most valuable research is neglected. For online journals, space is not an issue and therefore not a valid excuse. For print journals, sections should be created dedicated to such studies, and they can also publish online supplements with all the replications and negative studies they want, as long as they are high quality.

Register all studies – You cannot hide negative studies if you have to register them beforehand. This is already being enforced for human trials in some countries, but other areas of research may benefit from trial registration also.

Full Disclosure – This is already largely the case, but I will include it for completeness – researchers need to fully disclose potential conflicts of interest when submitting or presenting a paper.

The media – Science journalists and educators should educate the public about the messy nature of science, and what it takes to get to a reliable conclusion. Publishing stories about preliminary research with sensational headlines hurts the public perception of science.

Advocating for improvements in the institutions of science, and public education of the methods of science, is one of the core missions for the skeptical community. The data is there; we know the problems and the solutions. We just need to apply pressure to improve what is perhaps the most important human institution – science.


Steven Novella, M.D. is the JREF's Senior Fellow and Director of the JREF’s Science-Based Medicine project.
