A Study Showed...
Swift
Written by Dr. Steven Novella   

Scientific skepticism is, perhaps above all else, a process for looking at claims and evidence. Skeptics value that process above any particular conclusion that it may come to. Conclusions are tentative and need to be updated and almost perpetually refined. It is therefore folly to invest one's ego in any particular static conclusion, although that is the common "default mode" of human behavior. By investing in the process, however, we are free to alter our tentative conclusions as new evidence or arguments come in.

Saying that we value the process, however, is only the beginning. Understanding how to evaluate complex bodies of evidence is a lifelong endeavor in and of itself. That is why I don't understand those who criticize skeptical writing and lectures as "preaching to the choir," as if once someone self-identifies as a skeptic the battle is over. Rather, we need to continuously educate ourselves and each other about the findings of science as well as the many complex ways in which to think about claims and evidence.

With that in mind, I would like to address a habit of argument that is unfortunately common, even among skeptics - referencing a single study as support for a position, or as if the conclusion of the study can then be taken as an established premise.

Single studies are, of course, important to the process of science. They are the units of which the scientific literature - our body of scientific knowledge - is comprised. It is therefore very important to understand how to dissect a scientific study, or at least to recognize the major potential weaknesses of studies and have some idea if a study is reliable or nonsense. We can also rely partly on experts in each specialized field to examine and critique studies for us, realizing that a definitive analysis may depend upon rarefied technical knowledge.

Understanding that individual studies fall on a spectrum from rigorous to utter crap is the first step. Many people appear not to understand this, and act as if, because "a study shows" X, we can conclude that X is true. The media generally act this way when reporting science news, greatly reinforcing this fallacy.

Just as important, however, is the recognition that no single study can reliably confirm any phenomenon. The more complex the phenomenon, then the more this is true. There are many reasons for this.

The outcome of a single study may be the result of experimenter bias, which can range from subtle to borderline fraud. Or the results may stem from a systematic and unrecognized error in the observations or experimental methods. The results may also be quirky - positive just due to chance.
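To make concrete the point that a study can come out positive purely by chance, here is a minimal simulation (my own illustrative sketch, not from the article): run many experiments in which there is genuinely no effect, declaring "significance" at the conventional two-sided p &lt; 0.05 threshold.

```python
import random
import statistics

random.seed(42)

def null_experiment(n=50):
    """Compare two samples drawn from the SAME distribution (no real effect).
    Declare 'significance' when |z| > 1.96, i.e. two-sided p < 0.05."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # z-statistic for the difference of means, with known unit variance.
    z = (statistics.mean(a) - statistics.mean(b)) / (2 / n) ** 0.5
    return abs(z) > 1.96

trials = 5_000
rate = sum(null_experiment() for _ in range(trials)) / trials
print(f"False-positive rate with no real effect: {rate:.1%}")
```

The fraction printed hovers near 5% - not because anything real is happening, but because that is exactly the rate the significance threshold permits.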

But even when the results of a study are accurate, the interpretation of the study may be complex. The study, in essence, represents a single line of evidence. In order to reach a reliable conclusion about how the world operates it is better to have multiple independent lines of evidence converging on one conclusion. That, of course, requires multiple studies.

Studies gain power when they are independently replicated. This helps average out or eliminate the effects of random chance or individual bias. Exact replications endeavor to copy the original study precisely. This is important because it helps to eliminate certain effects of experimenter bias. Researchers may subconsciously mine data or cherry pick in order to produce a series of data that appears positive (exploiting so-called researcher degrees of freedom). An exact replication with a fresh data set will eliminate the effects of such data mining or cherry picking.
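The value of an exact replication can be seen with simple arithmetic: if a fluke positive occurs about 5% of the time, the chance that an independent replication on fresh data produces the same fluke is about 0.05 × 0.05 = 0.25%. A quick simulation (an illustrative sketch assuming a 5% false-positive rate, not data from any real study) bears this out:

```python
import random

random.seed(0)
ALPHA = 0.05  # assumed false-positive rate of a single null study

def chance_positive():
    """One study of a nonexistent effect: 'positive' with probability ALPHA."""
    return random.random() < ALPHA

trials = 100_000
single = sum(chance_positive() for _ in range(trials))
# 'and' short-circuits: the replication only runs if the first study was positive.
replicated = sum(chance_positive() and chance_positive() for _ in range(trials))

print(f"Single null study positive by chance:  {single / trials:.2%}")
print(f"Positive AND independently replicated: {replicated / trials:.3%}")
```

Requiring one successful replication cuts the rate of surviving flukes by a factor of twenty in this toy model; each further replication cuts it again.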

There are also replications that are not exact but look at the same question with some differences in method. These types of replications are also important, because they let us see which variables have an effect on the outcome.

Finally, there are studies which are not replications but which address the same or a closely related phenomenon. There are often multiple different ways to ask the same question or to look at the data, each with its own strengths and weaknesses. If the different kinds of study all appear to reflect the same phenomenon, then we begin to grow confident that the phenomenon is real and works the way we think it does.

Let's take, for example, the question of whether or not cell phones increase the risk for certain types of brain cancer. We can address this question in many ways. We can take groups of people who use and do not use cell phones and then follow them over time to see how many of each group develop brain cancer. We can look at people who have and do not have brain cancer and then ask them about their past cell phone use. We can see if the typical side of use (left or right) correlates with the side of cancer, and we can see if duration and intensity of cell phone use correlates with cancer risk. We can also do basic science studies to see what the biological effects of cell phone radiation are on living tissue.  

Each type of evidence has its strengths and weaknesses, and no individual study is likely to give us a definitive answer. We need to see what the overall pattern of results is across many different types of studies.

The more extraordinary a claim, the greater the need for multiple studies before that claim should be taken seriously. There are many "one-off" studies that purport to show that water has memory, that prayer can heal, or that acupuncture points are real. This allows proponents of these notions to cherry pick individual studies as if they are sufficient to conclude the phenomenon is real. In each case, however, if you look at the totality of research, you see a pattern of results consistent with none of these phenomena being real.

In fact, some researchers have concluded from looking at published studies that most published studies are actually wrong in their conclusions. This is partly because most published studies are exploratory or preliminary, rather than rigorous or confirmatory. Preliminary studies tend to be biased toward false positives, and there is also a publication bias toward positive studies. Eventually, however, the research sorts itself out and a consensus of more rigorous studies emerges. Since most new ideas or hypotheses are not likely to pan out, it makes sense that positive preliminary studies will often not be confirmed by later, better studies.
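The conclusion that most positive findings are wrong follows from simple arithmetic about prior probability, statistical power, and the false-positive rate (the line of reasoning popularized by Ioannidis). The numbers below are illustrative assumptions of my own, not figures from the article:

```python
# Back-of-the-envelope: how many "positive" preliminary findings are real?
# All numbers here are illustrative assumptions, not data from the article.
def positive_predictive_value(prior, power=0.80, alpha=0.05):
    """Fraction of positive results that reflect a true effect."""
    true_pos = prior * power          # true hypotheses that test positive
    false_pos = (1 - prior) * alpha   # false hypotheses that test positive
    return true_pos / (true_pos + false_pos)

# If 1 in 10 tested hypotheses is actually true:
print(f"PPV with 10% prior: {positive_predictive_value(0.10):.0%}")  # 64%

# In a speculative field where only 1 in 100 is true:
print(f"PPV with  1% prior: {positive_predictive_value(0.01):.0%}")  # 14%
```

Publication bias then compounds the problem: if negative results rarely get published, the visible literature looks even more positive than these numbers suggest.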

The lesson in all of this is that when asking what the scientific research says about any particular question, we should not look to individual studies for the answer, but to the overall pattern of results in the literature. Any individual study must be put into the context of this overall research.

This, of course, is not easy. Next week I will further explore how to analyze the research literature as a whole.  

Steven Novella, M.D. is the JREF's Senior Fellow and Director of the JREF’s Science-Based Medicine project.

Dr. Novella is an academic clinical neurologist at Yale University School of Medicine. He is the president and co-founder of the New England Skeptical Society and the host and producer of the popular weekly science show, The Skeptics’ Guide to the Universe. He also authors the NeuroLogica Blog.

Comments (6)
written by CNS100, April 14, 2012
Excellent post. While you're at it, I'd like to see some attention given to how one can judge meta-studies. This seems particularly important when we're looking at fields that aren't our own.
written by vanadamme, April 14, 2012
referencing a single study as support for a position, or as if the conclusion of the study can then be taken as an established premise


That's the entire premise of most cracked.com articles.
written by QTone, April 15, 2012
This is a good article but I have one small point to add. It is often (correctly) said that extraordinary claims require extraordinary proof. But this does not mean that "obvious" claims require no proof since intuition is not always right. Does intuition prepare us for the traffic jam on a motorway when there is nothing at the front of the jam? Queuing theory predicts this effect but people have a hard time understanding this until they have experienced the phenomenon a few times. I have also heard " ... X million people cannot be wrong". A few tens of millions voted for Hitler. So the message has to be one of not taking things at face value and always being prepared to question, even if it looks obvious.
written by Baloney, April 16, 2012
Great article! This should be required reading for journalists.
global warming?
written by laursaurus, April 18, 2012
Why are some areas of science exempt from these rigorous standards?

Just by daring to ask this question, I will be vigorously rebuked with down votes.

written by philvich, April 23, 2012
laursaurus,
certain areas of science earn exemption from rigorous scrutiny through bribery. if most "scientists" are on the payroll of politicians that want to push a point of view, then exemption is earned. it is akin to science by voting.

i always enjoy sitting in meetings where technical issues are being debated and i mockingly will put it to a vote. i am always amused to see who shoots their hand up in the air, endorsing the science-by-voting method, rather than science by facts and experimental results. i like to put my faith in people who don't put their hand up but rather say that even if the vote is 100-1, that does not trump facts and experimental results.
