Are Surveys Scientific?
Swift
Written by Kyle Hill   
It depends. Many people have a narrow view of what a survey can be. We can at least get this out in the open: most online polls are worthless (save for the ones run by universities and legitimate polling organizations like Gallup). There is no control over sample size, the questions are poorly worded, the motivation for answering is impossible to know, there is no way to judge how representative the respondents are of the general population, and so on. Though it may be fun to jump onto the latest news-site poll asking, for example, whether alternative medicine should be taught in universities, the results are meaningless.

What I think many people fail to realize is that surveys can indeed be very scientific instruments. To dismiss surveys because they happen to fit into the same cognitive category as online polls is to throw away much of social science. As in any science, conducting a valid survey is tremendously complex, and the results can be hard to interpret. But, when done correctly, surveys can tell us a lot about how we think and behave.

Making a Survey Scientific  

Before I got into social science research, I had just finished a degree in engineering, so the methods of social science struck me as perhaps a bit “soft” (as contrasted with “hard” sciences like physics). But as I began to learn how to create measures that would adequately gauge a person’s beliefs and cognitions, I realized that I had been operating under a misconception.

First of all, there are so many things that can go wrong with a survey that it has to be meticulously crafted to be of any use. The researcher is up against our pattern-seeking and amazingly biased brains, making accurate measurement trickier than, say, measuring the acceleration of a free-falling object. For example, when asked sensitive questions, people tend to answer in the way they think the researcher wants them to. When asked behavioral questions, people will give the response they think represents the average, not necessarily their own behavior. Even the mere act of being surveyed can bias the results.

But it gets worse. Not only do people bias their own responses; the physical (or digital) survey itself can skew the data. The order of the questions, the wording of the questions, the font, type, and bolding, the visual arrangement of the pages or screens, the amount of visual information on a page, the appearance of pictures, the appearance of sponsor logos, the numbering of the answers for each question, the type of check-boxes used for responses, the total number of questions: all of it affects how the survey is completed. And beyond this, even more troubles arise. Was there an incentive to complete the survey? How many times did the researcher ask you to complete it? Were there trick questions built in so that the researcher could check whether a respondent was just circling “B” all the way down? To conduct a proper survey, all of this has to be accounted for and controlled.
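
To give a flavor of that last check, here is a minimal sketch of how a researcher might screen collected responses for “straight-lining.” The data, column names, and coding are invented for the example, not taken from any real survey:

```python
import pandas as pd

# Hypothetical response matrix: one row per respondent, one column per
# Likert-style item coded 1-5. All names and values are made up.
responses = pd.DataFrame({
    "q1": [2, 3, 2, 4],
    "q2": [2, 3, 1, 5],
    "q3": [2, 3, 2, 4],
    "q4": [2, 3, 3, 2],
})

# A "straight-liner" gives the same answer to every item -- the
# respondent circling "B" all the way down.
straight_liners = responses.nunique(axis=1) == 1
print(responses[straight_liners])  # flag these rows before analysis
```

In practice a reverse-worded item serves the same purpose: a respondent who genuinely agrees with one statement should disagree with its opposite, so identical answers to both are a red flag.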

Even the analysis goes far beyond what I think most people realize. Surveys and their questions must be checked for both internal validity (do the questions measure what they are supposed to? Have other variables that could skew the results been controlled for?) and external validity (to what extent are the survey results generalizable?). Statistical tests are run to see if questions that are supposed to measure the same aspect in fact correlate with each other. All correlations are checked for statistical significance, and sample sizes are chosen so that these significant relationships will actually mean something.  
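
As an illustration of those internal-consistency checks, here is a minimal sketch of one common statistic, Cronbach’s alpha, alongside a significance test on an inter-item correlation. The data are simulated and the helper function is my own illustration, not from any particular survey package:

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency estimate for a block of survey items.

    `items` is an (n_respondents, n_items) array; a higher alpha means
    the items move together, as they should if they tap one construct.
    """
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    k = items.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated data: four items intended to measure the same attitude,
# each equal to a shared "trait" plus independent noise.
rng = np.random.default_rng(0)
trait = rng.normal(size=200)
items = trait[:, None] + rng.normal(scale=0.8, size=(200, 4))

print(f"alpha = {cronbach_alpha(items):.2f}")

# Do two items that should measure the same aspect in fact correlate,
# and is the correlation statistically significant?
r, p = stats.pearsonr(items[:, 0], items[:, 1])
print(f"r = {r:.2f}, p = {p:.3g}")
```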

I can see where some people get tripped up. It is one thing to ask on a survey how old someone is or what their biological sex is. These questions are not really part of the debate. The questions that ask about more cognitive topics, like “How uncertain do you think other people are about climate change?”, are the hard ones to wrangle into the scientific method. And I don’t think the problem with scientific surveys lies in the surveys themselves (when done correctly), but in the incredible complexity of the human brain. We are still working out ways to get around the biases in our own heads, so of course a survey asking for a person’s thoughts on an issue will encounter similar obstacles.

When done rigorously and scientifically, surveys can be very beneficial. Until our technology advances to the point where fMRIs can “read” our thoughts (maybe), they are the best measurements of attitude and opinion that we have. And surveying has already given us so much. Advertisers and marketing agencies have become ubiquitous thanks to psychological and social-scientific research that was born out of surveys. It is hard to argue that research on brand loyalty (based on psychological concepts and measured by surveys) elucidates nothing while you pay through the nose for that new iPhone that you surely don’t need. To say that surveys cannot really measure human cognition would be to deny many of the advances the skeptical movement takes for granted. Where do you think our knowledge of confirmation bias or the backfire effect comes from?

So let’s not throw the scientific baby out with the online-poll bath water. As an engineer now in the social sciences, I can assure you that survey construction is not a fly-by-the-seat-of-your-pants kind of thing; it is a people-dedicating-their-entire-lives-to-just-testing-the-reliability-of-certain-questions kind of thing. Yes, some surveys are terrible, and yes, many ask questions so loaded that the answers become worthless. But when all of the precautions are taken, the variables controlled for, and the scientific method followed, we get a wealth of information about the human mind.

 

Kyle Hill is the JREF research fellow specializing in communication research and human information processing. He writes daily at the Science-Based Life blog, contributes to Scientific American and Nature Education, and you can follow him on Twitter here.

Comments (2)
I'm to blame . . .
written by garman, October 04, 2012
Years ago I was taking a preference survey regarding auto design. After clicking boxes at a computer for an hour and a half, I got tired. I know it's wrong, but I started liking every design. Ergo, the Pontiac Aztek. My fault. Sorry.
The whole "surveys are biased because they oversample group X"...
written by Skeptic, October 05, 2012
...misses the point.

The idea behind an election survey is to predict the election results by modeling who will actually be at the polls. One cannot simply sample people at random, or else one will sample minors, non-citizens, etc. So one MUST over-sample some groups and under-sample others.

The question is which groups are over-sampled, and in what way. It seems that most polls over-sample based on the latest available relevant data: the 2008 presidential election. Yes, there were more recent elections -- i.e., for Congress in 2010 -- but voting patterns for Congress differ from those of presidential elections.

Can these polls be skewed and not represent reality? OF COURSE they can, if there is a change from 2008 to 2012. Polls can be wrong. But serious pollsters cannot invent samples that they *guess* represent the *unknown* will of the electorate right now. They need to poll based on the factual data they have.
