If you ask us a question, many of us will give you an answer. Perhaps it is arrogance, perhaps it is just trying to be helpful. The answers, of course, are a mixed bag, especially when we don't give people the option of begging off the question. The asker then must decide what to do with these answers, which at the aggregate level reflect a combination of real opinion, top-of-mind thoughts, and uninformed guesses. Should we only count the informed opinions? And, perhaps from a methodological point of view, should we encourage more people to admit that they simply don't have an opinion?

To a researcher, a “don’t know” response is as interesting as any other opinion, but to a market researcher, don’t knows are messy, difficult to deal with, and hard to explain to clients.

A non-answer to a brand rating question may be easy to explain: these people don’t know enough about you to like or dislike you. A non-answer to most other questions is harder, because clients want to understand the implications of the answers for the whole population, not just for the people who had an opinion. The people who have no opinion introduce uncertainty.

Throughout the telephone era the industry was able to, sort of, pretend that “not knowing” was not much of a problem. The live interviewer, and the social bias that person introduced, helped get answers. In addition, we could ask interviewers to accept a “don’t know” or “refuse to answer” only if it was volunteered (we didn’t tell respondents these were legitimate responses). We could then get almost everyone to answer the questions.

With online surveys, we don’t have the option of hiding the “don’t know” response and must decide if and when to offer it. This means we should now have an explicit theory, or at least an understanding, of what nonresponse means in the variety of survey contexts in which we add this option.

There are numerous examples of the problem. In 2008, I was involved in a survey for Justice in which we asked parallel questions online and by telephone. When we looked back at the data, almost half of the online questions had don’t-know rates of 5 percentage points or more, but almost none of the telephone results reached that level.
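As a rough illustration (not the original analysis), here is a minimal sketch of how one might tabulate don’t-know rates per question in each mode and flag questions where the online rate exceeds the telephone rate by 5 percentage points or more. The column names, the mode labels, and the “don’t know” response text are assumptions about a hypothetical long-format data file.

    # Minimal sketch: compare don't-know rates by mode (hypothetical data layout).
    import pandas as pd

    def dont_know_gap(responses: pd.DataFrame, threshold: float = 0.05) -> pd.DataFrame:
        """Flag questions where the online don't-know rate exceeds the telephone
        rate by at least `threshold` (e.g. 0.05 = 5 percentage points).

        Assumes one row per answer with columns 'question', 'mode'
        ('online' or 'phone'), and 'response'.
        """
        is_dk = responses["response"].eq("don't know")
        rates = (
            responses.assign(dk=is_dk)
            .groupby(["question", "mode"])["dk"]
            .mean()              # share of don't-know answers per question and mode
            .unstack("mode")     # one column per mode: 'online', 'phone'
        )
        rates["gap"] = rates["online"] - rates["phone"]
        return rates[rates["gap"] >= threshold].sort_values("gap", ascending=False)

A call like dont_know_gap(df) would then list, question by question, where the two modes diverge in the way described above.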

Differences between telephone and online are always hard to untangle, but there is clearly a need to better understand how much people do or do not know when they are answering our questions. If they don’t know, we need better or different questions to help them tell us so.

How often do you give survey responses that are not well thought out? As a researcher, how do you deal with the fact that many people are just giving answers? Do you encourage don’t-know responses?