Sometimes polls seem to get it wrong. The election result turns out differently than we expected. The product fails at launch despite assurances that it is a winner. The heralded policy proposal is met with scepticism when released publicly.
Consider the story that Kim Dedeker told about research failure in 2006:
“It’s when we field a concept test that identifies a strong concept. Then our CMK manager recommends that the brand put resources behind it. The marketing, R&D and research teams all invest countless hours and make a huge investment in further developing the product and the copy.
Then later, closer to launch, we field concept and use tests and get disappointing results. And rather than finding an issue with the product, we find that the concept was no good. We realize that the data we’d been basing our decisions on was flawed from the start.” Kim Dedeker, 2006
Certainly lots of products fail, but what we don’t know is how many of those failures reflect flawed data. How do we separate the role of the data (and the subsequent analysis provided to the client) from the biases and self-interests of clients, and from the execution details of product development, launch and marketing?
Nevertheless, the idea that market research polls are becoming less accurate is gaining momentum. As O’Connell wrote in October,
“You’d never know from the way politicians and businesses casually base their everyday decisions on survey data that there’s big trouble brewing in the science of finding out what people think—and that the effects may start to show up as soon as this month, with the U.S. heading into critical midterm elections.” Andrew O’Connell, Reading the Public Mind
Incidents of actual or perceived failure to predict election outcomes linger in our collective memory (the British election of 1992; the 2004 Canadian election), and it is these high-profile events that are most likely to offer the next piece of evidence (to others, at least) that maybe we are not as accurate as we claim to be.
When (not if) it happens next, the current discourse about polls is such that the industry will face greater scrutiny. How can we most effectively inoculate ourselves against these charges? Is there something we can do differently? Science is not enough, in part because the threats are scientifically valid (as we replace random samples with convenience ones, for example) and in part because science is never enough.
It is clearly time to come together to define what survey quality means and how it should be used. In the past the “margin of error” and perhaps the “response rate” were the resting points for our conception of quality, but these pillars are weak and the definition of quality must move beyond them. Perhaps it is time to define data quality in terms of something looser, like “integrity”, and to start attaching a form of confidence to the conclusions we make based on what we know about human behaviour. A number is just a number, and we have the tools, if we use them, to make sure people understand the meaning (and limitations) of that number.
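To see how narrow that first pillar is, consider the conventional margin-of-error calculation. The sketch below (a minimal illustration in Python, assuming a simple random sample and 95% confidence) quantifies sampling error only; it says nothing about coverage or non-response bias, which is precisely why it cannot carry the full weight of “quality” once random samples give way to convenience ones.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the confidence interval for a proportion,
    assuming a simple random sample (z = 1.96 gives ~95% confidence).
    Captures sampling error only, not coverage or non-response bias."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents with 50% support reports roughly +/- 3.1 points,
# regardless of how the sample was actually recruited.
print(round(margin_of_error(0.5, 1000) * 100, 1))  # 3.1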