Imagine you are in front of an audience, presenting findings from the most recent in a series of polls. The latest number is lower than the previous one. What do you say?
The temptation to offer a compelling story to explain the last data point in a series is almost overwhelming. It is in our nature to want to understand information in terms of a story.
The truth is that unless the change is fundamental – altering the public opinion landscape profoundly – you probably don’t know how important the decline (and the last point) is until the next survey. Only the next survey will show whether (a) the change is real and persistent, or (b) it was ephemeral.
Although we have a way to understand whether change is meaningful (statistical significance), there are a number of reasons to be cautious.
- First, depending on the sample size, a difference between two data points can be significant even though the actual difference is substantively small. And, more importantly, all a significance test shows is that the two points are unlikely to have come from the same population – it does not mean that the observed difference between the two points is the true change (an observed 6-point decline might really be only a couple of points).
- Second, when considering a relationship over time, it is advisable to use a form of time-series analysis because, in simple terms, a time series has errors that are correlated across time, which a simple point-to-point comparison ignores.
- Third, the less formed public opinion is on an issue, the more likely we are to find changes from survey to survey, and the harder it may be to identify the reason for any change.
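The first caution above can be made concrete with a minimal sketch. The sample sizes and approval figures below are hypothetical; the point is that a drop can pass a standard two-proportion significance test while the confidence interval around the drop remains wide enough that the true change could be much smaller (or larger) than the observed 6 points.

```python
import math

def diff_of_proportions(p1, n1, p2, n2, z_crit=1.96):
    """Two-proportion z-test plus a 95% CI for the difference p1 - p2."""
    diff = p1 - p2
    # Pooled standard error for the hypothesis test (H0: no change).
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se_pool = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = diff / se_pool
    # Unpooled standard error for the confidence interval on the difference.
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    ci = (diff - z_crit * se, diff + z_crit * se)
    return z, ci

# Hypothetical waves of n=1000: approval falls from 52% to 46%.
z, (lo, hi) = diff_of_proportions(0.52, 1000, 0.46, 1000)
print(round(z, 2))                 # 2.68 -> beyond 1.96, so "significant"
print(round(lo, 3), round(hi, 3))  # ...but the CI runs from ~1.6 to ~10.4 points
```

So the test licenses the claim "something changed", not the claim "opinion fell by exactly 6 points" – the basis for the warning above.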
So the danger we face in standing up and offering a thoughtful explanation for the last point is that our effort can be proved inadequate when we get the next data point. That sets us up, unfortunately, to spin a new yarn about what has happened to produce this newest last point.
We can’t stop spinning stories, but we do need to be careful and adopt a clear strategy that focuses our explanations where they are most likely to be true.
The easiest way to reduce the incentive to tell a story about change that is not real is to do the following:
- Look at the historical changes and place the change you currently observe in that context. Is this change smaller or larger than previous changes? Is there a long-term trend (being careful not to cherry-pick the data points)?
- Think about all of the changes in the survey in light of the question types. Depending on the survey, some questions, such as measures of core values, should not change much. So be more circumspect about movement in questions where you would not expect change, and compare the changes in variables that should be moving with those you expect to stay stable.
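The first check above can be sketched in a few lines: instead of explaining the latest point in isolation, compare the newest wave-to-wave change against all the earlier changes in the series. The poll readings below are hypothetical.

```python
import statistics

# Hypothetical poll series, in percentage points, oldest first.
series = [48, 51, 49, 52, 50, 53, 47]

# Wave-to-wave changes across the series.
changes = [b - a for a, b in zip(series, series[1:])]
latest = changes[-1]       # the drop we are tempted to explain
previous = changes[:-1]    # historical context for that drop

# Typical size of earlier wave-to-wave moves.
typical = statistics.mean(abs(c) for c in previous)
print(latest)   # -6
print(typical)  # 2.6

# If |latest| is close to the typical move, the "story" may just be noise;
# checking for a long-run trend across many waves (rather than comparing
# two convenient points) guards against cherry-picking.
```

Here the latest 6-point drop is larger than the typical 2.6-point move but not wildly so, which is exactly the situation where waiting for the next survey, as argued above, is the prudent call.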