You have agreed to give up your time to complete a survey in return for some hoped-for benefit, such as a better customer experience, and then you reach the dreaded question that you can't answer.
Consider the survey I just completed for my golf course. The survey is supposed to be about my experience on that day rather than my experience overall. Part way through, I am asked to rate the quality of a number of things, such as the condition of the course, the experience at check-in and many others.
I could answer most of these, but when it came to the selection on the beverage cart I was stumped. I didn't go anywhere near the beverage cart that day. Maybe the selection was good, maybe it was bad. I am stuck — there is no "not applicable" option. Do I pick a response category at random? Do I guess, or copy my rating from other, similar questions?
Did the person designing the survey not understand the customer experience they were measuring, or did they just not care about wasting my time?
In the end I gave a neutral response (neither good nor bad) because I could not advance in the survey without answering this and a couple of other questions.
The best solution for the designer would have been to use skip patterns that directed me through the survey more appropriately (ask me first whether the questions apply). Even if this was not possible (for example, because it was a DIY job and the person or software lacked the capability), adding an option letting me mark the question as not applicable is an easy fix.
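To make the idea concrete, here is a minimal sketch of skip-pattern logic in Python. All names here (`ask_rating`, the topics, the response values) are hypothetical illustrations, not taken from any real survey platform: a screener question decides whether the rating question is even asked, and a skipped question is recorded as "N/A" rather than forcing a guessed answer.

```python
def ask_rating(topic, used_service, rating=None):
    """Record a rating only if the respondent actually used the service.

    If the screener question ('Did you use this today?') is answered
    'no', the rating question is skipped and stored as 'N/A' instead
    of forcing the respondent to invent an answer.
    """
    if not used_service:
        return (topic, "N/A")  # skip pattern: branch past the rating
    return (topic, rating)


# Hypothetical responses from the golf-course survey described above.
responses = [
    ask_rating("course condition", used_service=True, rating=4),
    ask_rating("check-in experience", used_service=True, rating=5),
    ask_rating("beverage cart selection", used_service=False),  # skipped
]
```

The point of the sketch is that the branching logic is trivial to implement; the hard part is remembering, at design time, that not every respondent experiences every part of the service.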
While there are some legitimate reasons for requiring an answer when people might otherwise decline (for example, when you want top-of-mind impressions of a company), it never makes sense to force answers that only dilute and misrepresent your data.
More pet peeves to come!