It is probably not surprising that one of the conclusions of the Opt-in Online Panel Task Force created by AAPOR pertained to quality. As the task force concluded, “researchers should choose the panels they use carefully” (get report here) because not all panels are equal. The obvious question for clients and researchers trying to choose a panel is how, and perhaps even why.
The why is easier to answer. We now have a number of compelling studies showing that different panels can produce different results. These include, among others, the Gittelman and Trimarchi 2009 CASRO paper that compared a number of U.S. panels and the recent MRIA study presented at Net Gain 4 that compared 14 Canadian panels. Panel recruitment and composition (e.g., the percentage of professional respondents) vary enough across panels to undermine comparability.
The how is much more difficult to answer. ESOMAR has helpfully provided a list of 26 questions to ask about online samples to guide research buyers. Some of the answers could genuinely help a potential client distinguish a quality panel from a non-quality one, but others are less clear.
We might all agree that a panel used for non-research purposes (question 4) is likely less useful than one used only for research. Answers to other questions, however, do not give unambiguous guidance. How does a client determine which rewards produce the best panel (question 6), or the optimal number of online surveys a person should be invited to (question 15)? Add to this the fact that panel companies will inevitably frame their answers with marketing spin. Questions without clear answers are not all that helpful.
Telephone surveys have been privileged because, until recently, they could reach most of the population, and we could gauge the quality of the resulting completions in terms of response rate and the representativeness of the demographic profile. Online research must find quality measures that give us the same level of confidence.
Non-probability-based surveys have always been controversial, and the search for measures or indicators of quality reflects the need to root research in a common understanding or reference point that has so far been missing from online panel research.
Academics are having a field day tearing into the commercial side: they understand what would be best in an ideal world but are not forced to apply it in the commercial one (Trimarchi and Gittelman, 2010).
The AAPOR report represents an important contribution to the debate about online panel research and further underscores the need for continued research-on-research about panels. Without a workable, agreed-upon metric for panel quality, however, the quality question will not go away.