In an article in today’s Wall Street Journal Online, author Carl Bialik questions whether results from online survey sites, specifically data findings from paid online survey site e-Rewards, are really applicable to the public at large.
E-Rewards recruits survey panel members, sometimes through frequent-flier programs, with the promise of rewards for completing surveys. E-Rewards’ currency can be redeemed for consumer-oriented rewards, like free movie rentals from Blockbuster, or for rewards that would presumably appeal more to business travelers, like airline miles and discounts at online luggage store eBags. When a new survey goes live, e-Rewards screens its panel members to get responses from, for instance, people in specific industries, people who work for large companies, the self-employed, or married women aged 26-46 in the computer industry. You get the idea.
Anyway, Bialik says that the type of business person who would participate in a survey panel like e-Rewards does not necessarily hold opinions that could be applicable to Corporate America in general. “…What they’re really reporting are the tendencies of business travelers who identify themselves as executives, sign up for a rewards program and then respond to an emailed invitation to participate in an online survey.”
The article goes on to suggest that because workers in blue-collar industries tend to travel less and to work away from an office and a computer, those industries would be under-represented. So to survey business people online and then try to draw conclusions like, “70% of business owners say they intend to…” might not really be accurate.
How accurate really IS data from paid online survey sites? I think the problem is far more pervasive than drawing overly broad conclusions from under-represented population segments. The problem is, how many people have sort of…fudged answers to survey prequalifiers in the hopes that their answers would lead to their being asked to complete the full survey for a reward? (And then, once accepted, completed the survey with fake or at least misleading information?) How prevalent is this problem, and how badly does it skew the survey results?
Of course, if you’re hoping to get thoughtful, valid answers from busy businesspeople, or from any other niche of the population, you’re going to have to offer some type of incentive. But that incentive alone gives exactly the kind of intelligent people these survey panels value a reason to falsify answers in order to receive the reward.
One research company that uses data from e-Rewards’ surveys talked about how they “scrub” data in the hopes of eliminating bogus responses, like those from people who didn’t spend enough time on the questions, or those, for example, who answered “Excellent” for all questions. And maybe that’s all the researchers can do, because after all, they’re not omniscient and can’t discern the truthfulness behind each individual response. But is that enough? Do survey findings really mean anything?
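The two checks the researchers mention, too little time spent and identical answers to every question (so-called straight-lining), are simple enough to sketch in a few lines. The field names and the time threshold below are hypothetical illustrations, not the research company’s actual method:

```python
# A minimal sketch of the "scrubbing" described above: flag responses
# completed too quickly or answered identically throughout.
# MIN_SECONDS is an assumed threshold, not a published figure.
MIN_SECONDS = 120

def is_suspect(response):
    """Return True if a response looks bogus under either check."""
    too_fast = response["seconds_taken"] < MIN_SECONDS
    # e.g. every answer is "Excellent" -> only one distinct value
    straight_lined = len(set(response["answers"])) == 1
    return too_fast or straight_lined

def scrub(responses):
    """Keep only responses that pass both checks."""
    return [r for r in responses if not is_suspect(r)]

sample = [
    {"seconds_taken": 45,  "answers": ["Excellent"] * 10},
    {"seconds_taken": 300, "answers": ["Good", "Fair", "Excellent", "Poor"]},
]
print(len(scrub(sample)))  # 1 — only the second response survives
```

Of course, these filters only catch the laziest fabrications; a respondent who varies their fake answers and takes their time sails right through, which is the limit the researchers themselves concede.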
Whether the survey results are misleading because of the wording of their conclusions, or because of the proportion of fabricated responses, it’s best to take all of these survey findings with a grain of salt. Perhaps the saying Mark Twain popularized (while crediting it to Disraeli) puts it best: “There are three kinds of lies: lies, damned lies, and statistics.”