Quantitative research is inherently risky. Though it’s true that numbers don’t lie, the people who gather, analyze, and interpret them are quite capable of fudging the facts every now and then. From design to execution, there are plenty of chances for human error to affect a quantitative study. And while we’ve already warned you about the perils of conscious and unconscious bias and the need to balance question scope, there are a few other not-so-obvious pitfalls that are easy to fall prey to if you’re not careful.
1. Not Asking Enough People—Or the Right Ones
To arrive at statistically significant insights, it is crucial to recruit an adequate sample size for your quantitative studies. The larger the sample, the less prone to flukes and the more representative of the greater population your study will be, giving you greater confidence in your findings. It is just as essential to identify your target audience up front and recruit from it accordingly, so you get the insights you need from the people you want. For example, don’t send out a survey to your entire customer base if you’re only interested in the feedback of young dads who recently bought your latest blender.
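To put “adequate sample size” in concrete terms, here’s a minimal sketch (the function name and defaults are our own, not from any particular survey tool) using the standard normal-approximation formula for estimating a proportion within a chosen margin of error:

```python
import math

def sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Minimum respondents needed to estimate a proportion within
    the given margin of error at the confidence level implied by z
    (1.96 ~ 95%). p = 0.5 is the most conservative assumption,
    since it maximizes p * (1 - p)."""
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# A +/-5% margin at 95% confidence needs roughly 385 respondents;
# tightening to +/-3% pushes the requirement past 1,000.
print(sample_size(0.05))  # 385
print(sample_size(0.03))  # 1068
```

Note how quickly the requirement grows as you demand more precision; this is why undersampled studies are so prone to flukes.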
2. Automatically Treating Your Results Like Verified Statistics
It can be very tempting to assume that because your study is measured in numbers and percentages, it is automatically rigorous and infallible. But a lot needs to happen before the data you’ve collected can be validated as statistically meaningful. Until you’ve ensured that your research design, sampling scheme, and data cleaning methods are meticulous and appropriate, you should treat your results like the essentially meaningless numbers they still are. Keeping this in mind will not only encourage necessary skepticism throughout the research process, but also remind you to interpret conclusions in context.
3. Designing Your Study as an Impersonal Interrogation
Customers don’t want to be treated as faceless data points, which means they don’t want to answer questions that don’t correspond to their interests, desires, behaviors, or habits. Consumers are more eager to participate when they feel they will be heard, and the more often random services ask them to complete surveys just because they bought something, the less they feel they’re making a difference. Be sure to incorporate hints of the story you’re trying to tell into your study to help bring it to life and give meaning to your respondents’ participation. One great way is to personalize surveys with information or incentives specific to the customer and their purchase history. Such techniques also make the project feel like less of an interrogation and more of a conversation.
4. Asking Open-Ended Questions
Or, in other words, you opted for a quantitative study when you probably should have conducted qualitative research. These two market research methods are meant to answer different kinds of questions, so it’s important to know from the get-go what your study is meant to accomplish. Quantitative research is best for answering questions that have more definitive answers—such as “how many,” “how often,” “when,” and “what”—so asking consumers to elaborate on what are meant to be succinct answers can throw your audience for a loop. On the other hand, qualitative research digs into the whys and hows of consumer behavior, allowing for more expressive responses and the chance to probe participants for more details. If your objective is more akin to a quick inquiry than a full-scale investigation, then quant is probably the way to go.
5. Making it Way. Too. Long.
Ain’t nobody got time for that, especially if it’s a survey that’s been sprung on a customer via email or pop-up. The longer a survey is, the more likely respondents are to lose interest or run out of time, and your response rate will quickly suffer. Your content may also begin to sacrifice depth for breadth if you cram too many questions into one study, and other aspects of online market research can compound the problem: excessive intros, too many buttons, or one long page that takes ages to scroll through. To lighten up a survey, evaluate each question individually to ensure it advances the study’s objectives, or even explore interactive formats that could help enliven the session. And of course, always be honest about completion times!
While there’s always more you can do to help guarantee that your quantitative results are valid, watching out for these five common mistakes will make your work that much easier. For an example of how quantitative agile research can help measure consumer behavior and gauge the potential for related concepts, check out the report below on craft beer drinkers’ attitudes and usage.