Updated May 8th, 2019.
Quantitative research is inherently risky. Though it’s true that numbers don’t lie, the people who gather, analyze, and interpret them are quite capable of fudging the facts every now and then. From design to execution, there are plenty of chances for human error to affect a quantitative study. And while we’ve already warned you about the perils of conscious and unconscious bias and the importance of balancing question scope, there are a few other not-so-obvious pitfalls that are easy to fall prey to if you’re not careful.
1. Not Asking Enough People—Or the Right Ones
In order to arrive at statistically significant insights, it is crucial to recruit an adequate sample size for your quantitative survey. The larger the sample, the less prone to flukes and more representative of the greater population your study will be, giving you greater confidence in your findings. Additionally, ensuring that you have already identified your target audience and pursued them accordingly is essential to getting the insights you need from the people you want. For example, don’t send out a survey to your entire customer base if you’re only interested in the feedback of young dads who recently bought your latest blender.
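As a rough, hypothetical illustration (not from the original article), the standard sample-size formula for estimating a proportion, n = z²·p(1−p)/e², can show how margin of error and confidence level drive how many respondents you need. The function name and defaults below are our own choices:

```python
import math

def required_sample_size(margin_of_error, confidence=0.95, proportion=0.5):
    """Estimate respondents needed to measure a proportion.

    Uses the standard formula n = z^2 * p(1-p) / e^2, assuming a large
    population and simple random sampling. proportion=0.5 is the most
    conservative (largest) assumption.
    """
    # z-scores for common confidence levels
    z_scores = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}
    z = z_scores[confidence]
    n = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

# A ±5% margin at 95% confidence requires roughly 385 completes
print(required_sample_size(0.05))
```

Note how tightening the margin of error or raising the confidence level inflates the required sample quickly, which is one reason underpowered surveys are such a common pitfall.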
2. Automatically Treating Your Results Like Verified Statistics
It can be very tempting to assume that because your quantitative study is measured in numbers and percentages, it is automatically rigorous and infallible. But a lot needs to happen before the data you’ve collected can be validated as statistically meaningful. Until you’ve ensured that your research design, sampling scheme, and data cleaning methods are meticulous and appropriate, you should treat your results like the essentially meaningless numbers they still are. Keeping this in mind will not only encourage necessary skepticism throughout the research process, but also remind researchers to interpret conclusions in context.
3. Designing Your Quantitative Study as an Impersonal Interrogation
Customers don’t want to be treated as faceless data points, which means they don’t want to answer questions that don’t correspond to their interests, desires, behaviors, or habits. Consumers are more eager to participate if they feel they will be heard; and the more often random services ask them to complete quantitative surveys just because they bought something, the less they feel they’re making a difference. Be sure to incorporate hints of the story you’re trying to tell into your study to help bring it to life and give meaning to your respondents’ participation. One great way is to personalize surveys with information or incentives specific to the customer and their purchase history. Such techniques also make the project feel like less of an interrogation and more of a conversation.
4. Asking Open-Ended Questions
Or, in other words, you opted for a quantitative study when you probably should have conducted qualitative research. These two market research methods are meant to answer different kinds of questions, so it’s important to know from the get-go what your study is meant to accomplish. Quantitative research is best for answering questions that have more definitive answers—such as “how many,” “how often,” “when,” and “what”—so asking consumers to elaborate on what are meant to be succinct answers can throw your audience for a loop. On the other hand, qualitative research digs into the whys and hows of consumer behavior, allowing for more expressive responses and the chance to probe participants for more details. If your objective is more akin to a quick inquiry than a full-scale investigation, then quant is probably the way to go.
5. Making It Way. Too. Long.
Ain’t nobody got time for that, especially if it’s a survey that’s been sprung on a customer via email or pop-up. The longer a survey is, the more likely respondents are to lose interest or run out of time, and your response rate will quickly suffer. Your content may also begin to sacrifice depth for breadth if you cram too many questions into one study, and other aspects of online market research can compound the problem, such as excessive intros, too many buttons, or one long page that feels like it takes ages to scroll through. To lighten up a quantitative survey, evaluate each question individually to ensure it advances the study’s objectives, or even explore methods of interactivity that could help enliven the session. And of course, always be honest about completion times!
Types of Quantitative Research Design
There are several ways to design quantitative studies, depending on the methodology and research objectives. Let’s focus on a few ways to design quantitative surveys for concept testing, which is one of the most common use cases we see among our clients. Within concept testing, there are three common types of research design that can be used to gather feedback from your target audience.
Monadic design
In a monadic design, respondents only see one concept. This allows for the cleanest read by most closely simulating a real-life situation (for example, consumers don’t consider versions of the same product; they consider one version against competitive products already in market). It completely eliminates the potential for order bias and minimizes respondent fatigue. However, because monadic studies require a larger sample size, they can take longer to field and can often be more expensive.
Sequential monadic design
A sequential monadic design is often more affordable, as each respondent evaluates multiple stimuli. This type of design is also better for testing a group of stimuli that are distinct from each other. The overall sample size can be smaller, and randomizing the order of presentation minimizes the risk of order bias. However, if you’re testing stimuli that are similar, respondents can have a hard time recognizing differences.
Grid design
Similar to sequential monadic designs, grid designs are used when you have multiple concepts, ideas, or statements to test at once. It is usually recommended to reserve grid designs for testing short pieces of copy such as names, flavors, varieties, or scents. Keep in mind that grid questions can be taxing for survey respondents, which can lead to fatigue.
While there’s always more you can do to help guarantee that your quantitative research design and results are valid, watching out for these five common mistakes will make your work that much easier. For an example of how quantitative agile research can help measure consumer behavior and gauge the potential for related concepts, check out the report below on craft beer drinkers’ attitudes and usage.