As agile market research continues to demonstrate its value and become the new industry norm, iteration has become the name of the game. Iterative product development is often summed up by the common mantra of “test, iterate, repeat.” But as succinct as this summary is, it can be misleading in its portrayal of iterative research. While testing your concepts, claims, or other materials is conducted through market research, the work doesn’t end when the study is over. Applying the resulting consumer insights to your next iteration is part of the research process itself, an analytical step that involves more than, say, tweaking the respondents’ favorite tagline. The value of iteration lies not in sorting your study’s winners from its losers; it lies in the insights, and in how those learnings shape your future business.
The Point of Testing Isn’t to Iterate—It’s to Learn
Researchers and marketers alike feel immense pressure to get their product vetted for success and onto shelves as quickly as possible. That need for speed is when the value of iteration becomes most apparent. The purpose of iterative research is to fail faster, fueling rapid decision-making that accelerates product development. But as evidenced by the more than 70% of Consumer Packaged Goods (CPG) products that fail within their first year, we’re not learning enough from our research. When brands become too focused on reaching the next round of research so they can take their concept to the next level, they risk reducing this testing stage to a box-checking exercise that doesn’t actually teach them much.
And That Means Moving Beyond Pass vs. Fail
When we think of testing, we tend to think of two outcomes: pass or fail. But those two extremes are no longer sufficient. Not only do they lack context, they are not always accurate. A concept is made up of various pieces — often a combination of copy, visuals, materials, and the product itself — and each of those elements factors into a respondent’s assessment. Furthermore, no concept wins outright, and certainly not across every metric. That is why the goal of your market research shouldn’t be simply to pick winners for prioritization. It is just as important to examine what didn’t score well and understand why, in order to develop holistic learnings that can be applied not just to your current development cycle but to your future endeavors as well.
Ensure That Your Iteration Is Truly Insightful
So how can you make sure that you’re evaluating your testing results for optimal understanding of what works and what doesn’t? Here are just a few small changes you can make for big impact on your analysis and its application to crucial business decisions.
- Look for similarities between wins and losses. The differences between your tested stimuli are probably pretty obvious to your research and brand teams. But once you’ve learned what your respondents like and don’t like about each concept, it’s important to see where the overall winners and losers overlap. This could help reveal hidden strengths in the poorly received concepts, as well as overlooked weaknesses in the preferred ones.
- Know when to be lean, and when to dive deep. One of the most important skills in agile research is knowing when an insight is, well, insightful enough, and when to dig deeper into your target audience’s perceptions. When your testing is done, be honest about which improved executions the evidence you’ve accrued can actually support, and whether you need more from your respondents.
- Don’t forget to synthesize. Building an up-to-date wealth of customer intelligence requires connecting newly acquired knowledge with what you already have. Before presenting your report to colleagues and stakeholders, take the time to synthesize prior learnings with new ones so you can better understand how your results alter, reinforce, or negate what you already know.
Achieving insightful iteration means moving beyond determining which concepts pass or fail testing toward a comprehensive evaluation of strengths and weaknesses across all stimuli. Only then can you achieve true validation, refining your development with every step and study. To learn how Google stays ahead of an industry that moves at breakneck speed by failing fast and applying insight to iteration, check out the case study below.