Practical Magic: The Inside Scoop on Using a Video Insights Platform to Tell Consumer Stories

GutCheck recently hosted a webinar with video insights partner Voxpopme to discuss how video can be used to tell impactful stories about consumer preferences and behaviors. 

Videos give a deeply personal view into the world of consumers in which you can pick up on genuine emotions, facial expressions, and sentiments in a way that text-based responses and focus groups can’t always convey. When you launch a video study, respondents are asked questions or told to perform tasks and then record their answers or actions via video. The tool then assembles clips based on specific themes or sentiments that arise. Telling a story through video helps stakeholders and decision makers know how to solve a problem or move forward with a product innovation. 

At the end of the webinar, experts Arianne Latimer, strategic account director for Voxpopme, and Brian McCarthy, senior product manager at GutCheck, weighed in on a participant Q&A session to discuss the nuts and bolts of using the video tools.

This discussion has been edited for clarity.

 

How do you manage poor video quality? 

Arianne Latimer: When videos come through the Voxpopme platform, we do quality checks on each one to make sure the respondents are visible and audible, and that they’re answering the question to the best of their ability. Videos that don’t meet those criteria get declined, and you don’t pay for them unless you want to access them for some reason. The videos you actually get to work with are high quality. That said, there are things we can’t control, such as how respondents recorded their videos, so our tool has a sound normalizer that brings all the videos up to the same volume so everything plays at the same level. Following best practices in how questions are crafted contributes to quality as well.

 

Is sentiment analysis just based on text transcription or does it take into account sarcasm? 

Brian McCarthy: There are limits to how automated text analytics perform. The tools specifically pick up on keywords like “I like” or “I love” or “I hate,” and that’s the level at which sentiment analysis is done. But before a video goes into analysis, a researcher will have viewed it, listened for tone and sarcasm, and pulled those moments out. Sentiment analysis is what we use to get to things quickly, but it’s the human analysis that lets us find things like sarcasm. That’s the great advantage of video: we get to see the face and hear the voice and tonality, and that’s how we craft the story.

 

Is this done via a cell phone or PC or both? 

BM: It can be done through both. Certain activities lend themselves better to a smartphone. When that happens, we’ll typically screen for respondents with access to smartphones and inform them that there’s an activity that will require them to record via a smartphone. But you do have the option of people using PCs to record. 

AL: The good thing about using video as an agile qualitative tool is that you’re not limited. As long as a respondent has a computer or smartphone, the tool is flexible.

 

Have you found that respondents have a tendency to be polite when they have their face on screen? 

BM: It depends on what type of questions you ask. One thing you can do is ask if there’s something a respondent dislikes, or how they would improve an idea. That way, people can take the gloves off and say what they feel. 

AL: Respondents feel like they’re participating and have a voice, so while I wouldn’t say respondents are mean, they’re actually just real. It’s an opportunity where they get to share what they think and feel. We’ve done some research asking respondents what they like about recording video, and the most common thing they say is that they like being heard and having a voice in this process. So again, that’s where the quality of the video comes in, because they do feel like they can share what they think, whether it’s positive or negative or just really personal.

 

I’m trying to reconcile using video for concept feedback and also limiting the number of videos. If you’re going through several concepts, do you have respondents just focus on their favorite or least favorite? 

BM: There are a couple of different ways to use video for concept feedback. If you’re doing monadic testing, everyone records one video, which keeps things very clean. If you’re doing sequential monadic testing, we recommend two or three concepts, because beyond that people start to fatigue, both on the concepts themselves and on recording a video for each one. With two or three, we’ve found the results are quite good, especially in an A/B test, since the compare and contrast is really helpful. The other option is to limit the number of videos you collect. If you have a sample size of 200 and you only ask for 30 videos per concept, different respondents can share videos for each concept. That gives us enough videos and reactions to provide directional feedback without taxing respondents as much.

 

Regarding GDPR [General Data Protection Regulation], what are the limitations around sharing PII [Personally Identifiable Information]? 

BM: Because PII is quite sensitive, especially when you bring video into the context, we do not share data. We effectively isolate the video as its own data source and don’t include the person’s name, age, region, or ethnicity. This allows you to collect and analyze data without having issues with other information that can tie to PII. It’s an evolving area, and we do consult with legal teams quite frequently as far as what the limitations are and how we should consider them based on a country’s regulations.

 

Any best practices on defining a sample size? 

BM: One thing we’ve found with video is that we like to use it qualitatively and directionally to augment a qual or quant study. So we’ve found that a good sample size for video is about 30 respondents. It provides enough directional feedback and gives a solid idea of the overall sentiments without a volume of videos so large that it becomes difficult to analyze. There also tends to be a lot of drop-off on video questions because people can’t record in the environment they’re in or at a certain time. So about 30 per question is a good size for in-depth, robust responses, which are much more beneficial than typical open-end responses. 

AL: One thing to take into consideration is that not everyone taking your quant survey will opt in to video questions, simply because they don’t have the technology enabled on their device or aren’t in a location where it’s appropriate to record at that moment. If you focus on using video in these qualitative, directional ways, you know your entire sample size isn’t necessary for capturing those videos. And one reason to use video is to get to a “wow” moment, even if you have fewer than 30 respondents. Sometimes it only takes two respondents to find one who puts it so articulately that they shine a light on the problem to solve. Having more gives you more options and a wider range of stories to pull from. But don’t worry about capturing everyone on video; the goal is to get deeper and more personal.

 

What are typical turnaround times? 

BM: To complete a video study, it takes about a week. It takes a day or two to figure out your objective, craft video questions, and go back and forth on how to ask questions in the best way based on what we’re trying to get out of the study. Then we field the study, and depending on how difficult it is to reach the audience, it can be 24 to 72 hours or more to collect all the respondent videos and filter through the good and the bad. Then the analysis takes about a day to turn around. Although a week is typical, it can sometimes go faster, but it’s contingent on how difficult the audience is and how many questions we’re creating stories for.

 

How large are the videos? 

BM: At GutCheck, we host all the videos; you don’t have to worry about that. Camera sensors are getting bigger, so resolutions are getting higher, but the actual video output (the highlight reels we put together) runs about a minute and a half and around 30 megabytes, so the files aren’t that big as far as sharing goes. We make them shareable via a link. 

To hear the full discussion, watch the webinar below.

Watch Webinar