To ensure that the information you’re getting from the IDIs is both relevant and useful, it’s important to divide your base of interview subjects into groups that are representative of your target audience.
For example, consider a home improvement retailer looking to better understand customer shopping drivers, with the following customer mix:
· 65% pro contractors
· 15% hardcore DIYers
· 15% weekend home-improvement enthusiasts
· 5% casual customers
To get the most actionable information and insights, the percentage of completed IDIs from each segment should mirror the actual customer-mix percentages. Depending on the type of data needed, further segmentation into sub-groups, by seniority, decision maker versus influencer, or other criteria, may be warranted to achieve an appropriate mix of respondents.
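The quota arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical total of 40 interviews; the segment names follow the example mix.

```python
# Translating the customer mix into per-segment interview quotas.
# The total of 40 interviews is an illustrative assumption.
segment_mix = {
    "pro contractor": 0.65,
    "hardcore DIYer": 0.15,
    "weekend enthusiast": 0.15,
    "casual customer": 0.05,
}

def interview_quotas(mix, total_interviews):
    """Round each segment's share to whole interviews, assigning any
    rounding remainder to the largest segment so quotas sum to the total."""
    quotas = {seg: round(share * total_interviews) for seg, share in mix.items()}
    largest = max(mix, key=mix.get)
    quotas[largest] += total_interviews - sum(quotas.values())
    return quotas

print(interview_quotas(segment_mix, 40))
# e.g. {'pro contractor': 26, 'hardcore DIYer': 6, ...}
```

With 40 interviews, the 65% segment yields 26 completes, matching the "more than half" point made below.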
An interview grid, which lays out interview attempts and completes by segment (e.g., customer-type segments listed across the top, with sub-groups listed vertically), is an effective tool for ensuring that researchers not only reach the right respondents but also structure their interviews strategically.
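In data terms, the grid described above is just a two-dimensional tally of attempts and completes. A minimal sketch, with segment and sub-group names assumed from the running example:

```python
# Sketch of an interview grid: segments across the top, sub-groups down
# the side, each cell tracking attempts and completes.
segments = ["pro contractor", "hardcore DIYer",
            "weekend enthusiast", "casual customer"]
subgroups = ["decision maker", "influencer"]

# Each cell holds running counts for that sub-group/segment pair.
grid = {sub: {seg: {"attempts": 0, "completes": 0} for seg in segments}
        for sub in subgroups}

def log_attempt(grid, subgroup, segment, completed):
    """Record one outreach attempt, and a complete if the interview happened."""
    cell = grid[subgroup][segment]
    cell["attempts"] += 1
    if completed:
        cell["completes"] += 1

log_attempt(grid, "decision maker", "pro contractor", completed=True)
log_attempt(grid, "influencer", "casual customer", completed=False)
```

Comparing each cell's completes against its quota at a glance is what lets the researcher stagger interviews across segments rather than exhausting one column at a time.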
For example, consider a researcher who focuses solely on completing interviews with the first segment, the pro contractors, moving on to the next segment only when the pool of potential respondents from the first group has been exhausted. Because pro contractors make up 65% of the respondent base, more than half of the total required interviews would be completed before the researcher ever spoke with any other type of customer (assuming the proper segment-mix ratios are maintained).
Structuring the process around a planned interview grid instead allows for a more organic approach, with completes staggered across segments so that insights uncovered in one segment can be explored with the entire audience. While it's possible to cover new topics during backchecking, doing so is a significantly less efficient way to ensure all relevant topics are covered by each segment.
Adjusting the Discussion Guide
IDIs are not designed to be statistically significant (given the small sample size), so it's entirely appropriate to modify the discussion topics or questions as the research process unfolds. Researchers should review and update the discussion guide so that any new areas or topics for discussion are covered in each subsequent interview, which will likely leave the final guide with more topics than originally developed.
However, it's also important to continually check that the topics respondents raise remain relevant to the overall research plan. Readjusting the discussion guide to chase topics that are tangential or only loosely relevant to most respondents can consume a disproportionate amount of time and effort, at the expense of delving more deeply into the most relevant topics.
If the process is followed properly, the initial list of topics will likely have swelled greatly, with dozens of sub-topics or thoughts gathered during each interview. To make sense of the insights, it's important to devise a specific coding mechanism that allows for rigorous analysis at the end of the process.
When capturing IDIs (either live or via a recording), it’s also a good idea to code each response with a unique number corresponding to the respondent, as well as a letter or letters indicating specific variables, such as seniority or customer type. This will make it easier to analyze the results, as well as get basic (though non-scientific) insights into the frequency of similar responses across the base of respondents.
For example, consider the following coding scheme: (a) pro contractor, (b) hardcore DIYer, (c) weekend home improvement enthusiast, and (d) casual customer. Meanwhile, a secondary ($) code would indicate a decision-maker, and a (<) code would indicate an influencer.
So, responses from the first interview might be coded (1a$), indicating a pro contractor with decision-making authority, whereas each response in the next interview might be coded (2c<), indicating a weekend home-improvement enthusiast with only purchase-influencing power.
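The coding scheme above can be unpacked mechanically. A short sketch, using the letter and symbol meanings from the example (the `parse_code` helper is illustrative, not part of any standard tool):

```python
import re

# Lookup tables follow the example scheme: (a)-(d) for customer type,
# ($) for decision maker, (<) for influencer.
SEGMENTS = {"a": "pro contractor", "b": "hardcore DIYer",
            "c": "weekend enthusiast", "d": "casual customer"}
ROLES = {"$": "decision maker", "<": "influencer"}

def parse_code(code):
    """Split a code like '1a$' into respondent number, segment, and role."""
    m = re.fullmatch(r"(\d+)([a-d])([$<])", code)
    if not m:
        raise ValueError(f"unrecognized code: {code}")
    num, seg, role = m.groups()
    return {"respondent": int(num),
            "segment": SEGMENTS[seg],
            "role": ROLES[role]}

print(parse_code("1a$"))
# {'respondent': 1, 'segment': 'pro contractor', 'role': 'decision maker'}
```

Keeping the respondent number in every code is what preserves attribution once responses are pooled for analysis.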
The point of this seemingly complex coding mechanism is to make it much easier to classify and categorize information after all the interviews have been completed. Without re-reading each interview, a piece of information can be grouped on a "tree" with other relevant information without losing its attribution: the respondent's position in the value chain, seniority level, or other variables that will help the researcher analyze and report the results.
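The "tree" grouping can be sketched as a nested mapping keyed by segment and role. The sample quotes below are invented purely for illustration:

```python
from collections import defaultdict

# Coded response snippets; the quotes are invented for illustration.
responses = [
    ("1a$", "Price matters less than delivery speed."),
    ("2c<", "I browse online before every store visit."),
    ("3a$", "Bulk discounts drive where I buy."),
]

# Group snippets into a tree keyed by segment letter, then role symbol,
# so quotes can be pulled by group without re-reading transcripts.
tree = defaultdict(lambda: defaultdict(list))
for code, quote in responses:
    segment, role = code[-2], code[-1]   # e.g. 'a' and '$'
    tree[segment][role].append((code, quote))

# All decision-maker quotes from pro contractors, attribution intact:
for code, quote in tree["a"]["$"]:
    print(code, quote)
```

Because each stored snippet keeps its full code, the researcher can always trace a quote back to the individual respondent and their attributes.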
Next week, we’ll conclude this topic with some strategies for basic analysis and reporting of the findings.