Most organizations and individuals are fairly well versed in the first part of the definition – “studious inquiry or examination” – thanks to their early exposure to gathering data, either via desk research or a primary market study or survey.
However, getting the most relevant and actionable information from that research involves the second part of the definition: the “interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws.”
Despite conducting what they consider to be rigorous and unbiased research, investors may look at a favorite stock that missed its earnings target and blame the overall industry’s performance declines, rather than investigating signs of weakness within the company itself. Sports executives may give too much weight to qualitative research (scouting) and not enough to statistical research and analysis, because that is what fits the organizational mold. Business owners may see declining unit sales and attribute them to increased competition, rather than to flaws in their own pricing, product, or service model. In each case, a pre-determined “answer” or range of results has taken precedence over conducting the research as fairly and objectively as possible.
Indeed, this behavior is often the result of confirmation bias, a phenomenon that leads people to seek out information that confirms their existing opinions while overlooking, ignoring, or rejecting information that refutes their beliefs. But confirmation bias can rear its head in far less obvious ways during the research process. Cherry-picking, framing, or interpreting the results of research simply to confirm existing beliefs can inflict significant damage on a product, service, brand, or company.
Several years ago, I was conducting research for a conference that had amassed a very large and happy delegate and sponsor base. However, a new company acquired the conference and wanted to quickly raise attendance and sponsorship prices to bring the event’s financial model in line with the many others it produced. During the research process, past attendees and sponsors praised the event, consistently citing the quality content and reasonable cost to attend as key reasons they liked it: even small companies could afford to come, making the event a true industry meeting place.
However, the conference management chose to categorize these statements as “irrelevant” and “self-serving,” and focused on the “high quality” of the content as justification for steeply raising prices. While conference-goers certainly valued quality, ignoring the comments about price, and about the benefit of making the conference accessible to as many people as possible, ultimately hurt the long-term viability of the event: attendance, sponsorship, and revenue all declined over the next several years.
What’s the lesson here?
When conducting research with clients, customers, or prospects, it’s important to quantify and qualify responses, in order to ensure that the data is less likely to be misconstrued or shoehorned into a pre-determined narrative. Ask follow-up questions to gauge the impact of other variables on each answer provided, so that each response can be put into context.
Most importantly, when analyzing the results, consider alternate ways of interpreting the information that may not jibe with preconceived notions. Allow dissenting voices within your team to be heard, and don’t hesitate to follow up with additional research if necessary.
In the end, research is conducted to build a better understanding of a particular issue and to provide guidance in making critical decisions. When the unbiased results of a carefully planned and executed research effort lead the decision making, the resulting strategy will be market-driven, rather than driven by preconceived notions.