The Challenge
Helping people understand why consumers say things but then don't do them is an important part of a researcher's job. The limitations of "say" research need to be understood, and "say" measures need to be used appropriately alongside "do" metrics.
The prevalence of digital customer journeys has provided much data on the path between a stated intention to trial or purchase a product (Consideration) and a sale (product purchased). Items like webpage landings, product enquiries, applications and, ultimately, usage fill in the blanks between intent (say) and action (do).
While most researchers understand the difference between say and do measures, most businesspeople (clients) do not. Unfortunately, I think this has undermined the value of researchers and research.
The simple logic used to assess the value of research is that researchers ask people what they think and what they will do, in surveys or in qualitative research, and that they do no assessment or screening of say versus do in their analysis. The media narrative around political polling and election results in recent years, and the cynicism about "focus groups", reinforce this view.
Unfortunately, this perception can be reinforced when researchers are asked to look at a particular issue but only have access to survey metrics and qualitative research, while a different type of knowledge worker, e.g., a data analyst, has access to internal behavioral data. Role types and differential access to data assets can make synthesis of results difficult in large organizations.
It is for these reasons that I always talk about research as being evidence-based: evidence can come in many different forms and modalities, and researchers should peer through the lens of all data types when understanding an issue. There is rarely one unbiased source.
It is also worthwhile highlighting why metrics like stated intention (say) and sales (do) differ. People seem to expect them to say the same thing, but as a researcher that expectation surprises me.
While I expect to find a generalizable truth in the data, I don't expect say and do metrics to suggest the same result all the time. To explain why they would not say the same thing, one needs a framework.
The Behavior Change Wheel – The COM-B Model
A great framework for understanding the say-do gap is the COM-B model, the core of the Behavior Change Wheel. The COM-B model is a synthesis of 19 behavior change frameworks from the literature into one model. It was originally developed in the health sciences and public health with the purpose of improving people's preventative health behavior.
Its application to the say-do problem is twofold. First, the three main dimensions of the model, Capability, Opportunity & Motivation, provide reasons why say may not translate into do. Second, the model supports designing interventions to encourage different behaviors. I have used it several times to explain why people may not behave a certain way in a specific context and timeframe.
The three dimensions of the COM-B can be broken down further (a short code sketch of this structure follows the list):
- Opportunity consists of the environmental context & resources, and social influences.
- Capability consists of knowledge, cognitive & interpersonal skills, memory, attention & decision processes, and behavioral regulation.
- Motivation consists of reinforcement, emotion, social & professional identity, beliefs about consequences, optimism, intentions, and goals.
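For those who like to see structure as code, the breakdown above can be expressed as a simple lookup, which is handy when tagging research evidence against COM-B components. This is a minimal illustrative sketch only; the component names mirror the bullets above, and the `tag_evidence` helper is hypothetical, not part of any official COM-B tooling.

```python
# Illustrative sketch: the COM-B dimensions and their components,
# mirroring the breakdown above. Names are labels, not an official API.
COM_B = {
    "Capability": [
        "knowledge",
        "cognitive & interpersonal skills",
        "memory, attention & decision processes",
        "behavioral regulation",
    ],
    "Opportunity": [
        "environmental context & resources",
        "social influences",
    ],
    "Motivation": [
        "reinforcement",
        "emotion",
        "social & professional identity",
        "beliefs about consequences",
        "optimism",
        "intentions",
        "goals",
    ],
}

def tag_evidence(note: str, dimension: str, component: str) -> dict:
    """Hypothetical helper: attach a COM-B tag to a piece of research evidence."""
    if component not in COM_B.get(dimension, []):
        raise ValueError(f"{component!r} is not a known {dimension} component")
    return {"note": note, "dimension": dimension, "component": component}

# Example: tagging a qualitative finding from a customer journey study.
print(tag_evidence("Customers defer to their mortgage broker", "Opportunity", "social influences"))
```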
A situation where I often see a difference between say and do is where brand and marketing metrics such as trial intention (consideration), ad recall and marketing activity have increased or declined, while product adoption (sales) does not behave as the marketing metrics would suggest. How have I found the COM-B model useful in explaining the difference?
In terms of Opportunity, a lot of trial intention metrics are asked of everyone, or at best of people who indicate they may be in market for a new product. Different products have different purchase cycles. In banking, for example, people tend to purchase a new product on average every 4-5 years, meaning that Consideration is being measured in a group much larger than those actually in market to purchase.
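A rough worked example makes the base-size point concrete. All figures here are invented for illustration (they are not drawn from any real tracker), and treating stated consideration and being in market as independent is a simplification.

```python
# Hypothetical illustration of why stated consideration (say) overstates
# the audience that can actually convert (do). All numbers are invented.
surveyed_customers = 10_000
stated_consideration = 0.30   # 30% say they would consider the product
purchase_cycle_years = 4.5    # average years between purchases (banking example)

in_market_rate = 1 / purchase_cycle_years   # ~22% are in market in any given year
considerers = surveyed_customers * stated_consideration
in_market_considerers = considerers * in_market_rate  # assumes independence

print(f"Stated considerers: {considerers:.0f}")             # 3000
print(f"In market this year: {in_market_considerers:.0f}")  # ~667
# Before any Capability or Motivation filter applies, 'do' is already
# capped at roughly a fifth of 'say' by the purchase cycle alone.
```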
Further, if the customer is being advised by another person, their decision is shaped by that influence. In Australia, for example, mortgage brokers help approximately 60% of all new home loan applicants find a home loan, up from approximately 40% a decade earlier, highlighting the relevance of advice and other influences. People do not ring their broker when they fill out a survey!
Opportunity is also relevant to the seller. Recently, because of COVID-related supply chain issues, many people tried to purchase goods and services only to find the order could not be filled in a reasonable time frame.
In the case of Capability, simply knowing about a particular product offer can be challenging. Most marketing campaigns, even successful ones, achieve effective reach of between 30% and 60%, depending on the size of the campaign and the relevance of the offer, which means many people never learn about a particular offer.
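Effective reach compounds with the in-market effect above. Extending the same invented numbers (again purely hypothetical, and treating the filters as independent):

```python
# Hypothetical say-do funnel: COM-B-related filters compound multiplicatively.
# All rates are invented for illustration; independence is a simplification.
say_rate = 0.30          # stated consideration among all surveyed (as above)
in_market = 0.22         # in market this year (Opportunity: purchase cycle)
effective_reach = 0.45   # actually know the offer exists (Capability)

do_rate = say_rate * in_market * effective_reach
print(f"Implied 'do' ceiling: {do_rate:.1%} vs 'say' of {say_rate:.0%}")  # ~3.0% vs 30%
# Say and do can differ by an order of magnitude without any respondent
# having answered dishonestly.
```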
If the product or offer is predominantly sold in digital channels, there is still a large group of people who wish to use other channels, for a range of reasons.
Decision-making processes and cognitive skills are also relevant. In many categories, the amount of information consumers must analyse to make a choice, even from a limited set of brands, can be complex and time consuming. This time and complexity are usually the reason consumers outsource the decision, taking a recommendation from a product comparison website or simply making the most convenient choice.
One of the ironies of digital purchase environments is that while salespeople are taken out of the process via automation and product information, consumers often appreciate having a person who can advise them and make the process easy.
Motivation is also relevant to purchasing decisions. Often an incentive motivates someone to act, prompting the decision to enter the market and purchase, as well as steering the choice away from what may have been in the original "consideration set" of brands.
Of note is that for many of these dimensions I have usually been able to find supporting evidence in qualitative research on customer journeys, often sitting in project archives. I have also usually found quantitative data points that corroborate them, helping explain why say and do metrics may not align.
More importantly, the analysis leads to a greater understanding of the purchase journey and can unite parts of the business (Marketing, Product & Channel) that often rely on different reference data sources, helping to resolve friction in the customer experience and to position the role of research in any project or company.