Books, training, and other advice from non-CX sources often suggest that – especially if you are starting out – you should interview anybody you can find, or whoever is "easiest." If we care about doing research rigorously and gathering strong evidence and data, this is poor advice.
Recruiting Participants
Recruiting means selecting the right people for your study. Criteria might include demographics, user type or profile, how long someone has been a customer, whether they are a competitor's customer, or other behaviors. Some studies only want to talk to people who shop on a certain site weekly. You might want to speak to non-customers to learn more about their tasks and needs.
This is why CX and UX practitioners rarely grab coworkers, random people in coffee shops, or whoever is "easiest" to find. The easiest people to find are rarely a well-recruited audience, and observing or interviewing the "wrong" people works against our goal of being informed by good evidence and data.
The smaller the sample size, the more attention we have to pay to making sure that each person is a good representative of the persona, segment, or profile that they represent.
Mistake: Only Talking To Happy Customers
Another common recruiting mistake, often made by non-researchers, is only or mostly talking to currently happy customers. If the advice is to interview whoever is easiest, nothing seems easier than talking to someone who is happy with your company or its products, services, and experiences (PSE for short).
If we want to improve our PSE, then we need to hear the complaints, concerns, frustrations, and disappointments. We need to hear them directly from customers. We need to watch them using our PSE so that we can see the negatives even when people are polite. Many customers will tell you something is “OK” or “easy,” but when we watch them struggle, sigh, and hit dead ends, actions speak louder than words.
We at Delta CX once ran a research project for a famous company. They admitted that when they did their own studies, they usually recruited only happy customers; they didn't want to hear the negatives. They allowed us to meet B2B customers who were still customers but dissatisfied, which opened up a huge new world of possible improvements and innovations.
Conclusion
We must be careful about the advice that is out there. Start by checking the source: if the person teaching or advising on CX or UX research has little or no background in CX or UX research, treat that advice with skepticism. It might not be accurate.
Remember that only speaking to happy customers fuels our confirmation bias: we want to hear that we're great, so we talk to people who agree that we're great.
As you transform toward customer-centricity, it's easy to be inspired by "continuous discovery" and other books or training that can make it sound as if every worker should do every task they feel like doing. As we care more about quality – and use internal governance models to monitor our standards – we should shift away from advice that keeps us in cycles of poor habits, culture, and outcomes.
There’s data, and there’s good data. Let’s raise our standards for research and the techniques we use to better understand our target audiences.