How many interviews until validation?
Customer interviews play an important role in what we’re doing. If you know how to recruit interview partners for your purpose, they are a good and simple way to gain customer knowledge without turning everything upside down right away. It needs to be said, though, that interviews are not all alike: they serve quite different purposes depending on the phase the product is in. The valid (sic!) question we get all the time – people in our culture generally being schooled mainly in quantitative insights – is: “How many interviews do we need until our insight is valid?” The short answer right away: if you do your job well, your insight is valid after one interview. It might just not scale. But read on for more.
Interview != Interview
To really answer the question, it helps to have another look at one of our favorite models: the knowledge funnel by Roger L. Martin. There are three main phases in the life cycle of a product, and they bring with them completely different objectives, hurdles, tasks, ways of working and – possibly – cultures. In the upper part of the funnel, we try to discover new customer problems that are worthwhile to solve. We look for value to create. We are still in very vague territory, working on a hunch. We are looking for a valid problem of a small group and try to understand the problem and its context as well as we can – hence the group needs to be small. Otherwise, the problem understood would be a compromise right away. In the middle part, we try to heuristically approach a solution to the problem. Heuristics are good now, as we don’t know the valid solution yet; we probe and sense our way towards a recipe. In the lower part of the funnel, we have already found an algorithm, a recipe to solve the problem. The better the recipe, the better we scale. Finally, we try to optimize the recipe to scale even better. Normally, reducing our offer means increased scaling: the product loses its vanity, and its simplicity helps it stay in the background. (See also: “Good products lack vanity”.) We are now exploiting our solution to the problem.
To answer the question of “how many interviews …” for different contexts, we take a slow walk down the knowledge funnel. We start at the top with a hunch (or even trying to get a hunch!) and stop at the bottom with exploiting the recipe.
Discovering valid problems and desirability
At the beginning of the conception of a new product, a problem first needs to be discovered that is worth solving. That means two things: it means a lot to someone (validity) and it means a lot to many (scaling, exploitation). The catch is: you need to start with validity and desirability before you scale. Finding out validity and desirability is completely about qualitative insight, not about quantity. You need to be able to see this and to work this way. You create value for some people first, make them happy clients, and then scale it to lots of people. (Of course you can do all that behind the scenes and make a big splash right away (iPhone), but it is harder and riskier, and the quantity and type of work involved remain the same.) The task during the interview is to get to know the customer and his context. By researching what he says and does, and what feelings and thoughts he expresses, we try to find out which problems we can solve for him. When trying to come up with an interview guideline in this phase, we immediately realize that we cannot come up with concrete questions. This is because we do not know enough yet. So we get to know more by exploring the customer’s world with general questions that trigger his storytelling: “When you bought a car for the last time, how …?” or “Can you tell us how the discussion in your family went when you couldn’t decide on …?”
In these interviews we let our empathy guide us and let the interviewee get into telling vivid real-life stories that reveal a lot about his whole context. Three to five interviews are a good session to start getting the drift. By mirroring the interview impressions to each other, we automatically get into pattern recognition mode – humans are good pattern recognition machines. The rough patterns from a few interviews are a well-saturated starting point. After that, we get more formal in deriving needs and insights from what we heard. Normally, we now have a good impression of what’s going on in the area we researched.
Now we have a hunch and a well-derived valid problem, extrapolated from a small sample group or segment. We have a valid problem, needs, and thus desirability for this solution and this group. All these are insights on a qualitative level that help us find the right problem to solve. But the main effect is that we now see the world from the customers’ perspective, which was forced onto us by the empathy work we did in the interviews. This delivers a totally different vanishing point and framing for the whole task, a perspective we could never have gained in the meeting room, left to our own devices. And this is the main value of these first research interviews.
Sometimes, we don’t do the interviews ourselves but let others do them and lead us through their impressions, which does two things: the pattern recognition becomes a little like watching a movie (ambiguity helps to abstract things out), and additionally we are in no danger of passing any solution bias on to the interviewees. (Normally we keep the interviewers in the dark about the solution we have in mind.)
Will Evans (@semanticwill) came up with the following brilliant slide on the correlation between the number of interviewees and insights: no interviews – no insight; many interviews – a decreasing number of new insights.
To repeat: in this phase, the whole thing works with few interviews because it is about discovering (valid) problems and, simply, a forced change of perspective.
Heuristically approaching a solution
In the next phase, we can start to envision ways to solve the problems we discovered earlier and then refine them in iterations (the quicker, the better – a main differentiator from phase 1, where depth matters more than speed). Now, questions to ask in the next interviews come to mind quickly. We will have many hypotheses about the context that drive questions around solutions, details, aspects, alternatives, and so on. This means the next interviews will be different and much more detailed. Again, we don’t need many interviews per session; the interviews can also be much shorter and less general. Prototypes of solutions shown now still need to be very lo-fi, rough and raw. We still need help and honest, open feedback, which doesn’t come easily with glossy, highly polished and detailed prototypes. People are friendly!
When there is success
At the lower end of the knowledge funnel the work again changes completely: we validate existing recipes and optimize them based on feedback. We can now ask very concrete questions, up to task completion (“Try to buy a Mercedes using this flow, please!”). In this area we have a high level of comparability between interviewees and interviewee groups, so we can really evaluate whether the recipe’s quality increases or degrades with the changes we test. We can also recruit interview partners with relatively high precision, as we now have good knowledge of our segments. Finally, we are back in the good old world of quantitative research, numbers and reports. We all know how to work here – the problem is that we normally don’t know how to work in the upper part of the funnel and screw it all up there. Prototypes now need to be hi-fi, to express as much of the context of our recipe as possible. But not every question works now: if you test a flow, the test says nothing about whether the flow works in the context of the whole product. The nice thing about UX tests is that they are concrete, easy to design and even easier to run. So easy, in fact, that you can seamlessly integrate them into your development flow.
Perfecting the recipe
Even easier is A/B testing, which is perfectly valid if you have good continuous deployment in place and the overhead cost is low. It then gives you a chance to test alternatives in high numbers – the dream of any quantitative method and the most reliable and concrete way of validating small changes. (Nothing more, nothing less.)
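To make “validating small changes in high numbers” concrete, here is a minimal sketch of how an A/B test result is typically evaluated: a two-proportion z-test on conversion counts. The function name and the numbers are made up for illustration; in practice you would also fix your sample size and significance threshold before starting the experiment.

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    conv_a/conv_b: number of conversions per variant,
    n_a/n_b: number of visitors per variant.
    Returns (z statistic, two-sided p-value).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (using erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 5.5% vs. A's 5.0%
# on 20,000 visitors each.
z, p = ab_test_z(1000, 20000, 1100, 20000)
```

With these illustrative numbers, a 0.5-percentage-point lift comes out as significant at the 5% level (z ≈ 2.24, p ≈ 0.025) – which shows why A/B testing shines exactly here: small changes, validated only because the visitor counts are high.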
It is important to be conscious about the choice of methods, as each of them serves a different purpose in a different context. Insights derived from a UX test cannot point you to new problems to solve at large or to new product opportunities. We need to clearly distinguish between discovery and validation of a problem on the one end and optimizing the recipe on the other. Friendly feedback and our own solution bias are the worst enemies.
The following chart shows the flow of the interview methods along the knowledge funnel:
A final bummer: UX testing won’t bring you clients if you haven’t already cracked the problem the right way. UX testing only increases the leverage of a good solution. What we observe is that many companies bring in UX testing as a silver bullet towards success when they haven’t yet solved the problem. Wrong place, wrong time: another misunderstanding that leads to UX Utopia.
Title picture: Some rights reserved by robinkristianparker - flickr