
The Question Looping Process: how to build great products by asking the right questions

If you want to build a great product, project, or service, then the “Meta Question” – “What don’t I know that I must know?” – should be your obsession. The answer to the Meta Question is itself a question (the “Product Question”). The Product Question might be: “Why do our best users return to our product again and again?” And to answer this question, you ask the Tool Question (“What tools in the tool belt will most effectively and efficiently answer this Product Question?”). These three questions will help you build something amazing that people truly want.

You can create your first “data loop” by choosing a tool from your tool belt (say, conducting a series of interviews with your most loyal customers) to answer your Product Question. Once you’ve absorbed most of what that tool can teach you about the question, you’ll start hearing the same answers again and again. Data loops are ephemeral; once one diminishes in value, it has served its purpose.

The next step is to ask yourself again: “What don’t I know that I must know?” The answer yields your next Product Question, which leads to your next tool and another data loop (say, reading many articles by experts). Once that loop stops yielding new insights, you can construct other data loops using other tools (e.g., conducting a survey, watching people use your product, getting criticism from user interface experts, studying the statistics of user behavior, attempting to sell your product at different prices). For each Product Question, consider the many tools in the tool belt (e.g., user interviews, expert feedback, surveys, studies, A/B tests, analysis of user behavior) and pick the one best suited to answering it. If you practice this, you’ll become a master of the “Question Looping Process.”

The Question Looping Process:

Step 1: Ask the Meta Question: “What’s the most important thing that we need to know that we don’t know?” 

Step 2: Answer the Meta Question. The answer is the Product Question (e.g., “Why are users dropping out during our onboarding process?” or “What triggers or motivators bring our power users back?”).

Step 3: Ask the Tool Question: “Which tools in the tool belt will most effectively and efficiently help us answer the Product Question?” For instance, it could be a survey of existing users, interviews with existing users, reading articles by practitioners, studying competitor products, analyzing the behavior of existing users, etc.

Step 4: Use the selected tools to answer the Product Question.

Step 5: If the Product Question is now answered, make product improvements and collect data to confirm that the question’s answer was correct (for instance, by running A/B tests on the new improvements or by doing further interviews after the improvements are made). If the Product Question is not answered to a sufficient degree, return to Step 3 and re-ask the Tool Question. If you’re no longer confident that the Product Question is really the right question to be asking, return to Step 1 and re-ask the Meta Question. And once you’ve succeeded at making product improvements, you should also return to Step 1 and begin the loop again.
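To make the loop concrete, here is a minimal Python sketch of the process, written as a control loop. Everything in it is illustrative: the functions passed in are placeholders for human judgment and research work, not a real API.

```python
# A minimal sketch of the Question Looping Process as a control loop.
# All callables and the `findings` object are illustrative placeholders.

def question_looping_process(ask_meta_question, choose_tools,
                             run_data_loop, ship_and_verify):
    while True:
        # Steps 1-2: ask the Meta Question; its answer is the Product Question.
        product_question = ask_meta_question()  # "What don't we know that we must know?"

        while True:
            # Step 3: the Tool Question -- pick the tools best suited to this question.
            tools = choose_tools(product_question)

            # Step 4: run the data loop until it stops teaching you anything new.
            findings = run_data_loop(tools, product_question)

            # Step 5: branch on what you learned.
            if findings.sufficient:
                # Make product improvements, then confirm the answer was right
                # (e.g., with A/B tests or follow-up interviews).
                ship_and_verify(findings)
                break  # success: return to Step 1
            if not findings.question_still_right:
                break  # the question itself was wrong: return to Step 1
            # Otherwise: loop back to Step 3 and re-ask the Tool Question.
```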


The Tools in the Tool Belt

Okay, but what are the tools in the tool belt? If you want to improve the quality of a product, it’s extremely valuable to be aware of all the tools at your disposal and to choose the right tool to answer each question you have. If you focus too much on just one or two tools, you’ll miss out on important insights. For each question that you are trying to answer about the product or content, you should think about which tool in the tool belt is best for the job, just like a mechanic must think about whether a hammer or wrench is more appropriate for what they are trying to get done.

Important note: a really common issue is that people start with the tools they are familiar with or have easily available rather than with the questions they want to answer. They’ll think, for example, “We have customer data available, so let me analyze that to figure out what to do with our product.” The problem is that this often leads to either (i) focusing on whatever question you happen to be able to answer easily with the tool you chose, rather than the question you should be trying to answer, or (ii) trying to use the wrong tool to answer the right question. It’s much better to start with the important questions you have (e.g., using the Question Looping Process above) and THEN consider what tools you have at your disposal to answer them, choosing the best tool or tools for the most important questions you actually have. If you have, for example, a bunch of data, there’s nothing wrong with spending some time poring over it; maybe you’ll see something interesting. However, this undirected exploration is not a substitute for the Question Looping Process.

With all of that said, these are the tools in the content developer’s tool belt. Learn to master as many as you can in order to have the greatest chance of making an amazing product!

1. Intuition/Experience

i. Leverage your own intuition and experience when looking at the product (e.g., if the problem is that people are dropping out of the onboarding process, maybe the reason will jump out at you when you look closely at the onboarding flow).

ii. Adopt the user’s perspective! Really try to simulate the mind of the user while seeing the product or content with fresh eyes, taking it in as though you’ve never seen it before. Ask, on each page: “What would a real user be thinking? What would they be confused by? What would they notice first?” Your success at this depends on your ability to (a) see the content as though you’ve never seen it before and (b) really model and simulate the mind of a user.

iii. Seek team feedback. It’s usually best if one to three people give feedback at a time, using an agreed-upon format.

a. Consensus may be enough to move forward.

b. If you do not reach a consensus, you may want to consider:

1) Who has the most experience giving feedback on this topic?

2) Who has the best design eye?

3) Who can best simulate the user’s mind?

4) Who has a track record of being right about this sort of question?

iv. Seek beta-tester feedback. Beta testers are real users (or people similar to real users) who are also motivated to help you improve; they tend to give more detailed and thorough feedback than most real users would. Building a beta-tester list and seeking their feedback can be really powerful.

2. Interviews

i. Select people from the target user group who have never seen the product, then interview them about their experiences (e.g., ask them how they currently solve the problems that the product is designed to solve, ask them what would make them hesitant to try the product, etc.).

ii. Watch people from the target user group who have never seen the product use it for the first time (e.g., watch them interact with the product and have them vocalize all of their thoughts and reactions as they go through the onboarding process).

iii. Interview new users who have used the product for a short amount of time but who are not yet power users.

iv. Interview former users who used the product for a significant amount of time before ceasing to use it (e.g., ask them about why they stopped using it, what the product lacked, in what ways their goals or needs weren’t met, which features might make them want to use it again, etc.).

v. Interview power users who are getting the most value from the product and really love it (e.g., to discover what brings them back to the product and how it could be made even more valuable for them).

vi. Interview stakeholders (e.g., the people who buy the product for others, such as HR departments, or others involved in distributing the product who don’t use it directly).

3. Surveys (qualitative surveys, quantitative surveys, and mixed qualitative + quantitative surveys)

i. Survey people who have never used the product. 

ii. Survey real users within the onboarding process.

iii. Survey real users while they are in the middle of using the product.

iv. Survey real users who have used the product for a while.

v. Survey former users (who stopped using the product).

vi. Survey stakeholders (e.g., those who purchase on behalf of others).

4. Objective data and usage statistics

i. Analyze usage data from new users (e.g., how do new users interact with the product?).

ii. Analyze usage data from former users (e.g., how do people who stop using the product behave?).

iii. Analyze usage data from power users (e.g., how is the product used by the people who love it and use it the most?).

iv. Use “alarm system” metrics (i.e., metrics designed to change only when something is very wrong with the product; see the sketch after this list).

v. Use “success metrics” (i.e., the key metrics about the product that you’re aiming to improve, such as growth or retention metrics).
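To make “alarm system” metrics concrete, here is a minimal sketch, assuming you already have some way to query current metric values. The fetch_metric callable and the specific metrics and thresholds below are all illustrative.

```python
# Illustrative alarm thresholds; a real product would tune these carefully.
ALARM_THRESHOLDS = {
    "onboarding_completion_rate": 0.40,  # alarm if it falls below 40%
    "crash_free_session_rate": 0.99,     # alarm if it falls below 99%
}

def check_alarms(fetch_metric):
    """Return a (metric, value, threshold) tuple for every tripped alarm."""
    tripped = []
    for name, floor in ALARM_THRESHOLDS.items():
        value = fetch_metric(name)  # e.g., today's value from your analytics store
        if value < floor:
            tripped.append((name, value, floor))
    return tripped
```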

5. Reading

i. Learn what academic papers say about how to achieve X.

ii. Learn what articles by experienced practitioners or experts say about this topic.

6. Studies

i. Run a paid usage study (pay people to use the product for a certain amount of time; collect feedback throughout the process and/or include a feedback survey or interview at the end).

ii. Run a paid study on one small part of the product (e.g., have users interact with just a part of the product or a new user interface, then collect feedback).

iii. Run an A/B test to see which versions of a feature or page lead to better outcome metrics (see the analysis sketch after this list).

iv. Run randomized controlled trials (randomize some people to use your product (or to use one version of your product) and randomize others to a control group, then collect follow-up surveys or metrics after some interval).
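For items iii and iv, the analysis step often boils down to comparing a conversion rate between two randomized groups. Here is a minimal sketch using a standard two-proportion z-test; the counts are made-up examples, and scipy is assumed to be installed.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))                  # two-sided p-value
    return z, p_value

# Hypothetical counts: 120/1000 conversions on version A vs. 150/1000 on B.
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # smaller p-values favor a real difference
```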

7. Expert feedback (academic, practitioner, or paid consultant)

i. Collect feedback from experts looking at the product directly (or observe their reactions to the product).

ii. Ask experts to perform a pre-mortem (e.g., “Suppose this doesn’t work. What’s your prediction as to why?”).

iii. Ask experts “Would you do…?” questions (e.g., “If your goal was to take what we have now and make it better at achieving X, what would you do?”).

8. Behavior change frameworks

i. Use the Ten Conditions for Change (or one of the 17 frameworks summarized at the bottom of the Ten Conditions for Change website).

9. Market research

i. How does successful product Y achieve X? What are the successful products (e.g., products that are growing more quickly than others) doing well that we should consider emulating? What seems important to users that existing products are doing a poor job of addressing, and could that present an opportunity for us?

ii. What do existing trends, changes, and market forces tell us (e.g., what’s the adoption curve on new technologies)?

iii. What do societal or publicly available statistics tell us about people’s purchasing behaviors, other behaviors, and beliefs?

10. Quality Assurance Testing

i. Ask team members to simulate being a real user.

ii. Standardize Quality Assurance (QA) test procedures (create checklists).

iii. Run bug bounty programs.

iv. Give current users an easy way to report issues.

v. Automate checks (e.g., Grammarly for spelling and grammar; see the sketch after this list).

vi. Run copy editing passes.
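As an illustration of item v, automated checks can be as simple as a script that scans content for known problem patterns. The rules below are illustrative; a Grammarly-style grammar check would require an external tool.

```python
import re

# Illustrative problem patterns; a real pipeline would grow this list over time.
CHECKS = [
    (re.compile(r"  +"), "double space"),
    (re.compile(r"\b(TODO|FIXME)\b"), "leftover TODO/FIXME marker"),
    (re.compile(r"lorem ipsum", re.IGNORECASE), "placeholder text"),
]

def lint_content(text):
    """Return (line_number, issue) pairs found in a piece of content."""
    issues = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, description in CHECKS:
            if pattern.search(line):
                issues.append((lineno, description))
    return issues
```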

At least some of these steps should be built into a standardized process when developing content. For instance, you may want to have every piece of content checked over by an expert at some point in the process, and you may want to have a standard quality assurance testing process.


Here’s an example standardized process that might be implemented for producing high-quality content:

[Figure: an example standardized content-production process]


This piece was first written on November 23, 2019, and first appeared on this site on August 26, 2022.


