
Product Playbook

Solution Validation

It’s crunch time! Put your prototype to the test and find out whether the solution you created to address an opportunity identified in the research phase is receiving positive signals in the market.

In the validation stage you typically want to answer one or more of the following core questions:

  1. Validation of the product UX: Is the product usable, and does it help users with their problem? Does [userflow A] drive more engagement than [userflow B]?
  2. Validation of the product desirability: Is the product highly valuable to users, and are they willing to pay for it? Does [feature X] add value?
  3. Validation of the market demand: Is there enough demand for the product? Can [design/flow/copy A] generate more demand than [design/flow/copy B]?

As discussed in the previous chapter, in some cases the prototype for (1) might not be digital: for eCommerce stores, for example, you may want to collect user feedback on the physical product.

Validation Methods

There are many frameworks and concepts available for validating your solution. The most common ones are:

  1. Usability Test: Get real people to interact with the prototype you’ve built and observe their behavior and reactions to it.
  2. A/B Test: Show two (or more) variants of the same web page to different groups of users at the same time and compare which variant drives more conversions or engagements.
  3. Survey: Run a survey to get quantitative and/or qualitative feedback about your prototype.
  4. Smoke Test / Fake Door: A simple landing page that clearly illustrates your value proposition, with a call to action to determine whether your solution resonates with your customer segment. “Fake door” means that once users indicate interest in your solution, you notify them that it is not available yet, but they can leave their email address to be notified once it is.
  5. Staged Rollout: Your solution is only released to part of your users to collect initial data, which then helps you decide if it is worth rolling it out to all your users.
  6. Concierge MVP: Deliver your product/service to users manually, through real humans, instead of through an actual digital and automated version.
  7. Wizard of Oz MVP: On the front end, you deliver the impression of a completely functional product to the user; however, on the back end of the product, you have to execute all orders manually.
  8. Letter of Intent: Get a written agreement from a B2B client stating their intent to use your product once it is built.
  9. Presale or Crowdfunding: Sell your product/service before it is built or produced.

Which validation method(s) should you choose?

Every experiment and decision rests on an underlying hypothesis, which starts with an observation and some data points you collected in the Research stage. Pick the right experiment by asking the following questions:

What type of hypothesis are you testing?

Pick an experiment based on your major learning objective. Some experiments are better suited to testing user desirability, while others are better for assessing solution feasibility or viability.

How much evidence do you already have?

The less you know, the less time, energy, and money you should invest. When uncertainty is high, look for experiments that give you quick and cheap feedback to point you in the right direction.

Increase the strength of evidence by running multiple experiments for the same hypothesis: don’t make important decisions based on a single experiment or weak evidence.

How much time do you have until the next major decision point?

Consider what your milestones are and by when you hope to achieve them.

Pick the experiment that produces the strongest evidence given your constraints (time and money).


Validation Metrics

How can you measure the success of your experiment?

First, make sure you have defined the hypotheses you want to test and the key metrics that measure them. Then define the target values the experiment must reach for the hypothesis to count as validated.

Here are some good hypothesis phrasings for inspiration:

We believe that [creating this experience / solution] for [persona] will achieve [the outcome].

We will know this to be true if […]

“We believe that, by offering a free test package, more users will subscribe to our product.”

“We believe that, by improving the content on the website, users will better understand us and decide to subscribe to our product.”

“We believe that, by providing the flexibility to cancel the subscription, users will feel more comfortable subscribing to our product.”
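The pattern above (statement, key metric, target value, observed value) can be captured in a few lines of code so that "validated" is a pre-committed criterion rather than a judgment call after the fact. A minimal illustrative sketch in Python; the class and field names are our own, not part of any framework:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hypothesis:
    """A testable hypothesis plus the metric and target that validate it."""
    statement: str                     # "We believe that [solution] for [persona] will achieve [outcome]"
    metric: str                        # key metric that measures the outcome
    target: float                      # threshold agreed on *before* the experiment
    observed: Optional[float] = None   # filled in after the experiment runs

    def validated(self) -> bool:
        # Only count the hypothesis as validated once a measurement exists
        # and it meets or exceeds the pre-defined target.
        return self.observed is not None and self.observed >= self.target

# Hypothetical example: the free-test-package hypothesis above.
h = Hypothesis(
    statement="A free test package will make more users subscribe",
    metric="trial-to-paid conversion rate",
    target=0.10,
)
h.observed = 0.12  # measured after the experiment
```

Writing the target down before running the experiment keeps the team honest: `h.validated()` is then a mechanical check, not a debate.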

For quantitative tests, make sure all event tracking is in place and working before you run them.

In case your metrics do not look great, discuss further options with your team:

  • Consider pivots or modifications → did you come across another relevant user problem worth solving? Did you notice what causes users to behave a certain way? Always listen to your users and iterate.
  • Kill the opportunity completely and investigate something new

Stopping an idea at this stage is not a bad thing: you might have saved months of building an actual solution and selling something users don’t want!

But: make sure you have tried everything you could and ruled out potential execution errors, so that you do not reject the solution hypothesis too early.

List of potential metrics

  • demand / # sign-ups
  • good conversion rates (compare with market benchmarks) at acceptable acquisition cost
  • signed letter(s) of intent
  • the willingness of users to switch from a competitive solution
  • positive indication on the willingness to pay or willingness to use
  • excellent qualitative feedback
  • time to complete a certain activity
  • # times feature X used
  • qualitative feedback old vs. new design
  • A/B test: differences in sign-ups, conversion rates, usage time, etc.
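For an A/B test like the last item above, the standard way to tell whether a difference in sign-ups or conversion rates is real rather than noise is a two-proportion z-test. Here is a minimal sketch using only the Python standard library; the function name and the sample numbers are made up for illustration:

```python
import math
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates of variants A and B (two-sided z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis "A and B convert equally".
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# Variant A: 100/2000 sign-ups (5.0%), variant B: 150/2000 (7.5%).
z, p = two_proportion_z_test(100, 2000, 150, 2000)
# p < 0.05 here, so the difference is unlikely to be random noise.
```

A p-value below your chosen significance level (commonly 0.05) means the observed difference would be unlikely if the variants actually performed the same.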

How do you know if your sample size is large enough?

Use the following websites to calculate the number of data points you need for validating your hypotheses:
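If you prefer not to rely on an online calculator, the standard normal-approximation formula behind most of them is easy to compute yourself. A hedged sketch for a two-proportion test; the baseline and target conversion rates are made-up examples:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base: float, p_target: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Users needed per variant to detect a lift from p_base to p_target."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_power = NormalDist().inv_cdf(power)          # desired statistical power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    n = (z_alpha + z_power) ** 2 * variance / (p_base - p_target) ** 2
    return math.ceil(n)

# Detecting a lift from 5% to 6% conversion takes thousands of users per variant.
n = sample_size_per_variant(0.05, 0.06)
```

Note how quickly the required sample grows as the expected lift shrinks; this is why small absolute improvements are expensive to validate quantitatively.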

Usability Tests

While usability can be validated both quantitatively and qualitatively, interviews are typically the most common and leanest approach:

The key to insightful usability tests is creating structured interview guides in advance. As elaborated earlier in the User Interviews chapter, how you ask questions can make all the difference in what you get out of your interviews.

Here is an example that gives you an impression of how questions could be asked when taking a user through your prototype:

Show screen 1

  • What is your first impression?
  • What do you think this is about?
  • What do you expect to see when you click?
  • What would motivate you to click? If not, why wouldn’t you click?

Show screen 2

  • Same questions as for screen 1.

Click to go to the next page 

  • Is this what you expected? Why or why not? 
  • What would your next step be? 
  • What questions do you have about this product? 
  • What impression do you have about the product being offered?
  • What motivates you the most on this page? Why? 
  • What motivates you the least? 

Click on “Buy now with 50% discount”

  • Is this what you expected to see? 
  • Please comment on the subscription options you see.
  • How do you perceive the price for the product?
  • What would drive your decision to buy? What would you need to know?


Wrap up

  • How do you rate your overall satisfaction with the product?
  • How would you rate the efficiency and ease of use of the product?
  • What did you miss? What would you expect or like to see differently?

You can also ask users to complete a certain task and observe how easily and quickly they can do it without your help. Ask them to think out loud while doing so.

How many data points should you collect?

5–8 interviews per prototype are usually sufficient to detect patterns and derive conclusions and action points.

For quantitative tests, the magic number is about 20.

Further Resources