Also called: Cowboy Villages, The Button to Nowhere
See also: Dry Wallet
Requires existing audience or product
Relevant metrics: Click-through rate, Conversions
How: Advertise the feature you want to implement before starting its development. Instead of building the entire feature, just add a button calling it out. When users click the button, kindly explain that the feature is not ready yet, and consider adding an open-ended questionnaire asking for feedback on its potential usefulness.
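The click handler behind the stub button can be very small. Below is a minimal, hypothetical sketch (the function and message names are illustrative, not from any framework): record the click for your metrics, then show the honest "not ready" message.

```python
import datetime

# The honest message shown after the click, as suggested in the checklist below.
STUB_MESSAGE = ("Thank you for letting us know that you're interested, "
                "but we're not ready yet.")

# In a real experiment you would persist these events to your analytics store.
click_log = []

def on_stub_click(user_id: str) -> str:
    """Record the feature-stub click, then return the 'not ready' message."""
    click_log.append({
        "user": user_id,
        "event": "feature_stub_click",
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    # This is also the natural place to trigger an open-ended micro-survey.
    return STUB_MESSAGE
```

The logged events give you the numerator for the click-through rate; impressions of the button give you the denominator.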
Why: Feature stubs allow you to test the demand for a feature without building anything at all. In addition to learning about the call-out's click rate, feature stubs provide great opportunities for collecting further qualitative data through micro-survey follow-ups.
This experiment is part of the Validation Patterns printed card deck
A collection of 60 product experiments that will validate your idea in a matter of days, not months. They are regularly used by product builders at companies like Google, Facebook, Dropbox, and Amazon. Get your deck!
Before the experiment
The first thing to do when planning any kind of test or experiment is to figure out what you want to test. To make critical assumptions explicit, fill out an experiment sheet as you prepare your test. We created a sample sheet to get you started. Download the Experiment Sheet.
Be nice and keep the experiment limited
Depending on your experiment design, there is a chance some users will feel “scammed” when they find out your fictitious feature was a hoax. Some might never come back. This is why it is vital to consider how many of these early users you are willing to risk: what is an acceptable loss?
It is therefore generally a good idea to start by testing on only a small proportion of your target audience. Your long-term success should not depend on a single experiment.
Especially for large and well-established brands, this experiment comes with the risk of media exposure. As participants get involved in a faked feature test, established brands have seen users subsequently post about it in the media. Depending on your experiment, this exposure can backfire or reveal important strategic directions (or misdirections). The experiment also carries the danger that users will expect the stubbed feature to be part of your actual, current product offering.
Run through this checklist before you run your Feature Stub experiment:
- Let users know there is no product. Make sure your experience ends in an explanation of the truth. It could be a message saying: “Thank you for letting us know that you’re interested, but we’re not ready yet”.
- Limit exposure. Consider running the test only on a subsegment of your entire audience. Popular solutions for this are:
- Limiting exposure via A/B testing tools.
- Limiting exposure to beta-testers who have given their explicit and informed consent.
- Limiting exposure server-side, sending a fixed percentage of the traffic to the server your experiment is running on.
- Collect email leads. Not only is this a cheap source of lead generation; it also provides a great opportunity to later reach out and learn more from the users who were actually interested in the feature you are contemplating.
- Define metrics that matter. Common metrics for Feature Stub experiments are click-through rates (CTRs), reactions from a set percentage of registered users, or the number of new sign-ups.
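The server-side option above (sending a fixed percentage of traffic to the experiment) is often done with deterministic bucketing, so the same user consistently sees, or never sees, the stub. A minimal sketch, assuming a stable user identifier; the salt and threshold are illustrative:

```python
import hashlib

def in_experiment(user_id: str, percent: int, salt: str = "feature-stub-v1") -> bool:
    """Deterministically assign user_id to a 0-99 bucket and compare it
    against the exposure percentage. Hashing user_id with a salt keeps the
    assignment stable across requests and independent between experiments."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Example: route roughly 5% of users to the server running the stub.
exposed = [u for u in ("alice", "bob", "carol") if in_experiment(u, 5)]
```

Changing the salt reshuffles the buckets, which lets you run a follow-up experiment on a fresh sample of users.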
After the experiment
To make sure you move forward, it is a good idea to systematically record the insights you gained and the actions or decisions that follow. We created a sample Learning Sheet that will help you capture insights as you turn your product ideas into successes. Download the Learning Sheet.
- UX for Lean Startups by Laura Klein
- Fake-door testing: an 'evil' but efficient testing method by Tomi Mester
- 7 tactics lean startups need to build great products by Braden Kowitz