Idea Validation: Problem, Market demand

Fake door testing

Pretend to provide a product or feature without actually developing it


How: Instead of setting up expensive custom integrations and partnerships, fake it! Build only what is absolutely necessary to advertise your product to real users while faking the rest.

Why: This is a quick and easy way to validate interest in a feature without actually building it, by implementing just enough for it to seem real.

What is a Fake Door Test?

A Fake Door test (also known as a painted door test, 404 test, or landing page test) is a product experiment where you pretend to offer a feature or product that doesn’t actually exist yet. In practice, it means creating a door that looks real to the user - for example, a button, link, or landing page for a new feature - and tracking how many people walk through it (click or sign up). Since nothing is built behind the scenes, users who go through the “door” are typically shown a coming soon message or signup prompt instead of a functional feature. This technique is a form of pretotyping (pre-prototyping) - it lets you validate an idea through genuine user interest before investing in development.

In essence, a fake door test allows you to gauge demand based on real user behavior. By measuring how many users click an enticing entry-point for a feature that isn’t actually there, you collect evidence of interest (or lack thereof) without building the full product. It’s a quick and cheap way to answer the question: “Do people actually want this?” - using behavior instead of opinions. As one guide puts it, people are bad at predicting what they want, but great at reacting to new offers, so “building just the entry to an offer” lets you predict future reactions based on actual behavior.

Why Use a Fake Door Experiment?

Fake door testing is all about validation and speed. It serves an important purpose in product discovery and user research: to validate demand for a new idea or feature before you build it.

Key reasons product teams use fake door tests include:

  • Measure Real Interest: It collects tangible, unbiased demand data based on what users do rather than what they say. For example, clicking a “Try Feature X” button is a stronger signal than verbally saying “I’d use Feature X.” This behavioral data is more reliable than survey responses, which often carry bias or hypothetical answers. In short, fake doors yield actual behavior metrics (click-through rates, sign-ups) instead of opinions.
  • Fast and Cost-Effective: A fake door experiment can be set up very quickly and cheaply. You don’t need to build the actual feature - just the façade (a UI element and maybe a simple landing page). Because it requires minimal engineering, you can run the test in days and get results in as little as a week. This makes it possible to test many ideas rapidly. It’s far cheaper and faster to discover now that hardly anyone clicks your feature idea than to spend months building it and have it flop.
  • Early Product Validation: It allows teams to validate product concepts early in the development cycle. By running a fake door test, you ensure you’re basing strategic product decisions on evidence of demand. If the test indicates strong interest, you have justification to proceed. If not, you’ve avoided wasting time on a feature people don’t want. In lean startup terms, it helps you “build the right it, before you build it right”.
  • Broad Applicability: Fake door tests can be used in various scenarios - from testing if a completely new product concept sparks interest, to gauging if an additional feature is welcome in an existing product. For a new product idea, you might create a promotional landing page and see if visitors sign up or click “Purchase.” For an existing app, you might add a new menu item to see if users try to use a feature that isn’t actually there. In both cases, the goal is to validate user interest before committing resources.

Takeaway: Fake door testing lets you base decisions on user behavior. It’s a method to test the waters for a product idea using a mock offer, and it can save you from building the wrong things. As a market research strategy, it’s considered a type of “smoke test” - a quick experiment to see if there’s “fire” (real demand) behind the “smoke” of an idea.

How to run a Fake Door Test (Step-by-Step)

Running a fake door experiment involves careful planning and execution to ensure the data you collect is meaningful. Follow these steps to design and run a successful fake door test:

1. Define your hypothesis and success criteria

Start with a clear, testable hypothesis about your idea. What key assumption are you testing? Frame it in a way that can be proven or disproven. For example: “We believe that adding Feature X will interest at least 5% of our users enough to click its button.” Writing down your hypothesis forces you to clarify what you expect to happen. Include a success metric - e.g. “at least 5% click-through rate (CTR)” or “100 sign-ups in two weeks”. This sets a threshold for how you’ll interpret results. (Avoid vague or tautological hypotheses like “Customers will love this feature” - if it’s not specific or if it’s already true by definition, it’s not a valid hypothesis.)

A good hypothesis might look like: “If this feature idea is valuable, at least 5% of targeted users will click the fake feature prompt, and 20% of those who click will sign up for updates.”
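To make the hypothesis concrete before the test starts, it can help to capture it as a small, explicit record next to your experiment notes. The following is a minimal sketch only - the type name, fields, and numbers are illustrative, not part of any particular tool:

```typescript
// Hypothetical structure for recording a fake door hypothesis and its success criteria.
// Field names and thresholds are illustrative; adapt them to your own experiment log.
interface FakeDoorHypothesis {
  statement: string;      // the testable belief, written out in full
  metric: "ctr" | "signup_rate";
  threshold: number;      // minimum value that counts as success (0.05 = 5%)
  minImpressions: number; // don't judge the result before this many views
}

const featureXHypothesis: FakeDoorHypothesis = {
  statement:
    "If Feature X is valuable, at least 5% of targeted users will click the fake prompt, " +
    "and 20% of those who click will sign up for updates.",
  metric: "ctr",
  threshold: 0.05,
  minImpressions: 1000,
};
```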

2. Choose your fake door implementation (internal vs. external)

There are two common ways to present a fake feature offer:

  • Internal fake door (in-product): If you have an existing product or user base, integrate the test within your product’s UI. For example, add a new button, menu item, or banner advertising the feature. Make it look as if the feature is available. When clicked, it might lead to a “Coming Soon” page or a popup saying “Thanks for your interest - this feature is not ready yet.” This approach targets current users and asks, essentially, “Would you use this new feature?”. It’s great for testing additions to an existing product (e.g. a new mode in your app, a new settings option, etc.).
  • External fake door (landing page + ad): If you want to test a new product idea or attract potential users, you can create a standalone landing page or website for the fake product/feature. Drive traffic to it through means like Google or Facebook ads, email campaigns, or blog posts. The landing page would describe the product/feature and include a call-to-action (e.g. “Sign up for early access” or “Buy now”). Users who take action can be shown a message like, “Thanks for your interest! This product is coming soon.” This approach is essentially a landing page smoke test - it gauges interest in a concept among a target audience that may not know your brand. It helps answer if a particular market segment is interested enough to click or sign up for your offering.

Be realistic in presentation: However you implement the fake door, present it as naturally as possible within your product or marketing so that users would react to it genuinely. The fake feature entry point should resemble how a real feature would be advertised or placed. Don’t oversell it with giant flashing banners (unless that’s actually how you’d promote it) - you want honest behavior.

Also ensure it’s clear enough to notice; if it’s too hidden, you might get false negatives (users don’t click simply because they never saw it). In an internal test, pay attention to the element’s placement - button placement or wording can strongly influence interaction rates. In an external test, craft the landing page and ad copy carefully - the wording of headlines and the value proposition will affect whether people click through or sign up. (Tip: Consider A/B testing different copy or designs to see which draws more interest.)
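If you do A/B test different wording or placement as suggested above, you need a stable way to split users across variants so each person consistently sees the same copy. A generic sketch of deterministic bucketing follows - the variant labels are made up, and nothing here assumes a specific experimentation tool:

```typescript
// Deterministically assign a user to one of several copy variants for the fake door.
// Hypothetical variant labels; the hashing approach is a generic bucketing technique.
const VARIANTS = ["Coming soon", "Try the beta", "Get 50% off early access"];

function hashString(s: string): number {
  // Simple 32-bit FNV-1a hash; good enough for even-ish bucketing, not for cryptography.
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

function assignVariant(userId: string): string {
  // The same user always sees the same wording, so results aren't muddied mid-test.
  return VARIANTS[hashString(userId) % VARIANTS.length];
}
```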

3. Build the minimum viable “fake” experience

Now, implement the fake door in a way that captures user interactions. This typically involves:

  • Creating the trigger: the UI element or ad that users can interact with (e.g., a button labeled “Try the Beta”, a link in a menu, or an online ad). Make sure it looks credible and consistent with your product’s style.
  • Setting up the follow-through page or message: what the user sees after clicking. This could be a dedicated landing page or simple modal. Common approaches include a polite “Coming Soon - this feature is under development” note or a prompt that asks for the user’s email to notify them when available. For example, you might say: “Thanks for your interest! We’re still working on this feature. Enter your email to get notified when it launches.” This not only lets the user down gently, but also gives you a secondary metric (how many bothered to leave an email) and even a list of beta-interested users.
  • Instrumentation for measurement: It’s critical to bake in analytics so you can track results. If it’s on a website or app, instrument the fake button with an analytics event or use an A/B testing tool. For landing pages, use analytics or even simple counts (number of clicks, form submissions). If you run ads, the ad platform will give you data on impressions and clicks. The goal is to capture how many people saw the fake door and how many opened it. Set up whatever funnel tracking is needed (e.g., measure impressions, clicks, and perhaps conversions like email submits) - a minimal sketch of this wiring appears at the end of this step.

You do not need to build any actual functionality behind the scenes - just these minimal pieces. Keep the effort low. For instance, if testing a fake feature in-app, you don’t need to build any backend logic for the feature itself, only the front-end hook and a dummy page. If testing via landing page, you don’t need a full website or product, just a one-pager and perhaps a simple form.

The aim is to learn, not launch. Build only what is absolutely necessary to advertise your product to real users while faking the rest. This way you fake it without expensive integrations.
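As a concrete illustration of the front-end hook and dummy follow-through described in this step, here is a minimal sketch of a fake door click handler. The `track` function and the element id are assumptions standing in for whatever analytics call and markup your product already has; this is not a prescribed implementation:

```typescript
// Minimal fake door handler: no feature logic behind it, just tracking + a coming-soon message.
// `track` is a placeholder for your existing analytics call; adjust names to your setup.
declare function track(event: string, props?: Record<string, unknown>): void;

function onFakeDoorClick(featureName: string): void {
  track("fake_door_clicked", { feature: featureName });

  // Show the dead-end gently and offer a follow-up action that doubles as a second metric.
  const wantsUpdates = window.confirm(
    "Thanks for your interest! We're still working on this feature.\n" +
      "Click OK to get notified when it launches."
  );

  if (wantsUpdates) {
    const email = window.prompt("Where should we send the launch note?");
    if (email) {
      track("fake_door_signup", { feature: featureName, email });
    }
  }
}

// Wiring it up to the fake entry point (hypothetical element id):
document
  .getElementById("try-feature-x")
  ?.addEventListener("click", () => onFakeDoorClick("feature-x"));
```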

4. Run the experiment and collect data

Go live with your fake door and let it run for long enough to gather a meaningful sample. If it’s an in-product test, deploy the new button/link in an update or via feature flag and let your user base encounter it naturally. If it’s an external test, start your ad campaigns or share the landing page link through your channels. Be patient and avoid jumping to conclusions too early. You’ll want to accumulate sufficient impressions and clicks to observe a reliable pattern. For example, the Real Startup Book suggests waiting until you have on the order of 1,000+ views or 100+ clicks (or whatever number makes sense for your traffic levels) before analyzing results. During this period, monitor the data:

  • What is the view-to-click conversion rate (CTR)? (If 10,000 users saw the button and 500 clicked, that’s a 5% CTR.)
  • If you have a follow-up action (like an email signup on the “coming soon” page), what percentage of clickers take that next step?
  • How do different variations perform if you tested multiple messages or designs?
  • Keep notes of any qualitative feedback too. Occasionally, users might reach out (“I clicked that feature and nothing happened!”), which can provide anecdotal insights or at least a measure of their enthusiasm/confusion.

Ensure the experiment runs long enough to account for typical usage patterns. For an internal test, that might mean a full week or two to cover both weekday and weekend users. For ads, it might mean until you’ve spent a predetermined budget or hit a certain number of impressions.
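The numbers you monitor during this phase reduce to a couple of ratios over the event counts you instrumented in step 3. A small sketch of that arithmetic, with made-up counts:

```typescript
// Funnel math for a running fake door test; the counts here are illustrative.
interface FakeDoorCounts {
  impressions: number; // users who saw the fake entry point
  clicks: number;      // users who clicked it
  signups: number;     // users who left an email on the "coming soon" page
}

function summarize(counts: FakeDoorCounts) {
  const ctr = counts.clicks / counts.impressions;                 // view-to-click rate
  const signupRate = counts.signups / Math.max(counts.clicks, 1); // post-click conversion
  return { ctr, signupRate };
}

const soFar: FakeDoorCounts = { impressions: 10_000, clicks: 500, signups: 120 };
console.log(summarize(soFar)); // { ctr: 0.05, signupRate: 0.24 }
```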

5. Analyze the results and interpret them against your hypothesis

Once you’ve gathered enough data, it’s time to decide what it means. Compare the outcomes to the success criteria you set in Step 1. For example, was the click-through rate above or below your threshold? If you expected 5% and got 8%, that’s a strong positive signal. If you got 0.5%, that’s a negative signal.

When interpreting results, consider:

  • Conversion rates: A higher CTR or signup rate indicates stronger interest. For instance, if thousands saw the offer but only a handful clicked, it likely means the feature didn’t resonate (or the presentation was unclear). On the other hand, a high percentage of clicks means users are actively showing interest in that promise.
  • Drop-off after click: If a lot of people click the fake door but don’t take any further action (e.g. they reach the landing page but don’t enter an email), that might indicate mild curiosity but not enough genuine desire to commit. It could also signal disappointment - perhaps the messaging after the click wasn’t handled well. A small drop-off (many clickers do leave contact info) indicates high intent: people are not only curious but really want the feature enough to stay engaged despite it not existing yet.
  • Segment differences: If you ran multiple versions (different copy, or different audiences via ads), compare which variant had the best response. You may learn which value proposition or which user segment is most interested.
  • Qualitative impressions: Did you get any feedback or see any chatter (on social media, support emails, etc.)? Sometimes users will express excitement or frustration, which can contextualize the numbers.

Depending on the results, you have a few options on what to do next:

1) If the test met or exceeded your success criteria

Great news - this is a signal your idea has real potential. Users have basically said “yes, I want this!” with their clicks. In this case, you’ll likely decide to move forward and build the feature/product (or at least a prototype of it). In other words, proceed with development. You’ve gained confidence that there is demand to justify the effort.

2) If the overall response was underwhelming

There are two sub-cases here:

  • It could be that the idea might still have merit, but your execution of the test wasn’t quite right. Perhaps the messaging or placement was off. Users might have failed to understand the offer, or maybe they didn’t notice the button. In this scenario, you might pivot the experiment and try again. For example, try a different copy on the button (“Get 50% off for early access” instead of “Coming soon”), or place it more prominently, or target a different audience with your ads. Tweaking the presentation can sometimes yield a very different outcome. It’s often worth iterating once or twice on a negative result to ensure it’s truly the concept that’s flawed and not just the way it was advertised.
  • If you’ve given the test a fair shot (or multiple tries) and still see lackluster interest well below your threshold, then the outcome is actually positive in a different sense: you just invalidated a bad idea and saved your team a lot of time and money. In this case, you likely decide not to pursue building that feature/product. As the Real Startup Book notes, “No success! Congratulations. You prevented yourself from wasting scarce resources.” In other words, a failed fake door test can be a success because it steered you away from a feature that users don’t actually want.

Document your findings. It’s good practice to record the hypothesis, the data (CTR, etc.), and your decision (go/no-go) in an experiment log or learning sheet. This ensures the team shares the learning and has a paper trail of why you decided to proceed or not. (Learning Loop provides an “Experiment Sheet” and “Learning Sheet” for this purpose, helping you capture the hypothesis, results, insights, and next steps from every experiment.)

Examples of Fake Door Tests in action

Fake door experiments have been used by many companies, from scrappy startups to major brands. Here are a few real examples and case studies that illustrate how fake door testing works:

  • Polyvore’s “Outfit Sales” Feature: Online fashion company Polyvore wanted to see if users would be interested in buying entire outfits (not just individual clothing items) and if offering bigger discounts on bundled outfits would increase sales. Instead of building a full outfit-shopping feature with brand partnerships, they tested the concept with a fake door. They faked the existence of a new “outfit sales” feature - likely by showcasing an outfit for sale - and when users showed interest, the team manually handled the orders behind the scenes (they impersonated the supplier and personally took care of payment and shipping). This experiment validated both whether people were interested in shopping for outfits and how pricing discounts might affect behavior. By faking the brand integration, Polyvore got to learn about user demand without building any complex e-commerce infrastructure upfront.
  • Tesla’s Pre-Order Deposit for the Roadster: When Tesla was preparing to launch its first car (the original Roadster), they used a form of fake door test to gauge demand before production started. They announced the car and asked interested customers to put down a $5,000 deposit to secure a build date - essentially a pre-order for a product not yet built. This is a high-commitment fake door (sometimes called a Dry Wallet or pre-sales test) where the user’s click isn’t just “I’m interested” but “I’m willing to put money on it.” The high number of pre-orders validated that there was serious demand and willingness to pay for the vehicle, long before any cars rolled off the line. (The traditional approach would have been to build the car first and hope people buy, but Tesla’s experiment proved demand first.)
  • Zynga’s New Game Pitches: Zynga, the game studio behind hits like FarmVille, is known for testing new game ideas with fake door methods. One technique they reportedly use is to come up with a short pitch (a few-word description) of a potential new game and display it as an in-game advertisement or link in their existing games. If users click the promo, it indicates interest in that game concept, even though the game doesn’t exist yet. Zynga would run these fake promotions for a limited time to see which concepts get a lot of clicks. This helps them decide which idea to actually develop into a real game. By measuring click-through rates on a “dummy” game pitch, they gather data on player preferences at virtually no cost.

(These are just a few examples - many companies have done similar tests. Another example: a Dutch startup, Tippiqlabs, created a separate site where every new product idea gets its own landing page and ad campaign to see if people would sign up. After a visitor clicks and expresses interest, they’re told the product is still in development and asked for an email to stay updated. This systematic use of fake door tests for every idea became part of their product development process.)

Measuring success: interpreting conversion rates and drop-offs

When evaluating a fake door test, two key metrics to look at are conversion rates and drop-offs:

  • View-to-Click Conversion Rate (CTR): This is the percentage of users who saw the fake door and actually clicked it. It’s a direct measure of initial interest. For example, if 5 out of 100 visitors clicked the “Learn More” button for your fake feature, the CTR is 5%. A higher CTR means more interest. What counts as “good” depends on context, but you’ll have defined a target in your hypothesis. Even a few percent can be significant if it’s a high-commitment click. Compare the CTR against your expected threshold. If it exceeds it, that’s a positive signal; if it’s far below, that’s a red flag. Also compare CTR between any variants you tested (different wording or audiences) to see which was most compelling.
  • Post-Click Drop-off: This looks at what happens after the user clicks the fake door. Since after the click they encounter a dead-end (of sorts), some drop-off is natural - not everyone will, say, submit their email on a “coming soon” page. However, analyze how steep this drop-off is:
      • If almost no one who clicks proceeds to the next step (e.g., 500 clicks but only 5 emails captured), it might indicate that users clicked out of curiosity but were then disappointed or deterred (perhaps by the revelation that the feature isn’t available). This could highlight an issue: maybe the messaging after click needs to be friendlier or offer some incentive (“Sign up to get an early discount when we launch!”) to engage users. Or it could mean the interest was shallow - people would click a button, but not enough to actually sign up for a waitlist.
      • If many clickers do complete a follow-up action (like joining a waitlist or filling a survey), that indicates strong interest and perhaps even mild frustration that the feature isn’t there yet (which can be a good thing if you plan to build it!). Those users are essentially saying “Yes, I want this - please hurry up.” This scenario validates both interest and a willingness to engage further.
      • If your fake door didn’t include any additional call-to-action beyond the initial click, you might gauge drop-off in terms of time on page or exit rate. For instance, if users click and then spend a few seconds on the “coming soon” page before leaving, that’s expected; if they bounce immediately, it might mean they felt tricked. Including a gentle explanation or an apology on the page can mitigate negative reaction (“Oops, we’re not ready yet, but thanks for your interest!”).

By looking at both CTR and drop-off (and any other funnel steps), you get a fuller picture of the user’s journey through your fake door funnel. This helps you interpret why the experiment succeeded or failed. Always cross-check these numbers with your predetermined success criteria.
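One way to judge whether an observed CTR genuinely clears your threshold, rather than clearing it by chance at a small sample size, is to put a standard confidence interval around the click rate. This is a generic statistical sketch, not something the fake door method itself prescribes:

```typescript
// 95% Wilson score interval for a proportion (e.g. CTR); a generic statistics helper.
function wilsonInterval(successes: number, trials: number, z = 1.96) {
  const p = successes / trials;
  const denom = 1 + (z * z) / trials;
  const center = (p + (z * z) / (2 * trials)) / denom;
  const margin =
    (z * Math.sqrt((p * (1 - p)) / trials + (z * z) / (4 * trials * trials))) / denom;
  return { low: center - margin, high: center + margin };
}

// Example: 500 clicks out of 10,000 impressions against a 5% success threshold.
const { low, high } = wilsonInterval(500, 10_000);
const threshold = 0.05;
if (low > threshold) {
  console.log("Confidently above threshold");
} else if (high < threshold) {
  console.log("Confidently below threshold");
} else {
  console.log("Inconclusive - gather more data or iterate on the test");
}
```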

Additionally, keep in mind any external factors that might skew results. For example, if an internal fake feature button was placed in a less-trafficked area of your app, a low CTR might not mean the idea is bad, just that not enough people saw it - which is why testing placement is important. Or if an external test’s ad ran to a very broad audience, a low CTR might mean the message didn’t reach the right people - segmenting the audience could help. These nuances underscore why sometimes a second iteration of the test is worthwhile if the first result is inconclusive.

In summary, interpreting a fake door test isn’t just looking at one number - it’s understanding the conversion funnel and user behavior around your fake offer. The ultimate question is: Did enough people indicate “Yes, I want this” to justify moving forward? If yes, you likely have a green light; if no, analyze whether it’s a definitive “no” or if something could be adjusted in the test.

Advantages and Disadvantages of Fake Door Testing

Like any technique, fake door experiments come with pros and cons that you should weigh before using this method:

Advantages

  • Rapid Validation: Fake door tests allow you to validate (or invalidate) an idea extremely quickly. Instead of building a fully functional MVP over weeks or months, you can test the core interest in days. This speed means you can iterate faster and test multiple ideas in the time it normally takes to build one.
  • Low Cost, Low Effort: The investment required is minimal - often just adding a button or putting up a single landing page. You’re saving engineering and design resources by only creating the illusion of a feature. If the test fails, the cost of that failure is very small (basically the few hours or small ad budget spent on the test) compared to the cost of building an unused feature. This makes fake doors a very efficient filtering mechanism for ideas.
  • User-Centric Evidence: The data you get (clicks, sign-ups) is based on actual user behavior. This is a huge advantage for decision-making. It grounds product decisions in evidence (“100 out of 2,000 users clicked, so 5% showed interest”) rather than internal hunches or unreliable survey statements. It’s quantitative and can often be convincing evidence for stakeholders. As noted, this behavior-driven insight can validate or invalidate earlier user research: you might have heard in interviews that users say they want a feature - a fake door test will show if they act on that desire or not.
  • No Impact on Functionality: Because you’re not delivering a real feature, you don’t risk introducing bugs or performance issues to your product. The tested users don’t actually get new functionality (which is both a pro and a con). But on the pro side, if the idea was bad, none of your users had a broken or poor feature experience - they only encountered a “not available yet” message at worst.
  • Scalable and Repeatable: Fake door tests can be repeated for many ideas, and even run in parallel (as long as you can manage the analysis). They’re a core part of a growth hacking or experimental culture. Some companies run dozens of such tests a month to optimize everything from new features to pricing. The method is flexible enough to test big concepts (completely new product lines) or small additions (a single feature or even a single UI element change).

Disadvantages

  • Potential User Frustration: The biggest risk is annoying or deceiving your users. By its nature, a fake door involves a bit of a gotcha: the user clicked expecting something and then finds out it’s not actually available. This can erode trust if not handled carefully. Users might feel tricked or disappointed - “Why did you advertise something that isn’t real?” If done excessively, fake doors can give an impression of a “lousy product” or a company that over-promises. It’s important to mitigate this by being transparent as soon as they click (e.g., “Thanks for your interest, we’re gauging demand for this feature.”) and not making false promises beyond that. In short, overusing fake doors or executing them poorly can piss off your users and damage goodwill. Use them sparingly and ethically.
  • No Actual Value Delivered: A fake door test only tells you about interest - it does not prove that you can deliver the value, nor that users would be satisfied if you did. It’s a first-step validation of desirability (do people want this) but it doesn’t validate feasibility or usability. For example, people might click to try a one-hour delivery service, but if you can’t actually build it reliably, that interest alone doesn’t guarantee success. Or users might love the idea in concept (click), but the devil is in the details of execution which you haven’t tested yet. So, treat fake door results as necessary but not sufficient proof - the concept is worth pursuing further, but you’ll need additional testing (prototypes, usability tests, beta releases, etc.) to fully validate it.
  • False Negatives or Positives from Execution Bias: The data from a fake door test can be skewed by factors unrelated to the idea’s true value. For instance, if the fake door wasn’t noticeable or the wording was confusing, a great idea might appear to have no interest (false negative). Conversely, click curiosity could make a mediocre idea look tempting if phrased cleverly (false positive). You have to be careful designing the test and possibly run multiple variations to ensure you’re truly measuring interest in the concept, not just reactions to a particular phrasing or placement. In other words, fake door tests are sensitive to copy and design biases - the microdetails of how the door is presented can strongly influence results. Always account for this in analysis and consider follow-up experiments to validate the signal.
  • Not Suitable for All Questions: Fake doors are not appropriate for testing everything. For example, they are not good for evaluating the actual experience of using a feature (since the feature doesn’t exist yet), nor for testing things like onboarding flows or complex interactions. They also shouldn’t be used for core functionality that users expect to work - you wouldn’t fake your login or payment process, for instance. There’s a limit to what you can learn; fake door tests answer “would they start using it?” but not “will they keep using it, do they love it, does it solve their problem effectively?” Those aspects require functional prototypes or beta releases to evaluate.
  • Ethical and Brand Considerations: In some cases, faking an offering could cross ethical lines, especially if you’re taking money under false pretenses (e.g., taking pre-orders and then refunding - which must be handled with transparency) or if the feature touches sensitive areas (faking a security or healthcare feature, for example, could be seen as irresponsible). Always consider the communication and perception. A rule of thumb: be honest that it’s a test as soon as the user would potentially feel misled. For example, some teams explicitly mention on the “coming soon” page that “We’re exploring this feature and it’s not available yet”, to be transparent. Sacrificing a tiny bit of experimental purity for the sake of honesty can maintain user trust while still getting the data you need.

Fake door tests have tremendous upside in speed and learning, but they must be used thoughtfully to avoid alienating users. When done right - with transparency and restraint - the pros (quick validation, saving effort, learning what users actually want) usually outweigh the cons.

Fake Door vs. Other Product Experiment Techniques

Fake door testing is one tool in the product validation toolkit. It’s useful to understand how it compares to and differs from other lean experiment methods commonly used by product managers and UX researchers. Here’s a quick comparison with a few related techniques:

Fake Door Tests vs. Smoke Tests (Landing Page Tests)

The term “smoke test” in product development often refers to any experiment that tests user interest in an idea by showing a front-facing teaser and seeing if people bite (the analogy being “where there’s smoke, there’s fire”). In that sense, fake door tests are a type of smoke test. For example, a landing page advertising a product that doesn’t exist yet, measuring sign-ups, is a classic smoke test - and it is essentially a fake door. The two terms are sometimes used interchangeably. The key idea is testing the market’s reaction without having the actual product. Besides fake doors, other forms of smoke tests include things like running a video ad or demo (e.g., a concept video) to gauge interest, or putting up a crowdfunding campaign to see if people will put money down.

In practice: if someone says “we ran a smoke test for our idea,” it often means they did some form of fake offering - possibly a fake door. The landing page method is so common that fake door tests are also simply called landing page tests. All these are trying to answer “Do people want this enough to take some action?” early on. Fake door is just a more vivid term when the test is embedded as a door in an existing product (and it underscores that the door is fake/one-way).

One nuance: A generic smoke test could involve more elaborate setups (like a video or a prototype that isn’t fully functional) whereas a fake door test is usually straightforward (button or page). But fundamentally, they serve the same purpose of demand validation. So, think of a fake door test as one flavor of a smoke test. It’s especially handy when you can easily add a fake entry point in an existing interface or quickly spin up a web page.

Fake Door Tests vs. Concierge Tests

A Concierge test (Concierge MVP) takes the opposite approach of a fake door in terms of building vs. delivering. In a concierge MVP, you do not fake the offer - you actually deliver the value - but you do it manually and on a very small scale. The idea is to personally hand-hold a few customers through the service or solution to see if it truly solves their problem and if they’re happy with it, before automating or building any technology.

For example, say you’re testing a new food delivery concept. A concierge test would be to manually take a few customers’ orders by phone, go buy the food yourself, and deliver it to them - acting as the “concierge.” The user knows (or wouldn’t be surprised to learn) it’s a manual, bespoke service at first. The goal is to learn about the customer experience and validate that the solution actually provides value, without writing code.

Key differences from fake door:

  • In a concierge test, users do get to use/experience the service, albeit in a manual way. In a fake door, users don’t get the service at all (just a promise).
  • Concierge MVPs are transparent about being manual. In fact, being upfront that you’re delivering the service manually is recommended. Fake door tests, in contrast, involve a bit of deception up until the point you reveal the feature is not available.
  • Fake doors primarily test desirability/interest (“will they click?”). Concierge tests can test desirability plus usability plus value on a small scale, because you’re actually doing the thing and seeing if customers are satisfied enough to stick with it.
  • Concierge tests are much more time and effort intensive per user (since you’re doing everything manually), so they don’t scale for large numbers or broad interest testing. They are best for depth of learning with a handful of users. Fake doors are best for breadth of learning (quantitative interest) but zero depth.

Sometimes these methods complement each other. You might use a fake door to broadly validate that lots of people say they want a solution, and then use a concierge approach with a few early adopters to see how to deliver it well and what pitfalls there are before coding anything. Both approaches avoid building full product upfront, but one fakes it and one manually makes it.

Fake Door Tests vs. Wizard of Oz Testing

A Wizard of Oz experiment is somewhat a middle ground between fake door and concierge. In a Wizard of Oz test, you do have a user-facing interface and the user thinks they are using a real system, but behind the curtain (like the wizard behind the curtain in The Wizard of Oz story) humans are manually powering some or all of the functionality. In other words, the user’s experience is intended to be seamless and appear fully working, but you have not actually built the technology - you’re manually simulating it.

For example, you might have a “smart chatbot” feature in an app. Instead of building an AI, you set up a chat interface and when the user asks a question, a human quickly types a response that seems bot-like. The user believes the feature works (and in fact they get the value - they got an answer), but you’ve Wizard-of-Oz’d the backend.

How this differs from a fake door:

  • In a fake door, the user cannot use the feature at all - they are stopped at the door. In a Wizard of Oz, the user can go through and actually gets the illusion of a working feature.
  • Wizard of Oz tests hide the manual effort from the user - it’s crucial the user does not realize people are doing the work, or else the test might bias their behavior. You maintain the facade of automation. With fake doors, the facade drops immediately upon click (“this is not real yet”).
  • A Wizard of Oz experiment allows you to observe users interacting with what they think is a fully functional product. This lets you test usability and value: Are they using it as expected? Are they happy with the outcomes? It’s great for seeing if the concept actually delivers when in use, and what operational challenges arise. Fake door can’t tell you that, because no interaction beyond a click happens.
  • Wizard of Oz tests typically require more setup than a fake door. You need at least a rough front-end or prototype that users can interact with, and a plan to have humans in the loop for whatever needs to be faked. It’s often used for things that are technically challenging - e.g., testing a new algorithm-driven service by doing it manually first to see if results are good, without coding the algorithm.

Wizard of Oz = the user uses a fake “automated” system that is secretly human-powered. Concierge = the user gets a personal, hand-held manual service (and they know it). Fake Door = the user only sees the invitation, but there’s nothing beyond the door. All three are techniques to avoid full development initially, but they serve different learning goals and user experiences. It’s worth noting that Wizard of Oz and Concierge are typically done after an interest has been established - for instance, you might first use a fake door or market research to confirm people want something, then use Wizard of Oz to learn how to deliver it well. They are complementary in an experimentation roadmap.

A couple of other experiment types related to fake doors are:

  • Dry Wallet / Pre-sales Tests: This involves asking for a payment or reservation before the product is built - like Tesla’s deposit example. It’s essentially a fake door with an even higher bar (willingness to pay). Kickstarter or crowdfunding campaigns also fall here: customers commit money for a product that’s not built yet. These tests gauge not just interest, but actual purchase intent. They can be very powerful signals, but you must handle the ethics (refunds if you don’t deliver, etc.) carefully.
  • Impersonation or Imposter Screens: In some cases, teams have impersonated a partner or feature to test interest. For instance, showing an option like “Pay with PayPal” before you’ve actually integrated PayPal, just to see if users would click it - similar to a fake door. (This is essentially an internal fake door to test integration priority.)
  • Ad Campaign Tests: Even without a landing page, sometimes just running ads for various concepts and measuring click-through can be a lightweight fake door. Users click an ad and maybe you just don’t have a product page yet (or you take them to a generic “coming soon” signup). This is a quick way to test which tagline or product pitch resonates more with audiences (though it provides less insight than a full funnel).

All these methods, including fake door testing, fall under the umbrella of lean validation techniques. They help you answer different questions:

  • Fake door: Will they start to use it (do they care at all)?
  • Wizard of Oz: Will the experience deliver value - can we actually fulfill the promise in a way that users appreciate?
  • Concierge: What do users really do with this solution when given personal attention - and what does that teach us about what to build?
  • Pre-sales: Will they pay for it (and how much)?
  • Etc.

By combining these approaches appropriately, you can de-risk a product idea significantly before writing a ton of code or investing heavily. Fake door is often one of the earliest tests to run, because it’s easy and tests the crucial first assumption of desirability (there’s no point building something people don’t want). After a successful fake door test, you might move on to prototypes, Wizard of Oz trials, or concierge style engagements to validate feasibility and viability.

Next Steps

A fake door test is a powerful technique for product managers, growth hackers, UX designers, founders - anyone looking to validate an idea quickly and cheaply. It provides a reality check on user interest early in the product development process. By following the steps outlined - defining a clear hypothesis, implementing a believable teaser, measuring rigorously, and handling users ethically - you can gain invaluable insights with minimal cost. You’ll either uncover a potential winner to pursue or save yourself from a likely flop, all in a matter of days.

Keep in mind that fake door testing is just one of many experimentation plays you can run. It primarily answers “Do they want it?” Once you have that answer, there will be further questions like “Will they keep using it?”, “Can we deliver it effectively?”, “What’s the best way to implement it?” Those can be addressed with other methods (prototypes, beta releases, usability tests, etc.). The journey of product validation often involves multiple experiments - from problem discovery to solution validation.

If you found the fake door approach useful, you may want to explore other validation techniques. In fact, Learning Loop’s Validation Patterns card deck is a great resource, featuring 60 product experiments (including fake door testing and many others) that help teams validate ideas in days, not months. These techniques are used by product builders at top companies like Google, Facebook, and Amazon. You can use them as inspiration to design the right test for whatever assumption you need to validate next. Consider grabbing the card deck or checking out the Learning Loop playbook library for guidance on experiments like Concierge tests, Wizard of Oz, Dry Wallet, and more.

Ultimately, the goal is to build product validation into your process. By continuously testing assumptions - with fake doors and beyond - you’ll build evidence-driven confidence in your product decisions. So the next time you have a bold feature idea or a new product concept, don’t just gamble on it - fake it till you make it (ethically, of course) and let the data speak. Your users’ actions will tell you if you’re on the right track long before you’ve poured resources into development. Happy experimenting!

Explore Further: To dive deeper into structured product experimentation, check out the Validation Patterns deck on Learning Loop or visit shop.learningloop.io for tools and resources to help you run lean experiments. Each experiment pattern comes with field tips and examples to maximize your learning. Start testing your ideas and iterate your way to product-market fit!

Fake door testing examples

Polyvore outfit sales

When the online store Polyvore tested their “outfit sales” feature, their most uncertain assumptions were if people were interested in shopping for outfits and whether customers would buy more if they got a bigger discount. They faked the clothing brand and the product team handled payment and shipping themselves.

Source: Polyvore outfit sales

Tesla build date

When releasing its first car, Tesla deployed a Fake Door experiment to validate demand. To validate Willingness to Pay before production had even begun, they asked customers to put down a $5,000 deposit to secure a build date. The traditional way would have been to start selling it once it was out.

Source: Pretotyping @ Work

Zynga new-game teasers

Game studio Zynga vets new game ideas by writing a five-word pitch for each concept and inserting it as a promo link inside live games. Click-through rates reveal which concepts excite players; only the winners get a development budget.

Source: GLIDR Help Center – Fake Door Smoke Test (Zynga case)

Buffer early landing page

Buffer’s founders launched with a single landing page describing the product and a “Plans & Pricing” button. Visitors who clicked discovered the tool was still in development and were invited to leave an email. The signup rate convinced the team to build the real product.

Source: Buffer Blog – Idea to Paying Customers in 7 Weeks

Dropbox explainer video

Before coding a file-sync platform, Dropbox published a short explainer video on a placeholder page. More than 70,000 people joined the waitlist overnight, giving clear evidence of demand long before a working prototype existed.

Source: RST Software – Fake Door MVP: What It Is and Should You Use It

This experiment is part of the Validation Patterns printed card deck

A collection of 60 product experiments that will validate your idea in a matter of days, not months. They are regularly used by product builders at companies like Google, Facebook, Dropbox, and Amazon.

Get your deck!