
User Test Competing Products

Learn from the UX of others


Also called: Comparative Usability Testing

Difficulty: Intermediate

Evidence strength: 15

Relevant metrics: Feature preferences, Satisfaction rank, Market gap

Validates: Usability, Desirability

How: Conduct user tests on similar products in the market to understand their strengths and weaknesses, then analyze the feedback to improve your own product. The insights reveal where your product stands in the market and where it can be enhanced.

Why: Benchmark your product against market standards to gain insights into your market position and opportunities for strategic improvement.

This product discovery method is part of the Discovery Patterns printed card deck

A collection of clever product discovery methods that help you get to the bottom of customer needs and coin the right problem before building solutions. They are regularly used by product builders at companies like Google, Facebook, Dropbox, and Amazon.

Get your deck!

One of the most direct ways to uncover design opportunities, usability gaps, and customer expectations is to watch users interact with products already in the market—including those you didn’t build. Whether those products are direct competitors, adjacent tools, or alternative solutions, user testing competing products allows product teams to understand how real users navigate familiar tools, what frustrates them, and what workarounds they rely on. This kind of research surfaces implicit expectations, common industry patterns, and often-missed points of differentiation.

Learn from the market, not just your users

While competitive analysis has long been part of the product strategy toolkit, it has traditionally focused on feature matrices and business benchmarks. What's often missing is the user's perspective: how people actually experience and evaluate those competing products in real scenarios. That's where user testing those alternatives, in real time with target customers, becomes so valuable.

Why test competitor products with real users?

It’s easy to assume that customers are loyal to your product’s uniqueness or differentiated value. But in practice, users don’t enter your experience in a vacuum: their mental models have been shaped by years of using other tools. Those tools set expectations for how certain tasks should be performed, what language is used, and what “intuitive” means. If you build without understanding that baseline, you risk building against the grain of user expectations.

Dave Bailey articulates this well: “If your users are going to compare you to competitors anyway, you might as well get there first.” The purpose of testing competitors isn’t to copy them—it’s to understand the strengths and weaknesses that real users perceive, and to discover how your own design can exceed those expectations or offer a meaningful alternative.

What user testing competing products looks like

The setup is simple in concept: recruit participants from your target audience, give them a set of realistic tasks, and observe how they attempt those tasks in competing products. As they navigate, you ask them to think aloud—to share what they’re trying to do, what they expect to happen, and how they’re making sense of the interface.

You don’t need a large sample to gain insights. Testing 3–5 users per product can uncover recurring patterns in confusion, delight, or behavioral workarounds. This method works especially well when comparing:

  • Your prototype or MVP against an incumbent product.
  • Two competing tools you’re trying to position against.
  • Multiple versions of your own product as part of an iteration loop.
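
As a back-of-envelope check on why small samples work, the widely cited Nielsen/Landauer model estimates the share of usability problems seen at least once after testing n users as 1 - (1 - p)^n. Here is a minimal sketch, assuming the commonly quoted per-user discovery rate of about 0.31; that rate is an assumption, and real values vary by product and task:

```python
# Back-of-envelope: expected share of usability problems seen at least
# once after testing n users, via the Nielsen/Landauer model
# P = 1 - (1 - p)^n. The per-user discovery rate p is an assumption;
# 0.31 is the figure commonly cited in the usability literature.

def problems_found(n_users: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n_users

for n in range(1, 8):
    print(f"{n} users per product -> ~{problems_found(n):.0%} of problems")
```

Under that assumption, five users per product already surface roughly 85% of recurring problems, which is why 3–5 participants per product is usually enough for pattern spotting.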

Testing your own design variants in the same session as competitors can be particularly insightful. It helps you benchmark potential directions before committing to one.

The key is to let users drive the session, not your assumptions. You’ll often find that the elements teams obsess over in design reviews go unnoticed by users, while a minor piece of microcopy or a single navigation choice can derail the task entirely.

What it doesn’t look like

It’s important to note: this is not a way to cherry-pick features or clone competitors. As NN/g warns, blindly borrowing designs without context may result in recreating bad patterns. The goal is to learn from how users interact with competing tools, not from what designers intended.

Testing competitors helps you ground your own product decisions in user evidence, not internal debate. But that evidence must be interpreted within the context of your product strategy, constraints, and users.

Best practices for testing competing products

Experienced practitioners emphasize two things above all: consistency across sessions and recruiting users who are genuinely relevant. Here are some principles to guide your sessions:

  • Limit to 2–3 products per session to avoid fatigue or bias.
  • Use the same tasks across all tools, framed around goals rather than UI steps.
  • Recruit your actual target users, not just general testers.
  • Randomize product order to control for learning effects (see the counterbalancing sketch after this list).
  • Use neutral language—don’t imply which tool is yours or which is “better.”
  • Let users narrate, then follow up with reflection questions at the end.
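
To make the randomization advice concrete, here is a minimal counterbalancing sketch in Python. It assumes participants are numbered as they are scheduled; the product names are placeholders:

```python
# Minimal counterbalancing sketch: rotate through all orderings of the
# tested products so each order appears about equally often across
# participants, controlling for learning effects.
from itertools import permutations

products = ["Your prototype", "Competitor A", "Competitor B"]
orders = list(permutations(products))  # all 6 orderings of 3 products

def order_for(participant: int) -> tuple[str, ...]:
    # Cycle deterministically so the schedule stays balanced.
    return orders[participant % len(orders)]

for i in range(6):
    print(f"Participant {i + 1}: {' -> '.join(order_for(i))}")
```

Cycling through every ordering deterministically keeps the schedule balanced even with small participant counts, which pure random shuffling cannot guarantee.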

Supplement qualitative insights with metrics like:

  • Task success rate
  • Time on task
  • Error rate
  • System Usability Scale (SUS) scores
  • Post-task confidence or satisfaction ratings
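
A hedged sketch of how these metrics might be computed from session notes follows. The session data structure is hypothetical; the SUS scoring formula (odd items contribute score - 1, even items contribute 5 - score, and the sum is multiplied by 2.5) is the standard one:

```python
# Sketch: summarizing per-product quantitative measures from sessions.
# The session records below are hypothetical observations.
from statistics import mean

def sus_score(responses: list[int]) -> float:
    """Standard SUS: 10 items rated 1-5, scaled to 0-100."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5

sessions = [  # hypothetical observations for one tested product
    {"success": True,  "seconds": 74,  "errors": 1,
     "sus": [4, 2, 4, 1, 5, 2, 4, 2, 4, 1]},
    {"success": False, "seconds": 132, "errors": 3,
     "sus": [3, 3, 3, 2, 4, 3, 3, 3, 3, 2]},
]

print(f"Task success rate: {mean(s['success'] for s in sessions):.0%}")
print(f"Mean time on task: {mean(s['seconds'] for s in sessions):.0f}s")
print(f"Mean error count:  {mean(s['errors'] for s in sessions):.1f}")
print(f"Mean SUS score:    {mean(sus_score(s['sus']) for s in sessions):.1f}")
```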

Comparative usability testing can be complemented by follow-up research methods such as:

  • Preference testing
  • Post-study surveys
  • Validation interviews

These methods help you confirm the validity of insights and identify whether usability or messaging improvements are needed.

Analyzing the data

When users engage with competing products, they surface insights that help you:

  • See which flows are intuitive and where users get stuck.
  • Understand what users assume buttons and labels will do.
  • Learn which features users rely on and which they ignore.
  • Discover what parts of the experience users tolerate, work around, or even enjoy.
  • Spot patterns across competitors that signal expected norms or anti-patterns.

In addition to success rates and task efficiency, be sure to observe emotional cues like hesitation, sighs, or delighted exclamations. These moments often carry more weight than the metric data alone.

Prompting users before and after tasks—“What are you expecting to do here?” or “How did that feel?”—can help anchor their behavior in context and reveal misaligned expectations.
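
When synthesizing those observations, a simple tally of coded notes can reveal which frustrations recur and where. A small sketch, with hypothetical tags and products; in practice the tags come from reviewing recordings and session notes:

```python
# Sketch: tallying coded observations across sessions to spot patterns.
from collections import Counter

observations = [  # hypothetical (product, tag) pairs coded from notes
    ("Competitor A", "hesitation at checkout"),
    ("Competitor A", "hesitation at checkout"),
    ("Competitor B", "misread 'Save' label"),
    ("Competitor A", "delighted by autosave"),
    ("Competitor B", "misread 'Save' label"),
]

for (product, tag), n in Counter(observations).most_common():
    print(f"{product}: '{tag}' observed {n}x")
```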

Making the most of what you observe

To make this method actionable, you’ll want to synthesize the findings across participants and tools. The goal isn’t to declare a “winner” or best product. Instead, you’re looking for:

  • Design elements that reduce friction or support task clarity.
  • Moments of confusion, hesitation, or repetitive behavior.
  • Common language users use to describe tasks or interface elements.
  • Gaps where user needs are ignored or handled poorly across all tools.
  • Specific competitor choices you want to emulate, improve, or avoid.

Use structured synthesis tools, such as templates for documenting friction points, user emotions, and quotes. These visual aids make it easier to align stakeholders and build shared understanding. Recording sessions and sharing video clips internally can also build empathy and support for research findings.

Presenting your insights as a benchmark

One effective way to present your findings is with a table format that places competitors as columns and features, flows, or customer benefits as rows. This enables side-by-side comparison and supports easier pattern spotting.

Here’s a simplified example of what such a table might look like:

Feature / Benefit        | Your Product | Competitor A    | Competitor B
-------------------------|--------------|-----------------|--------------
Sign-up Process          | 🟡 Average   | 🟢 Easy         | 🔴 Confusing
Task Completion Time     | 🟢 Fast      | 🔴 Slow         | 🟡 Moderate
Navigation Intuitiveness | 🟢 Intuitive | 🟡 Okay         | 🔴 Confusing
Terminology Clarity      | 🟡 Mixed     | 🔴 Jargon-heavy | 🟢 Clear
User Delight Moments     | 🟢 Several   | 🔴 Few          | 🟡 Some

You can enrich this table with user quotes, task success rates, or emotional cues. It works well in debriefs and strategy sessions to summarize findings without needing to replay every video or read every transcript.
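
If the underlying ratings are numeric, the traffic-light cells can be derived consistently rather than assigned by feel. A small sketch, with illustrative thresholds, products, and scores that are assumptions rather than fixed rules:

```python
# Sketch: deriving traffic-light cells from averaged 1-5 session ratings.
def cell(score: float) -> str:
    if score >= 4.0:
        return "🟢 Strong"
    if score >= 3.0:
        return "🟡 Mixed"
    return "🔴 Weak"

ratings = {  # hypothetical mean ratings per row, per product
    "Sign-up Process":          [3.4, 4.5, 2.1],
    "Navigation Intuitiveness": [4.2, 3.1, 2.5],
}
products = ["Your Product", "Competitor A", "Competitor B"]

print(f"{'Feature / Benefit':<26}" + "".join(f"{p:<15}" for p in products))
for row, scores in ratings.items():
    print(f"{row:<26}" + "".join(f"{cell(s):<15}" for s in scores))
```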

What a comparative study typically includes

User testing competing products becomes even more effective when integrated into a broader research plan. Rather than treating it as a standalone study, you can embed it within a multi-method research sprint to surface deeper context and increase the reliability of your findings.

Begin with customer interviews to identify pain points, workarounds, and unmet needs across current tools. These conversations shape the tasks you assign during competitor testing and help ground usability issues in the real-world context of your target audience.

You might also integrate a heuristic evaluation—where UX experts review and score the competitor products. Comparing expert insights with observed user behavior can help uncover mismatches between what “should” work and what actually works in practice.

After testing sessions, you can layer in surveys or preference testing. These methods help validate insights at scale and capture user sentiment or choice rationale. If users clearly prefer one version or design model, understanding why can guide roadmap priorities.

For products with complex workflows, journey mapping can be used to visualize how competing tools support or hinder the broader user journey. This is especially useful for identifying gaps your product could fill. Finally, if navigation problems emerge, card sorting or tree testing can help ensure your own IA avoids the same pitfalls.

Together, these methods provide a multi-angle view—combining observation, explanation, and evaluation—to inform more confident design decisions.

When to test competing products

User testing competitors is most powerful early in the product lifecycle, before you’ve committed to a solution. It’s especially useful when:

  • Entering a new market with established incumbents.
  • Trying to understand why users churn from another tool to yours—or vice versa.
  • Exploring which features actually matter to users in real workflows.
  • Prototyping a replacement or redesign of an existing experience.
  • Trying to validate positioning or pricing against other options.

Even in later stages, this method can serve as a benchmark to understand how your usability evolves over time relative to industry standards.

Learning from the market, not just your users

User testing your own product shows how it performs. But user testing competing products shows what users expect before they meet you—what’s intuitive, what frustrates them, and what they’re already adapting to. When combined, these methods give product teams a more complete picture of what to build, how to design it, and how to differentiate in a way that feels natural to users.

Whether you’re building a challenger product or evolving a mature one, testing the field is a critical part of designing something truly usable—and ultimately, preferable.

Examples

Dell

Dell conducts user tests on competing laptops to gauge user reactions and preferences, guiding their design and feature enhancements to better meet market demand.

Bose

Bose uses user testing of competing audio products to understand user preferences in sound quality and design, informing their product development strategies.

Ryanair

Ryanair collaborated with UserZoom to conduct remote, unmoderated usability testing on their own and competitors’ websites. This approach allowed them to gather extensive qualitative and quantitative data, leading to a redesigned website with improved functionality and user experience.

Source: analysia.com

McDonald's

Prior to launching their UK mobile app, McDonald’s engaged SimpleUsability to perform usability tests comparing their app with competitors like Starbucks. The study identified issues such as poor call-to-action visibility and lack of order customization, which were addressed to enhance the app’s user experience.

Source: analysia.com

SoundCloud

SoundCloud partnered with test IO to conduct continuous usability testing on their mobile app, benchmarking it against competitors. This process uncovered over 150 usability issues, including 11 critical ones, leading to significant improvements in the app’s performance across various devices and regions.

Source: analysia.com

Domino's vs. Pizza Hut

Trymata conducted a comparative usability study between Domino’s and Pizza Hut’s online ordering platforms. By having users perform identical tasks on both websites, they gathered quantitative data on user preferences and identified areas where each platform excelled or needed improvement.

Source: trymata.com

American Airlines

American Airlines utilized remote usability testing to benchmark their digital experience against competitors. This strategy provided insights into user needs and informed enhancements to their digital products.

Source: usertesting.com
