Idea Validation: Product

Technical Spike

Explore technical solutions for feasibility

Run a Technical Spike play

How: Create spike solutions to resolve tough technical questions before implementation. Spikes focus solely on the issue at hand and are usually discarded or re-created afterwards.

Why: Avoid getting stuck. Instead of just imagining how complex ideas will behave, write code to test them. Identifying issues upfront turns the rest of the work into a matter of execution.

In the early stages of product discovery, it’s common to encounter uncertainty—not just about what users want, but whether it’s technically feasible to deliver a viable solution. This is where “Technical Spikes” become a valuable tool. Technical Spikes are short, focused, time-boxed explorations of technical challenges designed to reduce risk and improve decision-making. They help product teams explore unknowns, validate implementation strategies, and uncover limitations before committing to a full build.

Originating from Extreme Programming (XP), the term was coined by Kent Beck to describe the idea of “driving a spike through the entire design”—a thin, vertical slice that tests technical depth and feasibility (source). While spikes don’t produce shippable features, they are critical for laying a confident foundation for subsequent delivery work. Ryan Singer’s concept in Shape Up of “getting one piece done” shares a similar philosophy—prioritize hands-on exploration early, and use that work to sharpen scoping, reduce surprises, and drive alignment.

Why run a technical spike

Technical Spikes exist to address questions that can’t be answered with a whiteboard session or a theoretical discussion. If the team is unsure whether an external API can scale, whether a data migration will be performant, or whether an animation can meet accessibility requirements—those are perfect candidates for a spike. Instead of guessing or over-engineering, the spike is a safe container to test, learn, and adapt.
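A spike probing one of these questions can be remarkably small. The sketch below, for example, times a burst of calls against a candidate service to surface obvious latency problems. The stub function, call count, and thresholds are all placeholders—in a real spike you would swap in an actual HTTP request against the service under evaluation:

```python
import random
import time

def call_external_api():
    """Stand-in for a real API call; replace with an HTTP request
    against the candidate service in an actual spike."""
    time.sleep(random.uniform(0.01, 0.03))  # simulated network latency
    return {"status": 200}

# Time a small burst of calls -- enough to surface obvious problems,
# not a full load test.
timings = []
for _ in range(20):
    start = time.perf_counter()
    call_external_api()
    timings.append(time.perf_counter() - start)

avg = sum(timings) / len(timings)
print(f"avg: {avg * 1000:.1f} ms, max: {max(timings) * 1000:.1f} ms")
```

Twenty minutes of this kind of measurement often answers a question that a week of whiteboard debate cannot.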

They are not a replacement for feature work. Instead, they create the conditions for better features—through informed trade-offs, realistic estimates, and mitigated technical debt. Spikes are especially valuable in complex systems, integrations, or when adopting new tools or frameworks. Even seasoned engineers benefit from the space to experiment before committing to implementation.

Spikes are fundamentally a type of lean experimentation. Their primary goal is to create fast learning cycles that allow the team to reduce uncertainty without committing significant resources. Unlike production work, spikes are exploratory in nature and should be evaluated based on the clarity they provide—not on the code produced.

What a Spike is (and isn’t)

A Technical Spike is not an excuse to code without direction. It is a deliberate form of technical discovery. It’s time-boxed, narrow in scope, and focused on answering a specific question. The output is not production code—it’s knowledge. That might include a throwaway prototype, a list of constraints, benchmark results, or an informed recommendation.

Importantly, spikes are not filler work. Nor are they a catch-all for unscheduled engineering. They are an intentional, scheduled practice designed to isolate learning. Spike code should always be isolated from production environments. The goal is to learn without polluting the main codebase or creating technical debt.

The common thread across all spikes is that they aim to narrow uncertainty through practical action and focused inquiry. If a spike ends with clarity and alignment, it has succeeded—even if the answer is “this won’t work.”

Types of spikes and use cases

Spikes can take many forms depending on the problem space:

  • Technical spikes validate feasibility of architecture, integration, or performance.
  • Design/UX spikes test the practicality of interface or interaction concepts.
  • Research spikes evaluate different approaches or third-party services.
  • Performance spikes benchmark resource usage under specific loads.
  • Integration spikes map and test communication between multiple systems.

Some spikes might also include exploratory design tasks to surface user interface constraints or collaboration patterns. Agilemania categorizes spikes further into architectural, usability, data, and experimentation types (source)—demonstrating how spikes apply across the product and technical landscape.

How to structure and execute a Technical Spike

To run an effective spike, begin by clearly defining the learning goal. What’s the one question you want answered? The narrower the focus, the more likely you’ll get usable insights.

Once the goal is set, time-box the effort. Common durations range from half a day to a few days. Timeboxing ensures that the spike doesn’t balloon into informal feature development. Agreeing upfront on time constraints prevents scope creep and helps the team stay within its learning lane.

Choose the format that suits your goal—this might mean writing a script to call an API, prototyping a component, or mocking a data pipeline. Avoid general coding or overengineering. Spikes are not about writing reusable code—they are about learning fast.
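As an illustration of the "mock a data pipeline" format, a spike might test whether incoming records can be normalized into the shape a downstream store expects. The record fields and transformation here are invented for the example—the point is how little code a useful spike needs:

```python
# Hypothetical spike: can we normalize raw incoming records into the
# shape our downstream store expects? Field names are made up.
raw_records = [
    {"user": "A@Example.com", "amount": "19.99", "currency": "USD"},
    {"user": "b@example.com", "amount": "5", "currency": "eur"},
]

def normalize(record):
    """Coerce one raw record into the downstream schema."""
    return {
        "email": record["user"].lower(),
        "amount_cents": round(float(record["amount"]) * 100),
        "currency": record["currency"].upper(),
    }

normalized = [normalize(r) for r in raw_records]
print(normalized)
```

If the transformation turns out to be messy—inconsistent currencies, ambiguous amounts—the spike has done its job by exposing that before the real pipeline is built.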

Keep spike code separated from production code to protect long-term quality. Use branches, throwaway repos, or even local-only experiments. Treat the output as temporary by default.

Throughout the spike, document what you’re testing, what assumptions you’re challenging, and what you’ve learned. Many teams use spike templates or report formats to structure this knowledge. Share findings with the team, and if appropriate, turn them into backlog items or architectural decisions.

Some teams formalize spikes as backlog items with acceptance criteria like “we validated that integration latency is under 250ms” or “we identified 3 edge cases for multi-tenancy.” In Shape Up-inspired teams, this aligns with early shaping: test constraints before committing to delivery.
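An acceptance criterion like the latency example can even be encoded directly in the spike script, so the spike ends with an unambiguous pass/fail verdict. This is a sketch under assumed numbers: the measurement function is a placeholder for one round-trip against the integration under test, and the 250 ms budget mirrors the example criterion above:

```python
import statistics
import time

LATENCY_BUDGET_MS = 250  # the spike's acceptance threshold

def measure_integration_call():
    """Placeholder for one round-trip against the integration under test."""
    time.sleep(0.05)  # simulated 50 ms round-trip
    return 200

samples_ms = []
for _ in range(10):
    start = time.perf_counter()
    measure_integration_call()
    samples_ms.append((time.perf_counter() - start) * 1000)

# Rough p95 over a small sample: second-slowest of 10 measurements.
p95 = sorted(samples_ms)[int(0.95 * len(samples_ms)) - 1]
median = statistics.median(samples_ms)
verdict = "PASS" if p95 < LATENCY_BUDGET_MS else "FAIL"
print(f"median={median:.0f} ms, p95={p95:.0f} ms -> {verdict}")
```

Framing the spike's output as a verdict against a pre-agreed threshold keeps the discussion about facts rather than impressions.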

Organizations like Codebots institutionalize this practice further by reserving 10% of engineering capacity per iteration for spike work. Using a risk matrix, they identify areas where spikes will have the greatest impact, giving teams a way to learn systematically.

Spikes as lean product experiments

At their core, spikes are lean experiments. Like other lean validation tools, their value lies in the speed and quality of feedback they generate. By narrowing a question and testing it in isolation, teams reduce the cost of learning. And like other lean techniques, spikes should be paired with methods that test desirability or usability. For example, a technical spike validating integration with a third-party service might be paired with a customer interview or usability test to confirm the value of that service.

Spikes aren’t just technical insurance. They’re learning loops. And when aligned with product hypotheses, they serve as a feedback mechanism for opportunity discovery as well.

What to do after a Spike

When a spike ends, the value lies in the insights gathered. Share those findings in a concise format. Teams often use spike reports, Loom videos, or even short demos. Clarify what the spike proved or disproved. Does it support moving forward with a proposed architecture? Has it changed your estimate or approach? Is more exploration needed?

Use the spike outcomes to inform the scope of the follow-up implementation. Many delivery challenges are symptoms of underexplored technical constraints. Spikes help you spot those challenges early and adjust course accordingly.

In some cases, the result of a spike is “we shouldn’t build this.” That’s a win. Avoiding expensive wrong turns is just as valuable as building the right thing.

Combining spikes with other discovery work

Technical Spikes don’t live in a vacuum. They are most powerful when paired with other discovery activities—for instance, customer interviews or usability tests that confirm the desirability of what the spike proves feasible.

You can also use spikes to “de-risk” a shaped pitch before the betting table, or to vet assumptions before a delivery cycle. In continuous discovery, spikes help keep the solution space grounded in reality.

Make spikes work for you

Technical Spikes are a lean, effective way to reduce risk and build momentum in product discovery. They represent an investment in learning that pays dividends across planning, design, and delivery. Instead of deferring technical questions until it’s too late, spikes bring them to the forefront—creating faster feedback loops, tighter alignment, and more resilient solutions.

In a world where uncertainty is the norm, Technical Spikes offer something rare: confidence. Not because they guarantee success, but because they make failure safer, cheaper, and more instructive. That’s the kind of failure that moves teams forward.

And in lean product discovery, that’s what it’s all about.

Technical Spike examples

iPhone multitouch display

In 2003, Apple’s VP of Design, Jony Ive, gathered his team for one of their bi-weekly brainstorms, where Duncan Kerr demonstrated a technical prototype of what was possible with multi-touch displays. The experiments initially moved toward an iPad-like design but, over a series of technical spikes, morphed into the iPhone, released four years later.

Source: cultofmac.com

API integration feasibility

A software development team faced uncertainty integrating their application with a third-party API. They allocated a week-long technical spike to explore the API, understand its functionality, and test the integration process. This spike provided clarity on potential hurdles and informed their implementation plan.

Source: thinkproductgroup.com

Evaluating cross-platform development tools

A mobile app development team considered adopting a new cross-platform tool to enhance productivity. Unsure of its compatibility with their existing tech stack, they conducted a spike to evaluate the tool’s integration capabilities and impact on workflow, leading to an informed adoption decision.

Source: thinkproductgroup.com

Checkout process optimization

An e-commerce company aimed to redesign their checkout process to reduce cart abandonment. They hypothesized that a one-page checkout would improve user experience. Through a spike involving prototyping and user testing, they validated this assumption and optimized the checkout flow accordingly.

Source: thinkproductgroup.com

Microsoft's use of spikes in Agile development

A team at Microsoft frequently leverages spikes to tackle complex challenges. These focused, timeboxed investigations resolve technical issues or clarify user stories, enabling more accurate estimates and better designs, thereby maximizing the efficiency of their Agile methodology.

Source: hellobonsai.com
