
Experiment Rapidly, Learn Relentlessly: The Guided Missile Approach to Value Creation

By Davide Andrea Picone · Catalyst Business Consulting · September 1, 2024
“A guided missile does not fly in a straight line. It launches in the direction of the target — and then it learns its way there.”

  • 64% of features never used (Standish 2020)
  • 1,000+ simultaneous A/B tests (Booking.com)
  • 973× faster deployments (DORA 2023)
  • 2.3× financial outperformance (McKinsey 2023)

[Image: a product team reviewing experiment results and customer feedback]

The retrospective as a scientific review: what did we hypothesise, what did we observe, what do we do next?

Traditional project management is built on a seductive fiction: that the future is knowable. Fix the scope. Set the budget. Agree the timeline. Then execute. The plan is the product. Deviation is failure.

The problem is that the future is not knowable. Markets shift. Customers surprise you. Competitors move. Technologies emerge. The assumptions baked into the plan at month one are often wrong by month three — and catastrophically wrong by month twelve. Yet the plan rolls on, consuming resources, generating outputs, and delivering something that nobody particularly wanted by the time it arrives.

There is a better model. It does not fix scope, budget, or timeline. It fixes direction and values — and then it learns its way to the destination. It treats every release, every customer conversation, and every failed experiment as data. It adjusts continuously, like a guided missile that does not fly in a straight line but constantly corrects its course based on real-time feedback from the target.

This is not a new idea. It is the operating principle behind the most successful product organisations in the world — Amazon, Spotify, Netflix, Booking.com, Monzo. It is the logic of lean startup, modern agile, and the scientific method applied to business. And it is, for most organisations, the single biggest untapped source of competitive advantage.

The problem with fixed plans

The Standish Group has tracked software project outcomes since 1994. Their CHAOS Report is not cheerful reading. In 2020, only 31% of projects were delivered on time, on budget, and with the originally specified features. 50% were challenged — late, over budget, or missing features. 19% failed outright.

64%

of built features are never or rarely used

The Standish Group's 2020 CHAOS Report found that only 31% of software projects succeed by traditional measures. But more damaging than the delivery failures is what the report found about the features that were delivered: 45% of features in completed projects were never used. Another 19% were rarely used. Nearly two-thirds of what was built was waste.

This is the fundamental failure of fixed-scope planning: it optimises for delivery of the plan rather than delivery of value. Teams build what was specified, not what is needed. By the time the product ships, the market has moved, the customer has changed their mind, or a competitor has already solved the problem differently. The plan was executed perfectly. The outcome was worthless.

“No plan survives first contact with the enemy. In business, the enemy is reality — and reality always wins.”

— Adapted from Helmuth von Moltke the Elder

The alternative is not no plan. It is a different kind of plan — one that is explicit about what is known and unknown, that builds in mechanisms for learning, and that treats the plan itself as a hypothesis to be tested rather than a contract to be executed.

The guided missile model: direction without rigidity

A guided missile does not fly in a straight line. It launches in the direction of the target, then continuously receives feedback — from radar, from sensors, from the environment — and adjusts its trajectory accordingly. It is not aimless. It has a clear target. But the path to that target is determined in real time, not pre-programmed at launch.

This is the model for rapid experimentation. You fix the destination — the customer outcome you are trying to create, the value you are trying to deliver — and you leave the path open. Every sprint, every release, every customer conversation is a course correction. You are always moving towards the target, but you are never locked into a route that the terrain has already made obsolete.

Amazon's working backwards process

Amazon's product development process begins not with a specification but with a press release — written as if the product already exists and has already succeeded. The press release describes the customer problem, the solution, and the benefit in plain language. Everything that follows — the technical design, the roadmap, the resource allocation — is in service of making that press release true. The destination is fixed. The path is discovered. Teams are free to experiment with how they get there, as long as they are moving towards the customer outcome described in the press release.

The guided missile model requires two things that traditional planning resists: genuine clarity about the destination (what customer outcome are we creating?) and genuine openness about the path (we do not know yet how we will get there, and that is fine). Most organisations are weak on both. They have vague destinations — "improve the customer experience", "increase market share" — and rigid paths — "deliver these features by Q3". The combination is lethal.

The build-measure-learn loop: experimentation as a system

Eric Ries's Lean Startup framework gave the experimentation model its most widely adopted structure: the build-measure-learn loop. The logic is simple. You have a hypothesis about what will create value for customers. You build the smallest possible thing that will test that hypothesis. You measure what actually happens. You learn from the gap between what you expected and what occurred. Then you decide: persevere, pivot, or stop.

The key word is smallest. The minimum viable product (MVP) is not a stripped-down version of the full product. It is the minimum experiment needed to test the most important assumption. If your most important assumption is "customers will pay for this", your MVP is a landing page with a payment button, not a working product. If your most important assumption is "customers find this problem painful enough to change their behaviour", your MVP is a conversation, not a prototype.

1,000+

simultaneous A/B tests at Booking.com

Booking.com runs over 1,000 simultaneous A/B tests at any given moment. Every change to the product — every button colour, every copy variation, every new feature — is tested against a control group before it is rolled out. The company's entire product strategy is built on the principle that intuition is a hypothesis, not a fact, and that the only way to know what works is to test it.
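The mechanics of a controlled test like this are simple enough to sketch in a few lines. The following is an illustrative two-proportion z-test for comparing a variant against a control; it is not Booking.com's actual tooling, and the conversion numbers are invented for the example.

```python
from math import sqrt

def ab_test_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se                             # z-score of the observed lift

# Hypothetical experiment: 10,000 users per arm, 10% vs 11% conversion
z = ab_test_z(conv_a=1000, n_a=10000, conv_b=1100, n_b=10000)
print(f"z = {z:.2f}")  # → z = 2.31 (|z| > 1.96 is significant at the 5% level)
```

The point of the sketch is the discipline, not the statistics: the variant ships to a fraction of users, the control keeps the rest honest, and the decision to roll out is made by the data rather than by the loudest voice in the room.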

The build-measure-learn loop is not just a product development technique. It is a way of thinking about all organisational decisions. Should we enter this market? Run an experiment. Should we change this process? Run an experiment. Should we hire for this role? Run an experiment. The question is always: what is the smallest, fastest, cheapest way to find out if this assumption is true?

Spotify's squad model and the experiment culture

Spotify's famous squad model is built around the principle that small, autonomous teams should be able to experiment independently without waiting for central approval. Each squad owns a specific customer outcome — not a set of features, but an outcome — and is free to experiment with how they achieve it. Squads run their own experiments, measure their own results, and make their own decisions about what to build next. The result is an organisation that can run hundreds of experiments simultaneously, learning at a pace that no centralised planning process could match.

Customer feedback as navigation: the feedback loop that matters

The guided missile needs a signal to navigate by. In product development, that signal is customer feedback — not the feedback you collect in annual surveys or quarterly reviews, but the continuous, real-time signal of what customers actually do when they encounter your product.

There is a crucial distinction here between what customers say and what customers do. Customers are notoriously unreliable narrators of their own behaviour. They will tell you they want a feature and then never use it. They will say price is not important and then choose the cheaper option. They will claim to value sustainability and then buy the convenient product. The only reliable signal is behaviour — what people actually do when they have to make a real choice with real consequences.

“Get out of the building. The answers are not in the conference room. They are with the customers.”

— Steve Blank, The Four Steps to the Epiphany (2005)

Monzo's community-driven product development

Monzo, the UK challenger bank, built its product almost entirely through community feedback in its early years. The Monzo community forum — where customers could propose, discuss, and vote on features — was not a marketing exercise. It was the primary product roadmap input. Features that the community wanted and used were built. Features that sounded good in theory but generated no community engagement were deprioritised. The result was a product that felt genuinely built for its users, because it was — iteratively, experimentally, in direct response to what customers actually asked for and actually used.

The feedback loop that matters is short. The longer the gap between building something and finding out whether it works, the more expensive the learning. A team that ships weekly and measures daily learns fifty times faster than a team that ships quarterly and reviews annually. That learning velocity compounds. Over two years, the fast-learning team has made hundreds of course corrections. The slow-learning team has made eight.

973×

more frequent deployments in elite teams

The DORA research programme found that elite software delivery teams deploy to production 973 times more frequently than low performers. But the more significant finding is what that frequency enables: elite teams recover from failures 6,570 times faster, not because they are more careful, but because they find out about failures sooner and fix them faster. Frequency of deployment is a proxy for frequency of learning.

Value creation as the compass: what are you actually optimising for?

Rapid experimentation without a clear value compass is just rapid activity. The guided missile needs a target. The target is not "ship features" or "hit the roadmap" or "meet the deadline". The target is customer value — a specific, measurable improvement in the life or work of the people you are serving.

Modern Agile's first principle — "Make People Awesome" — is the clearest articulation of this compass. The question is not "did we build what we planned?" but "did we make our customers more capable, more effective, more successful?" Every experiment should be designed to answer that question. Every measurement should be connected to that outcome. Every decision about what to build next should be driven by the answer.

Netflix and the North Star metric

Netflix's product experiments are all oriented around a single North Star metric: member retention. Not views, not ratings, not content volume — retention. Every experiment is evaluated against its impact on whether members stay. This clarity of compass means that the organisation can run thousands of experiments simultaneously without losing coherence. Teams do not need to agree on every decision. They need to agree on what they are optimising for. The North Star provides that alignment, and the experiments provide the learning.

The value compass also determines what you stop doing. One of the most important outputs of a rapid experimentation culture is the decision to kill things that are not working. This is harder than it sounds. Organisations develop emotional attachments to their plans, their features, and their roadmaps. Killing a feature that took six months to build feels like failure. In an experimentation culture, it is success — you found out it did not create value before you spent another six months on it.

“The most important thing is to find out you are wrong as fast as possible.”

— Jeff Bezos, Amazon shareholder letter (2015)

The three enemies of rapid experimentation

Most organisations understand the logic of rapid experimentation. Few actually do it. The gap between understanding and practice is explained by three structural enemies that most organisations have built into their operating model.

Enemy 1: Annual planning cycles

Annual planning locks resources, priorities, and commitments twelve months in advance. By the time the plan is approved, the assumptions it was built on are already partially obsolete. Teams spend the year executing a plan that was designed for a world that no longer exists, unable to redirect resources towards the opportunities and problems that have emerged since the plan was written. The solution is not to abandon planning — it is to plan at shorter horizons, with explicit review points, and with resources held in reserve for opportunities that cannot be predicted.

Enemy 2: Output metrics

When organisations measure teams by outputs — features shipped, story points completed, projects delivered — they create a powerful incentive to build things rather than to learn things. Teams optimise for the metric. They ship features that nobody uses because shipping is what is measured. They avoid experiments that might fail because failure looks bad on the dashboard. The solution is to measure outcomes — customer retention, activation rates, revenue per user, net promoter score — and to treat experiments that produce negative results as valuable learning, not as failures.

Enemy 3: Risk aversion in the approval process

In many organisations, every experiment requires approval from multiple layers of management. The approval process is designed to prevent bad decisions — but it also prevents fast decisions. By the time an experiment is approved, the opportunity it was designed to test may have passed. The solution is to push decision-making authority to the teams closest to the customer, with clear guardrails about what requires escalation and what does not. Amazon's "two-pizza team" principle — teams small enough to be fed by two pizzas — is partly about communication efficiency and partly about decision-making speed.

Psychological safety: the hidden prerequisite

Rapid experimentation requires something that most organisations do not explicitly cultivate: the willingness to be wrong in public. Experiments fail. Hypotheses are disproved. Features are killed. Products are pivoted. In a culture where failure is punished, none of this happens — or rather, it happens but is hidden, which is far more damaging.

Amy Edmondson's research on psychological safety is directly relevant here. Teams that feel safe to take interpersonal risks — to propose ideas that might not work, to report failures honestly, to challenge assumptions — learn faster than teams that do not. And learning speed is the primary output of a rapid experimentation culture.

2.3×

more likely to be top financial performers

A 2023 McKinsey study found that organisations with high psychological safety are 2.3 times more likely to be top financial performers. The mechanism is straightforward: safe teams surface problems earlier, experiment more freely, and learn faster. In an experimentation culture, psychological safety is not a nice-to-have. It is the operating system.

The practical implication is that building an experimentation culture requires building a safety culture first. Leaders who respond to failed experiments with blame or frustration will quickly find that their teams stop running experiments — or start running only the experiments they are confident will succeed, which is not experimentation at all. Leaders who respond with curiosity — "what did we learn? what would we do differently?" — build teams that experiment more boldly and learn more quickly.

What rapid experimentation looks like in practice

The principles are clear. The practice is specific. Here is what rapid experimentation actually looks like in organisations that do it well.

Short cycles with explicit learning goals

Elite teams work in cycles of one to two weeks, with each cycle having an explicit learning goal — not just a delivery goal. Before the cycle begins, the team articulates the hypothesis they are testing: "We believe that if we show users their spending breakdown on the home screen, they will engage with the budgeting feature more frequently." At the end of the cycle, they measure whether the hypothesis was confirmed or refuted. The delivery is in service of the learning, not the other way around.

Continuous customer contact

Teams that experiment rapidly maintain continuous contact with customers — not through quarterly research projects, but through weekly conversations, usability sessions, and behavioural data. The goal is to reduce the time between building something and finding out whether it works. Some teams embed a "customer interview Wednesday" into every sprint — a standing commitment to talk to at least two customers per week, regardless of what else is happening.

Explicit kill criteria

Before running an experiment, high-performing teams define the conditions under which they will stop. "If this feature does not increase activation by 10% within four weeks, we will remove it." This pre-commitment prevents the sunk cost fallacy — the tendency to keep investing in something because you have already invested in it. It also makes the decision to kill something feel like a success rather than a failure: you ran the experiment, you got the answer, you acted on it. That is exactly what you were supposed to do.
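A kill criterion becomes a genuine pre-commitment when it is written down before the data arrives. A minimal sketch of what that record might look like, using hypothetical metric names and thresholds rather than any particular team's tooling:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str
    metric: str            # the outcome being measured
    baseline: float        # metric value before the change
    min_lift: float        # pre-committed kill threshold, e.g. 0.10 = +10%
    deadline_weeks: int    # how long the experiment is allowed to run

    def verdict(self, observed: float, weeks_elapsed: int) -> str:
        if weeks_elapsed < self.deadline_weeks:
            return "keep measuring"
        lift = (observed - self.baseline) / self.baseline
        # the decision rule was fixed before the results came in,
        # so sunk cost never gets a vote
        return "persevere" if lift >= self.min_lift else "kill"

exp = Experiment(
    hypothesis="Spending breakdown on home screen increases budgeting use",
    metric="activation_rate", baseline=0.20, min_lift=0.10, deadline_weeks=4,
)
print(exp.verdict(observed=0.21, weeks_elapsed=4))  # +5% lift → "kill"
```

The value is in the sequencing: the threshold is agreed when the team is still neutral, so the verdict at week four is an execution of a prior decision, not a fresh negotiation.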

Retrospectives as learning infrastructure

The retrospective — a structured reflection at the end of each cycle — is the primary mechanism for converting experience into learning. But most retrospectives are too focused on process ("what slowed us down?") and not focused enough on learning ("what did we find out about our customers and our assumptions?"). Teams that experiment rapidly treat the retrospective as a scientific review: what did we hypothesise, what did we observe, what does that tell us, and what will we do differently next cycle?

The compounding advantage: why learning velocity wins

The most important thing about rapid experimentation is not any individual experiment. It is the compounding effect of learning faster than your competitors over time.

Consider two organisations. Organisation A runs quarterly planning cycles, ships features every three months, and reviews outcomes annually. In a year, it makes four major decisions and learns from four rounds of customer feedback. Organisation B works in two-week sprints, ships continuously, and reviews outcomes weekly. In a year, it makes twenty-six major decisions and learns from fifty-two rounds of customer feedback.

6.5×

more learning cycles over five years

Over five years, Organisation A has made twenty major decisions based on customer feedback. Organisation B has made 130. The gap in learning — and therefore in product-market fit, customer understanding, and competitive positioning — is not linear. It compounds. By year five, Organisation B is not five times better. It is operating in a different league entirely.
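The arithmetic behind the 6.5× figure is worth making explicit, using the cycle lengths from the example above:

```python
# Organisation A: quarterly cycles  → 4 major decisions per year
# Organisation B: two-week sprints → 26 major decisions per year
years = 5
decisions_a = 4 * years       # 20 decisions over five years
decisions_b = 26 * years      # 130 decisions over five years
print(decisions_a, decisions_b, decisions_b / decisions_a)  # → 20 130 6.5
```

The ratio itself is linear; the compounding comes from what each decision is built on. Organisation B's 130th decision rests on 129 prior corrections, so the quality gap between the two organisations widens far faster than the raw count suggests.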

This is why the most successful technology companies are so difficult to displace. It is not that they have better engineers or more capital. It is that they have been learning faster for longer. Amazon has been running experiments since 1995. Netflix since 1997. Google since 1998. The compounding of thirty years of faster learning produces an advantage that cannot be replicated by a competitor who decides to start experimenting today — but it can be approached by any organisation that commits to the discipline now.

“Our success at Amazon is a function of how many experiments we run per year, per month, per week, per day.”

— Jeff Bezos, Amazon shareholder letter (2011)

Starting where you are: the practical path in

You do not need to be Amazon or Netflix to benefit from rapid experimentation. You need to start where you are, with what you have, and build the muscle incrementally.

Step 1: Identify your most important assumption

Every strategy, every product, every initiative rests on assumptions. Most of them are never made explicit. The first step is to surface the assumption that, if wrong, would most damage your current direction. Write it down as a testable hypothesis: "We believe that [customer segment] will [do this behaviour] because [this reason]." Then design the smallest possible experiment to test it.

Step 2: Shorten your feedback loops

Whatever your current cycle time — quarterly, monthly, fortnightly — cut it in half. If you ship quarterly, ship monthly. If you review outcomes annually, review them quarterly. The goal is not perfection at each cycle. The goal is more cycles, more learning, more course corrections. Speed of learning beats quality of any individual decision.

Step 3: Measure outcomes, not outputs

Replace at least one output metric in your team's dashboard with an outcome metric. Instead of "features shipped", track "activation rate". Instead of "projects completed", track "customer retention". The metric you measure shapes the behaviour you get. Outcome metrics create the incentive to experiment and learn. Output metrics create the incentive to build and ship.

Step 4: Make failure safe

Run a retrospective after your next failed experiment and explicitly celebrate what was learned. Name the hypothesis that was tested. Name what the data showed. Name what the team will do differently as a result. Make the learning visible and valued. This single act — treating a failed experiment as a success in learning — does more to build an experimentation culture than any process change or framework adoption.

Conclusion: the missile always finds its target

The fixed-scope, fixed-budget, fixed-timeline project is not a plan. It is a bet — a bet that the world will stay still long enough for the plan to remain relevant, that the assumptions made at the start will still be true at the end, that the customer who was interviewed in month one will want the same thing in month twelve.

That bet almost always loses. Not because the teams are incompetent or the plans are poorly made, but because the world does not cooperate. Markets move. Customers change. Competitors act. Technology shifts. The plan, however carefully constructed, is always a model of a world that no longer exists.

The guided missile does not make that bet. It launches in the direction of value, and it learns its way there. It is not aimless — it has a clear target, a clear compass, a clear sense of what it is trying to create for the people it serves. But it holds the path lightly, adjusting continuously based on what it finds in the real world.

The organisations that have mastered this — Amazon, Netflix, Spotify, Monzo, Booking.com — are not smarter than their competitors. They are faster learners. They have built systems, cultures, and habits that allow them to find out what works and what does not faster than anyone else. And in a world where the only constant is change, learning velocity is the only sustainable competitive advantage.

The missile always finds its target. Not because it flew in a straight line, but because it never stopped correcting.

The Guided Missile Principles

A summary of the operating principles behind rapid experimentation and value-driven learning.

  1. Fix the destination (customer outcome), not the path (features and scope).
  2. Every release is an experiment. Every experiment has a hypothesis.
  3. Measure what customers do, not what they say.
  4. The smallest test that answers the most important question is the right test.
  5. Failed experiments are successful learning. Celebrate them.
  6. Shorten feedback loops relentlessly. Speed of learning beats quality of any single decision.
  7. Kill things that do not create value. Sunk cost is not a reason to continue.
  8. Psychological safety is the operating system. Without it, experimentation is theatre.

References

  • Standish Group (2020). CHAOS Report 2020: Beyond Infinity.
  • Ries, E. (2011). The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Business.
  • Blank, S. (2005). The Four Steps to the Epiphany. Cafepress.com.
  • DORA / Google Cloud (2023). Accelerate State of DevOps Report.
  • McKinsey & Company (2023). The State of Organisations.
  • Bezos, J. (2011). Amazon Annual Shareholder Letter.
  • Bezos, J. (2015). Amazon Annual Shareholder Letter.
  • Kerievsky, J. (2016). Modern Agile. modernagile.org.
  • Forsgren, N., Humble, J., & Kim, G. (2018). Accelerate: The Science of Lean Software and DevOps. IT Revolution Press.
  • Edmondson, A. C. (2019). The Fearless Organisation. Wiley.
  • Kniberg, H. & Ivarsson, A. (2012). Scaling Agile @ Spotify. Spotify Labs.
  • Booking.com Engineering Blog (2019). Continuous Experimentation at Booking.com.

Author

Davide Andrea Picone

Davide Andrea Picone is a consultant and practitioner with over two decades of experience across clinical practice, education, and business consulting. He specialises in helping organisations build the adaptive systems — experimentation cultures, rapid feedback loops, and outcome-driven teams — that enable sustained competitive advantage.
