The Grand Retrospective — Applying Agile Principles to Your Product Roadmap


The core principles of the Agile Manifesto, and the reason Scrum has gained such a cult following, are reflection and continuous improvement. Scrum tightens the iteration cycle and ends each cycle with a retrospective, in which you analyze the process and adapt it to improve continuously. This tight feedback loop allows teams to gel quickly, establish an efficient process, eliminate waste, and adapt easily to changing priorities. This is what makes agile development so great.

These principles can be applied at many different levels of the organization. In particular, we should be applying these principles in our strategic product planning. By applying a process of experimentation, measurement, reflection, and continuous improvement to the roadmap planning cycle, we can have the same benefits in product management as we do in software development.

Idea Capture

We must first ensure that all product ideas are captured and processed consistently, regardless of their origin. This means the CEO's ideas go through the same process of evaluation as feedback from customer support. The product manager's job is to create a funnel of ideas from various sources, capture them in the roadmap, and evaluate them independently of where they came from.

Tools like Aha! or ProdPad can assist in this process, but I am fine with a plain old spreadsheet. What's important here is not the document itself but the ceremony in which you review these product ideas and evaluate their impact on the business in a consistent, objective way. I recommend holding an in-depth roadmap planning session at least once a quarter, ideally once a month, in which you evaluate new ideas and reassess the existing ideas on the roadmap.
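As a concrete illustration, the rows in such a spreadsheet might carry fields along the lines of the sketch below; the field names and statuses are assumptions made for the example, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ProductIdea:
    """One row in a hypothetical idea-capture sheet; adapt the fields
    to whatever your own roadmap review ceremony actually needs."""
    title: str                  # short description of the idea
    source: str                 # e.g. "CEO", "customer support", "sales"
    date_captured: str          # when the idea entered the funnel
    driver_scores: dict = field(default_factory=dict)  # filled in at the planning session
    status: str = "new"         # new / scored / scheduled / released / retrospected

ideas = [
    ProductIdea("One-click signup", source="customer support", date_captured="2024-01-15"),
    ProductIdea("Annual billing plan", source="CEO", date_captured="2024-01-20"),
]
```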

Pirate Metrics

I am a big fan of Dave McClure's Pirate Metrics framework for evaluating product ideas. In this framework, all product ideas are evaluated and scored against a common set of business drivers that are critical to the company's success. The drivers themselves may differ depending on your industry, so you must define which ones matter most to you. All product ideas are then evaluated on these criteria, not on who speaks the loudest. This forces you to make objective decisions and to hold structured debates about the expected results of features.

You begin by defining the critical business drivers that matter to you. For internet startups, these drivers are:

Acquisition
Activation
Retention
Referral
Revenue

Once you’ve defined your business drivers, you identify the key events that make up your funnel. Here is an example funnel:

[Figure: an example Pirate Metrics funnel mapping key user events to each driver]
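To make this concrete, here is a minimal sketch of what the key events behind each driver might look like; the event names are illustrative assumptions, not a prescribed taxonomy:

```python
# Hypothetical mapping of each business driver to the product events that define it.
funnel = {
    "Acquisition": ["visited landing page", "clicked signup"],
    "Activation":  ["created account", "completed onboarding"],
    "Retention":   ["returned within 7 days", "used core feature 3+ times"],
    "Referral":    ["sent an invite", "invite accepted"],
    "Revenue":     ["started trial", "converted to paid plan"],
}
```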

This process is similar to planning poker in Scrum. Different people may argue about whether a feature scores a three or a five on a given driver, but it forces everyone to debate the things that really matter to the business. As with Scrum, it takes practice, but eventually you will find a rhythm.

Chaos to Clarity

Once all features are scored against the critical business drivers, a simple weighted formula determines their relative priority. At least once a year, set the relative weight of each business driver based on the needs of the business. For example, if you're great at acquiring new users but having trouble converting them, give Revenue a higher weight than Acquisition. Multiplying each feature's score on each driver by that driver's weight and summing the results gives an overall score, which you can rank to determine priority.
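Here is a minimal sketch of that formula, assuming features are scored from 1 to 5 on each driver and that the weights reflect a team that struggles with conversion; all the numbers are made up for illustration:

```python
# Illustrative driver weights: this team acquires users easily but struggles
# to convert them, so Revenue outweighs Acquisition.
weights = {"Acquisition": 1.0, "Activation": 1.5, "Retention": 1.5,
           "Referral": 0.5, "Revenue": 2.0}

# Hypothetical 1-5 scores assigned to each feature during the planning session.
features = {
    "One-click signup":    {"Acquisition": 4, "Activation": 5, "Retention": 2,
                            "Referral": 1, "Revenue": 3},
    "Annual billing plan": {"Acquisition": 1, "Activation": 1, "Retention": 3,
                            "Referral": 1, "Revenue": 5},
}

def priority_score(scores: dict, weights: dict) -> float:
    """Sum of each driver score multiplied by that driver's weight."""
    return sum(scores[driver] * weights[driver] for driver in weights)

# Rank features by overall score to determine priority.
for name in sorted(features, key=lambda f: priority_score(features[f], weights), reverse=True):
    print(f"{priority_score(features[name], weights):6.1f}  {name}")
```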

As long as everyone is committed to the process, nobody has to decide which features should be developed next; the model decides for you. This ensures that you are always building the right features at the right time, based on the key performance indicators of the business.

Growth Hacking Your Roadmap

The Growth Hacking discipline that has taken the startup world by storm is essentially the scientific method applied to product development. The process is as follows:

Develop a hypothesis
Develop an experiment to test the hypothesis
Establish criteria for success or failure
Execute the experiment with predefined cohorts and control groups
Isolate the experiment so that other changes do not confound the results
Measure the results
Confirm or reject the hypothesis

After the roadmap has been prioritized, you should develop a hypothesis, an experiment, and success criteria for each roadmap item. Your releases should be as small as possible so that each experiment stays isolated. Build the experiment into your release by defining cohorts, a control group, and a number of A/B test scenarios. After releasing a new feature, measure over time the effect that feature had on your business drivers.
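One way to attach a hypothesis and success criteria to a roadmap item is a small experiment record like the sketch below; the field names and threshold are assumptions for illustration, not a required format:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """Hypothetical experiment definition attached to a roadmap item."""
    feature: str
    hypothesis: str            # what you expect the feature to change, and why
    target_driver: str         # which business driver it should move
    success_threshold: float   # predefined criterion for success or failure
    cohorts: tuple = ("control", "variant")

experiment = Experiment(
    feature="One-click signup",
    hypothesis="Reducing signup friction will raise activation among new visitors",
    target_driver="Activation",
    success_threshold=0.05,    # e.g. at least +5 percentage points in activation rate
)
```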

For example, if you have a feature that you believe will have a large impact on conversion, you should be able to measure the impact that one feature had on conversion after release. After several weeks or months, you should be able to isolate the users who registered during the same period, split them by cohort, and measure the relative effect on conversion. You should then record the results of that experiment in the roadmap document itself.

A good way to implement this is to create a tagging mechanism in your user creation function. When a new user is created, a simple tag is added to their user record. This is stored in the database and can be queried later for analysis purposes. During each release, you should have a bit of logic in your user creation function that places the user in a certain cohort. The application should then provide that user with a different experience based on their cohort assignments. This is essentially conditional logic that turns features on or off based on the user’s tags. After a period of time, you can run a report to compare the results between cohorts.
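A minimal sketch of that mechanism, assuming a simple user record and a deterministic 50/50 split; the function names, tag format, and experiment name are illustrative, not tied to any particular framework:

```python
import hashlib

EXPERIMENT = "one_click_signup"

def assign_cohort(user_id: str) -> str:
    """Deterministically place a new user in a cohort at creation time."""
    digest = hashlib.sha256(f"{EXPERIMENT}:{user_id}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 else "control"

def create_user(user_id: str) -> dict:
    # The tag is stored on the user record so it can be queried later for analysis.
    return {"id": user_id, "tags": [f"{EXPERIMENT}:{assign_cohort(user_id)}"]}

def signup_flow(user: dict) -> str:
    # Conditional logic that turns the feature on or off based on the user's tags.
    if f"{EXPERIMENT}:variant" in user["tags"]:
        return "render one-click signup"
    return "render standard signup"

def conversion_by_cohort(users: list, converted_ids: set) -> dict:
    """A simple report comparing conversion between cohorts after some time has passed."""
    report = {}
    for cohort in ("control", "variant"):
        members = [u for u in users if f"{EXPERIMENT}:{cohort}" in u["tags"]]
        converted = sum(1 for u in members if u["id"] in converted_ids)
        report[cohort] = converted / len(members) if members else 0.0
    return report
```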

The Grand Retrospective

During your next roadmap planning session, you should begin by reviewing the results of the releases pushed since the last session. If you follow this process you should be able to easily compare the expected impact on your business drivers against the actual impact. If the results match your expectations, you can give yourselves a pat on the back. If the results were not what you expected, you should use this opportunity to reflect on your decision-making process.

If you are consistently wrong about the impact your product ideas have, you are either terrible at strategic planning and should be fired, or, more likely, you are simply not making decisions with a complete set of information. If you put too much weight on the highest-paid person's opinion, skip basic market research, or fail to engage closely with your customers, you are likely to make bad decisions.

Continuous Improvement

This process shines a spotlight on your ability to make good product decisions. If your results do not consistently match your expectations, you need to change how you make those decisions. Analyze the features that did not perform as expected and look for a common pattern, reflect on what may be causing the bad decisions, and be willing to change your process. Chances are you are putting too much emphasis on internal ideas and not gathering enough feedback from your users. That is the subject I address in depth in The Silver Bullet.

The retrospective itself should be structured as well. You should have a set of questions that you ask about every new feature to determine the root cause of the unintended outcome.

From where did the idea originate?
Is this a feature we’ve seen before in other products?
Did we validate this concept with our users before developing it and putting it into production?
Did we develop and execute the experiment properly?
Did we sufficiently control for error?
Did our user feedback explain why this feature performed the way that it did?

These and other questions will help you determine what it is about your process you need to change.

Conclusion

By taking the best parts of Scrum and applying them to the strategic planning process, we are able to make more accurate and more consistent decisions. By committing to a process of reflection and continuous improvement, we ensure that we are building the right product the first time.

This methodology creates a structure around strategic planning in which you put emotions and opinions aside and make decisions based purely on the objective impact on the business. If done well, you will be able to efficiently and effectively assess and prioritize features, measure the results, and continuously improve your process over time.