Building a product is complicated. There are many factors that drive the decision of what to build and when. You may get a feature request from an important client. Your customer service department might be flooded with tickets about a feature that doesn’t function intuitively. Your competition might have released some shiny new features and now you have to up the ante. You may want to reach into a new vertical to position yourself for market growth or a future acquisition. You might have to overhaul your architecture to scale with demand. Your CEO might have just returned from a conference and has a brilliant idea about a new opportunity. These and other factors are constantly clamoring for your attention and resources. How do you objectively decide what to build and when?
In my last article, The Grand Retrospective, I laid out a methodology for applying agile principles to the product planning process. Now I want to lay out a specific, step-by-step process you can implement to create a product strategy that ensures you’re working on the right things at the right time.
In my consulting practice, I see too many companies shooting from the hip. They are either too focused on short-term growth and revenue, or they can only think clearly about one problem at a time. This causes a number of problems down the product development pipeline. Since the engineers don’t know what they’ll be working on in the future, they are unable to plan ahead and lay the necessary technical foundation. This results in a lot of rework and retrofitting down the line. The product team itself, focused only on the next release, is unable to weigh priorities effectively. This results in fire drills and reactionary mode when the thing they should’ve been planning for catches them by surprise.
After I guide my clients through this 5-step process, they are able to clearly see the big picture, feel confident that they are working on the right things, convey the product vision to engineering more effectively, and spend more time thinking about “what could be” as opposed to “what just happened”.
Step One: Discovering the Truth
Most companies have, or think they have, a process for product development. What happens too often is the chaos of our business takes over and the actual process falls far from the ideal. This is what I like to call comparing the “process in theory” to the “process in practice”.
As I discussed in my last article, the first step on the path forward is reflection. Forget about your process documents. How are product decisions really made in your company? Chances are the loudest voices, a high-level executive or a key customer, carry an outsized weight in driving the decision-making process. You need to be honest with yourself about whether you’re making critical decisions based on solid information and well-thought-out procedure, or on the personal opinions of your biggest personalities.
The first exercise to undertake is to review all of the features that have been released recently and determine where each idea originated. Take an inventory and try to identify the relative weight of the various sources of feature requests. You should have input coming from:
Customer support
Competitive analysis
Focus groups
Surveys and user feedback
Internal brainstorming sessions
Conferences and industry trends
Growth-hacking experiments
Analytics
And last but not least, Highest-Paid People’s Opinions, or HIPPOs.
The representation of these channels should be fairly balanced, with outsized weight given to the user’s voice, by way of market research, surveys, A/B testing, and analytics. When the HIPPOs yell loudly, it is your job to make sure the voice of the user is even louder. If you are the HIPPO, you need to learn to listen to these other voices.
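To make the inventory exercise concrete, here is a minimal sketch of how you might tally recently released features by the channel their idea came from. The feature names and channel labels are hypothetical; the point is simply to surface the relative weight of each source.

```python
from collections import Counter

# Hypothetical inventory: each released feature tagged with the channel
# its idea originated from (feature names and channels are illustrative).
released_features = [
    ("Bulk export", "HIPPO"),
    ("Dark mode", "Surveys and user feedback"),
    ("SSO login", "Customer support"),
    ("Usage dashboard", "Analytics"),
    ("Referral program", "Growth-hacking experiments"),
    ("New onboarding flow", "HIPPO"),
]

# Count how many shipped features each channel drove.
origin_counts = Counter(channel for _, channel in released_features)

total = sum(origin_counts.values())
for channel, count in origin_counts.most_common():
    print(f"{channel}: {count} ({count / total:.0%})")
```

If one channel dominates the tally, that is your bias staring back at you.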
Once we’ve held up the mirror to ourselves, we should clearly see our gaps and biases. We can now see where we need to improve and begin the task of developing and executing a strategic product roadmap.
Step Two: Defining Strategic Goals
We must begin with the end in mind. You should first engage in a forward-looking strategic planning session to understand where you want to be positioned in the coming years. In this session you’re not defining specific features; you’re only discussing goals.
You’re looking at things like which new markets you want to enter, how large you want to grow, what new platforms or technologies you want to take advantage of, and what synergies you want to establish with a potential acquirer.
I recommend using the Objectives & Key Results (OKR) framework popularized by Google for defining strategic goals. This model not only gives clear direction, it also allows you to evaluate your performance at a strategic level during The Grand Retrospective.
This 1/3/5 year plan will set landmarks that you will drive towards with each product release. Every new feature should be looked at through the lens of achieving these goals. When we evaluate and prioritize our potential feature list, we want to choose the features that drive us towards our highest level goals, and not only which features will slightly tick up growth or revenue. This is the definition of strategy.
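As an illustration of how such goals might be expressed as OKRs, here is a minimal sketch. The objective, key results, and target values are entirely hypothetical; the structure is what matters: a qualitative objective backed by measurable key results you can grade later.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    target: float       # the measurable target
    current: float = 0  # progress to date

    def progress(self) -> float:
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    name: str
    key_results: list = field(default_factory=list)

    def score(self) -> float:
        # Average progress across key results, a simple OKR grading scheme.
        if not self.key_results:
            return 0.0
        return sum(kr.progress() for kr in self.key_results) / len(self.key_results)

# Hypothetical 3-year objective with measurable key results.
expand_upmarket = Objective(
    name="Become a credible enterprise vendor",
    key_results=[
        KeyResult("Enterprise accounts signed", target=25, current=6),
        KeyResult("Revenue from enterprise tier ($M ARR)", target=5.0, current=1.2),
        KeyResult("Compliance milestones completed", target=4, current=1),
    ],
)

print(f"{expand_upmarket.name}: {expand_upmarket.score():.0%} toward target")
```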
Step Three: Idea Capture
All of the sources I listed in Step One help inform which features you could implement to achieve your goals. We want to make sure that we aren’t filtering or drowning out any one of these critical channels: direct user feedback, internal experts in your organization, and industry trends. Assuming you’re starting from scratch, you should capture at least the top 10 ideas for new features or improvements from each of these channels independently. To do this you may have to launch some new programs or adopt some new tools.
There are three main ways of gathering user feedback:
Ask them — Asking them directly captures their intentions and emotional state. Engage your users in focus groups, interviews, and surveys. Try to understand their perspective and pain points when using your product. Collaborate with them on conceptualizing ways to improve.
Observe them — Watching them use your product shows you their actual behavior. Engage them in usability testing, gather user testing videos through services like Usertesting.com, and monitor sessions with tools like AppSee.
Analyze them — Analytics allows you to see patterns in the aggregate. Build event capture and derived attributes directly into your product and analyze them with tools like MixPanel, Looker, or any of the myriad BI and analytics platforms out there, as sketched below.
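For teams building event capture in-house, here is a minimal sketch of what an instrumentation helper might look like. The function name, event name, and properties are hypothetical; in practice the event would be forwarded to your analytics tool or data warehouse rather than printed.

```python
import json
import time
from typing import Any, Dict

def track_event(user_id: str, event_name: str, properties: Dict[str, Any]) -> None:
    """Capture a product event; in practice this would be forwarded to your
    analytics platform (MixPanel, a warehouse behind Looker, etc.)."""
    event = {
        "user_id": user_id,
        "event": event_name,
        "timestamp": time.time(),
        "properties": properties,
    }
    # Stand-in for an SDK call or a write to an event queue.
    print(json.dumps(event))

# Hypothetical derived attribute: tag the event with the user's plan tier
# so funnels can be segmented later.
track_event(
    user_id="user-123",
    event_name="report_exported",
    properties={"format": "csv", "plan_tier": "pro"},
)
```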
I emphatically believe that the user voice should speak loudest. I can’t tell you how many times I’ve heard a CEO or product leader bring up the Henry Ford misquote “if we asked the customers what they want, they would ask for faster horses” (He never actually said that, by the way!). This is a topic at the center of much of my writing and public speaking, so I won’t belabor it here. I’ve covered this topic in my article The Silver Bullet, as well as a recent conference talk, The Tao of Product Development.
Step Four: Creating the Model
Scoring — With a set of strategic goals and a list of possible features in hand, we now play a scoring game. We go through each feature line by line and weigh its significance against each of our key strategic goals. In this process, we debate about how powerfully a feature will drive us towards where we want to go, regardless of who came up with the idea. Every idea is weighed purely on its own merits and relevance to the company mission. We must establish an idea meritocracy.
Aside: Ray Dalio, one of the most successful hedge fund managers in history, wrote about this in his book Principles, which I highly recommend. In his firm, Bridgewater Associates, he created an idea meritocracy, where any member of the organization was expected to call out bad ideas, even ideas that came from Dalio himself. This is the best example I’ve seen of stamping out the HIPPO problem.
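To make the scoring game concrete, here is a minimal sketch of one weighted scoring pass. The goals, goal weights, features, and scores are all hypothetical; the mechanics are simply a weighted sum of each feature’s scores against the strategic goals.

```python
# Hypothetical strategic goals with relative weights (higher = more important).
goal_weights = {
    "enter_enterprise_market": 3,
    "grow_active_users": 2,
    "reduce_churn": 2,
}

# Each candidate feature is scored 0-5 against each goal, regardless of
# who proposed it. Features and scores are illustrative.
feature_scores = {
    "SSO and audit logs":   {"enter_enterprise_market": 5, "grow_active_users": 1, "reduce_churn": 2},
    "Mobile onboarding":    {"enter_enterprise_market": 1, "grow_active_users": 4, "reduce_churn": 3},
    "In-app referral flow": {"enter_enterprise_market": 0, "grow_active_users": 5, "reduce_churn": 1},
}

def strategic_score(scores: dict) -> int:
    """Weighted sum of a feature's scores against the strategic goals."""
    return sum(goal_weights[goal] * score for goal, score in scores.items())

for feature, scores in sorted(feature_scores.items(),
                              key=lambda kv: strategic_score(kv[1]),
                              reverse=True):
    print(f"{feature}: {strategic_score(scores)}")
```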
Pirate Metrics — In my last article, as well as my recent conference talk, I talked about using Pirate Metrics as a methodology for evaluating features. The Pirate Metrics funnel is excellent at evaluating a feature’s impact on potential growth and revenue. While this is also critical, it is generic with respect to strategy. To add another level of sophistication, you may want to combine these two models to ensure that you are not only driving towards your strategic vision, but also giving relative priority to features that maximize growth and revenue along the way. An alternative to Pirate Metrics is the ICE model (Impact, Confidence, Ease). ICE is more generic, but may apply to more varied industries.
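Continuing the sketch above, here is one hypothetical way an ICE score, or a Pirate Metrics (AARRR) impact score, might sit alongside the strategic score. The stage weights, the sample scores, and the 60/40 blend are illustrative assumptions, not a prescribed formula.

```python
# ICE: Impact, Confidence, Ease, each scored 1-10; the ICE score here is their
# product (a simple average is another common variant).
def ice_score(impact: int, confidence: int, ease: int) -> int:
    return impact * confidence * ease

# Pirate Metrics (AARRR): score a feature's expected effect on each funnel stage.
aarrr_weights = {"acquisition": 1, "activation": 1, "retention": 2, "revenue": 2, "referral": 1}

def aarrr_score(stage_scores: dict) -> int:
    return sum(aarrr_weights[stage] * score for stage, score in stage_scores.items())

# Hypothetical combined priority: strategic fit blended with growth/revenue impact.
def combined_score(strategic: int, growth: int, strategic_weight: float = 0.6) -> float:
    return strategic_weight * strategic + (1 - strategic_weight) * growth

print(ice_score(impact=7, confidence=5, ease=8))
print(aarrr_score({"acquisition": 2, "activation": 4, "retention": 5, "revenue": 3, "referral": 1}))
print(combined_score(strategic=21, growth=23))
```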
Define Success — One critical aspect of this process is defining precisely what success looks like. The expected results of the feature should tie to the OKR of the strategic goal. This is not about what the feature is going to do, but about the impact it will have on the business. In our Pirate Metrics funnel, if we rate a feature high on a given business driver, we should be able to measure the outcome after it’s implemented. This allows us to create the feedback loop of reflection and continuous improvement that I discussed in my last article.
Engage Engineering — At this point you should have a clean list of possible features from all relevant sources and stakeholders, each evaluated and weighted against the strategic goals of the business and its practical growth- and revenue-driving potential. The critical remaining pieces are feasibility and level of effort. This is where you need to bring in your technical experts.
A problem that many of my clients face is that the product team develops its roadmap in a vacuum without consulting engineering, and when it finally presents the results, the engineering team blows them up. You want your technology leaders to have a seat at the table and weigh in on the feasibility and complexity of each of your high-priority features. The earlier and more closely you engage your technology leaders, the better.
It may be that the technology is not available or mature enough to accomplish what you want to do. It may be that your architecture doesn’t support it and there are a number of maintenance or technical debt releases required to lay the foundation. These technical aspects should be included in the scope of each feature and accounted for in a Feasibility and Effort score. At this stage, it is sufficient to use a T-shirt size (S/M/L/XL) scale for level of effort, but each size should roughly equate to some unit of time, be it a sprint, a month, a quarter, etc. In our model, the terms for Technical Feasibility and Level of Effort are subtracted from the weighted sum.
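Here is a minimal sketch of how that subtraction might work in practice. The T-shirt-to-sprint mapping, the feasibility risk scale, and the weights are hypothetical knobs you would tune for your own model.

```python
# Hypothetical mapping from T-shirt size to a rough unit of time (here, sprints).
effort_sprints = {"S": 1, "M": 2, "L": 4, "XL": 8}

def priority_score(strategic_growth_score: float,
                   feasibility_risk: float,
                   tshirt_size: str,
                   effort_weight: float = 1.0,
                   risk_weight: float = 1.0) -> float:
    """Subtract feasibility risk and level of effort from the weighted sum.
    All weights here are illustrative knobs, not fixed values."""
    effort = effort_sprints[tshirt_size]
    return strategic_growth_score - risk_weight * feasibility_risk - effort_weight * effort

# A feature that scores well strategically but is an XL with real technical risk
# can land below a smaller, safer feature.
print(priority_score(strategic_growth_score=28, feasibility_risk=5, tshirt_size="XL"))  # 15.0
print(priority_score(strategic_growth_score=22, feasibility_risk=1, tshirt_size="S"))   # 20.0
```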
Step Five: Mapping the Road
We have now built a sophisticated model for capturing, evaluating, prioritizing, and sizing every potential feature in your roadmap. We can now calculate a weighted sum to determine the overall priority of the backlog, and the order in which to implement each feature.
You can now create a timeline of releases that shows precisely how you get from the current state to the future vision defined in your strategic plan. You will be able to clearly see how likely you are to accomplish those goals in the necessary time frame.
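As a final illustration, here is a minimal sketch of turning a prioritized backlog into a release timeline. The backlog entries, scores, effort estimates, and the per-release capacity are all hypothetical.

```python
# Hypothetical prioritized backlog: (feature, priority score, effort in sprints).
backlog = [
    ("SSO and audit logs", 19.0, 4),
    ("Mobile onboarding", 17.5, 2),
    ("In-app referral flow", 12.0, 1),
    ("Reporting API", 9.5, 4),
]

sprints_per_release = 6  # illustrative capacity per release

# Walk the backlog in priority order and bucket features into releases
# that fit the available capacity.
releases, current, used = [], [], 0
for feature, score, effort in sorted(backlog, key=lambda f: f[1], reverse=True):
    if used + effort > sprints_per_release and current:
        releases.append(current)
        current, used = [], 0
    current.append(feature)
    used += effort
if current:
    releases.append(current)

for i, release in enumerate(releases, start=1):
    print(f"Release {i}: {', '.join(release)}")
```

Laying the releases end to end against your 1/3/5 year landmarks shows at a glance whether the plan is achievable in the time you have.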
At this point, my clients typically comment on just how much work there is to do, and how little time there is for nonsense. This is the moment of realization that I strive for, and what I remind them of whenever something threatens to derail us from our strategic vision.
Conclusion
By starting with reflection, we make ourselves aware of our strengths and weaknesses, and ensure we are capturing ideas from the most critical sources. By engaging in strategic planning, we give ourselves goals and landmarks to drive towards. By objectively evaluating every potential feature against its impact on strategic goals, as well as growth and revenue, we create a meritocracy of ideas. By engaging technology leads and experts early in the process, we ensure we are working on attainable goals, and understand how long it will take to accomplish them. By following this process, you ensure that you’re always working on the right thing at the right time.