Dan Rubinstein has a unique vantage point on the evolution of the product management craft. He joined Google as a PM in 2005. His initial role was to put in place processes and systems to help other product managers understand the efficacy of what they were building.
In 2005, Google Analytics didn’t exist yet. Running A/B tests was laborious and manual. Dan’s team had to shape product analytics virtually from scratch.
After (1) in-depth conversations with Dan about his experience as a product leader for Google, Facebook, and Palantir, and (2) countless chats with PMs through the course of building DoubleLoop, I’ve strengthened my view that an important puzzle exists:
While many of the analytics capabilities that Google built back in the 2000s are now available to all product developers, there are vital elements of how Google (and other top companies) use these tools that most of today’s companies still don’t apply. Consequently, even with a deep well of analytics at their disposal, most companies struggle to build valuable products.
Google had two key insights that are still poorly applied today:
Principle 1. The value of product changes can be quantified and tied to revenue.
I’m shocked by how many product folks insist that the impact of product development cannot be measured. Strong-willed founders and executives tend to push through their UX intuitions with little interest in supporting data. Google realized that it was indeed possible to quantify the impact of product changes, both in terms of revenue contribution and in terms of product experience and user engagement.
Principle 2. Short-term versus long-term trade-offs must be examined relentlessly.
If Google optimized just for short-term revenue, they would have added blinking ads across their home page like an over-monetized news site. Rubinstein admired how the founding team at Google was passionate about user experience. They prioritized having a high-quality user experience for the long term over lucrative, but inevitably ephemeral, product changes.
Today, many companies sleepwalk through trade-off decisions, unaware of, or indifferent to, the tension between short-term optimizations and long-term strategic bets. Product teams get lost in a number of ways:
- They neglect data feedback loops altogether. The team builds purely based on a strong product vision or intuition of a good UX. They miss opportunities to test their assumptions or monetize the value they created.
- They A/B test themselves into the corner of a local maximum. Based on business pressures or lack of imagination, the team over-invests in product changes that correlate directly to short-term gains. They lose sight of the impact on the long-term business trajectory.
- They religiously track KPIs and other lagging indicators pertaining to revenue and UX goals, but they don’t correlate metric changes with product changes. Consequently, the team struggles to understand the impact of what it built.
Very few teams, from what Dan and I have observed, synthesize quantitative and qualitative measures to intentionally balance short-term optimization with long-term strategic bets. (Note: for an interesting analysis of “nested bets,” see John Cutler’s Beyond “Outcomes Over Outputs”.)
Believing in Google’s principles of product development is easy. Executing them during the chaos of company building is hard.
While Google broke new ground with analytics infrastructure, their process of examining product changes was equally vital to their success. The key to this process, Rubinstein explained, is executive attention. Google had regular meetings to review the full impact of product changes. Everyone looked forward to these meetings because decisions were made. It was exciting.
To provide a simple example, imagine a product change that increases clicks on ads but decreases clicks on organic search results. Should this change be shipped?
The short-term revenue implications are clear, but the long-term UX implications are not. To understand trade-offs, a variety of quantitative and qualitative indicators must be considered. It requires input from all the people who work on the product across a diverse set of functions spanning UX, product, engineering, sales, marketing, and finance.
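To make the trade-off concrete, here’s a minimal sketch of how such a review might summarize the numbers. Everything in it is hypothetical: the metric names, the deltas, and the guardrail threshold are invented for illustration, not drawn from Google’s actual review process.

```python
# Hypothetical sketch: summarizing an experiment that trades short-term
# revenue (ad clicks) against a long-term UX guardrail (organic clicks).
# All names and thresholds below are illustrative assumptions.

def summarize_tradeoff(deltas, guardrails):
    """Flag any guardrail metric whose relative change falls below its
    allowed floor, even if the primary revenue metric improved."""
    breaches = [
        name for name, floor in guardrails.items()
        if deltas.get(name, 0.0) < floor
    ]
    return {
        "revenue_delta": deltas.get("ad_clicks", 0.0),
        "guardrail_breaches": breaches,
        # A breach doesn't auto-block the launch; it routes the decision
        # to the cross-functional review described above.
        "ship_recommendation": "review with humans" if breaches else "likely safe",
    }

# The example from the text: ad clicks up 3%, organic clicks down 2%.
deltas = {"ad_clicks": 0.03, "organic_clicks": -0.02}
guardrails = {"organic_clicks": -0.01}  # tolerate at most a 1% drop

print(summarize_tradeoff(deltas, guardrails))
```

The point of the sketch is the last line of the function: the metrics surface the tension, but a breached guardrail hands the call to people, not to a threshold.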
Rubinstein’s team recognized that A/B testing, while a vital method, is not by itself sufficient for building a valuable product. There are a few reasons why:
- Product changes do not happen in isolation.
- A/B tests cannibalize and overlap each other.
- Metrics can be impacted outside of the scope of the A/B test.
Consequently, people must make judgment calls regarding trade-offs. They must constantly be on the watch for blind spots in their business awareness. If you neglect to incorporate a key UX consideration into your analysis, you could alienate users and run your company into the ground before you even realize it. As Rubinstein puts it, “if it were purely metrics, then you just let the robot run the company.”
It’s been over ten years since Dan Rubinstein helped build Google’s product analytics capability from the ground up. Dan and I discussed the state of product management tools today. While the analytics available to product managers have exploded since Rubinstein worked at Google, he described a tools gap that still surprises him:
> I haven’t seen tools that ... integrate the analytics with the underlying root causes that are exogenous to the analytics. So not just what happened, but why did it happen and what can we do about it? … It would be a huge enabler for companies to do the right things, to build the right products.
Without tools like what Dan envisions, product teams are limited in their ability to apply Google’s product development process. He sees several problems:
- It’s hard to see the impact of product initiatives on long-term strategic bets. A/B testing helps you make short-term optimizations, but it doesn’t provide visibility into how your short-term bets compound to impact lagging UX and business KPIs.
- It takes a long time for product managers to learn how to build valuable products. When they join a new company, PMs need to accumulate a lot of domain knowledge about how to assess the efficacy of their product initiatives. If the key metrics and drivers were readily available, it would be an “equalizer of talent.”
- A/B tests can be confounded by product changes that happen externally to the test. For example, maybe there was a change that sped up the app or a bug that altered system behavior. Without an easy-to-access record of product changes and potential drivers, it’s hard to solve the mysteries of why metrics are moving one way or the other.
- Companies lack institutional memory of what moves the needle and what doesn’t. Product teams are unable to easily see the results of previous product bets to decide what types of future product bets to place.
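The confounding problem above can be illustrated with a toy simulation. The numbers are invented for illustration (a 2% true feature lift, a 5% site-wide speedup landing during the test window): a naive before/after reading credits the feature with the speedup’s lift, while a comparison against a concurrent control group does not.

```python
# Toy simulation (illustrative numbers only): why a record of external
# product changes matters when reading metric movement.
import random

random.seed(42)

def daily_metric(base, effect=0.0, speedup=0.0):
    # A conversion-like daily rate with noise; the speedup lifts everyone.
    return base * (1 + effect + speedup) + random.gauss(0, 0.001)

BASE = 0.10          # baseline rate before the test
TRUE_EFFECT = 0.02   # the feature's real +2% relative lift
SPEEDUP = 0.05       # external change: +5% lift for all users during the test

pre = [daily_metric(BASE) for _ in range(7)]                        # week before
treat = [daily_metric(BASE, TRUE_EFFECT, SPEEDUP) for _ in range(7)]  # test week
control = [daily_metric(BASE, 0.0, SPEEDUP) for _ in range(7)]        # test week

avg = lambda xs: sum(xs) / len(xs)
naive_lift = avg(treat) / avg(pre) - 1          # before/after: inflated by the speedup
controlled_lift = avg(treat) / avg(control) - 1  # concurrent control: close to +2%

print(f"naive before/after lift: {naive_lift:+.1%}")
print(f"lift vs concurrent control: {controlled_lift:+.1%}")
```

A concurrent control absorbs the site-wide speedup; the before/after comparison does not. Without a record of what else shipped that week, the inflated number is the one most teams would report.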
Dan doesn’t foresee a world where product management tools tell you what product changes to ship or not ship. But he sees a ripe opportunity to improve how product builders make decisions that are right for their business.
Given his extensive experience as a product leader and his unique role at Google, we were thrilled when Dan Rubinstein decided to invest in DoubleLoop. He invested, he explained, because DoubleLoop productizes the process for understanding the revenue impact of product changes.
We have a ways to go to realize the potential Dan Rubinstein sees in DoubleLoop. To help us get there, we’d love to thought-partner with like-minded product teams who share the vision.