In the PMHQ Slack community, we regularly get thought-provoking questions that we feel should be explored in-depth and documented for future reference. We’re starting a new set of Q&A posts called Highlights to dive into these kinds of questions, and enable everyone in the community to revisit the answers and contribute further!
“What do you think about creating blended metrics (a series of metrics, rolled-up into one)? How do you generally decide the weight to give to each particular metric in the blended metric?
For example, let’s say we want to improve “User Enjoyment” for a travel booking site. Imagine that we already have the following raw metrics: net promoter score (NPS), look-to-book (percentage of people who make a purchase out of all people who visited), repeat bookings, and referrals.
I’d like to track one metric from this, so creating a blend seems reasonable. I’m thinking about doing the following weights: NPS (20%), look-to-book (30%), repeat bookings (30%), referrals (20%).
So the Blended Metric would be: (NPS * 0.2) + (Look-to-Book % * 0.3) + (Repeat Bookings % * 0.3) + (Referrals % * 0.2) = New Metric
Thoughts?”
– Fred Rivett, Co-Founder at UserCompass.com
This question raised lots of interesting perspectives and insights from our community!
First, our contributors raised three different perspectives on blended metrics. Then, they debated how to implement them, as well as the caveats that come with them.
Below is the summary of their discussion.
Perspective #1: Focus on One Metric at a Time
Barron raised the point that blended metrics don’t tell the whole story.
When using a blended metric, it’s possible for one component to increase while another decreases, leaving the blended metric unchanged. The observer might then incorrectly conclude that nothing had happened.
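To make Barron’s concern concrete, here is a minimal Python sketch using the weights proposed in the original question and made-up illustrative numbers:

```python
# Weights proposed in the original question (illustrative only).
WEIGHTS = {"nps": 0.2, "look_to_book": 0.3, "repeat_bookings": 0.3, "referrals": 0.2}

def blend(components: dict) -> float:
    """Weighted sum of the component metrics."""
    return sum(WEIGHTS[name] * value for name, value in components.items())

# Hypothetical snapshots: repeat bookings fall while look-to-book rises
# by a proportional amount, so the blend does not move at all.
before = {"nps": 40.0, "look_to_book": 10.0, "repeat_bookings": 20.0, "referrals": 5.0}
after = {"nps": 40.0, "look_to_book": 20.0, "repeat_bookings": 10.0, "referrals": 5.0}

print(blend(before), blend(after))  # 18.0 18.0 -- the blend hides the shift
```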
Therefore, Barron suggested having a single core metric as the focus, while still monitoring the others to ensure they don’t fall below some threshold.
He felt that it would be best to focus on one metric at a time, then rotate which metric to actively target for improvement. Over time, one would eventually improve all of the critical metrics.
Perspective #2: Blended Metrics Are Just One Metric Out of Many
Mark raised the point that a blended metric is just one metric out of many. He compared this “Enjoyment” blended metric to the concept of measuring engagement.
Engagement itself consists of multiple actions, and each action should have its own measurement. When measuring engagement, one looks at both the high-level engagement trends and the detailed metrics for each particular action that comprises engagement.
In Mark’s experience, when using blended metrics, it’s rare for only one of the components to move at a time. The reason these metrics are blended together is precisely because they are related to each other.
Speaking to the concern that one might be misled by a blended metric, Mark argued that one should still have the ability to track each of the individual components, and that a blended metric simply enables one to track the long-term health of “Enjoyment”.
In Mark’s view, the tricky part is determining the weight of each component within the blend, since the weighting itself can create unwanted bias.
Mark found blended metrics helpful for sharing with executives, since that way he didn’t need to report on each of the subcomponents as well. At his last position, one of the blended metrics he reported to the executives was a blended Referral Metric made up of 6 subcomponents. Any time Mark saw the blended Referral Metric drop, he looked into the 6 subcomponents more deeply to diagnose the issue.
Perspective #3: Ratio Metrics Can Yield Even More Value
Manu mentioned that instead of using weighted blended metrics, one could compare metrics against each other.
By comparing metrics, he argued, one can get a much better picture.
For example, customer lifetime value (CLTV) should be compared to customer acquisition cost (CAC) in a ratio. Both metrics on their own can be misleading: a high CAC in isolation might raise concerns, but if CLTV is many multiples higher than CAC, then the product is actually doing quite well.
Similarly, a low CLTV is not cause for concern on its own if CAC is dramatically lower as well. On the flip side, even a modest CAC is problematic if CLTV is lower than CAC, because then the product is actively destroying value rather than creating value.
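To illustrate Manu’s point, here is a small Python sketch, with made-up numbers, showing how the ratio reads differently than either metric alone:

```python
def cltv_to_cac_ratio(cltv: float, cac: float) -> float:
    """Return how many dollars of lifetime value each acquisition dollar buys."""
    return cltv / cac

# Hypothetical examples: a "high" CAC can still be healthy, and a "low" CAC
# can still destroy value, depending on the CLTV it buys.
print(cltv_to_cac_ratio(cltv=900.0, cac=300.0))  # 3.0 -- expensive acquisition, but healthy
print(cltv_to_cac_ratio(cltv=40.0, cac=50.0))    # 0.8 -- cheap acquisition, but value-destroying
```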
How to Implement Blended Metrics
When creating a blended metric, one must select the component metrics and decide how to weight them.
Manu argued that selection of component metrics should be tied to underlying assumptions. In this example, we combined the metrics of NPS, look-to-book rate, repeat booking rate, and referral rate to create an Enjoyment metric.
That implies the following assumption: we believe that one can accurately measure enjoyment, and can tie enjoyment to how likely one is to recommend a product (NPS), how likely one is to use the product successfully (look-to-book), how likely one is to use the product again (repeat bookings), and how often one actually refers the product to others (referrals).
Manu also emphasized that one needs to consider the state of the product, and whether the organization is aiming for growth or revenue. Only by considering these factors can one create a useful blended metric.
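Putting the implementation advice together, below is one possible Python sketch of the Enjoyment blend. The component choices and weights come from the original question; the normalization step and ranges are our own illustrative assumptions (NPS lives on a -100 to 100 scale while the other components are 0–100 percentages, so each component is rescaled to 0–1 before the weights are applied):

```python
# Sketch of a blended "Enjoyment" metric for a travel booking site.
# Component choices and weights come from the original question; the
# normalization ranges below are illustrative assumptions.

WEIGHTS = {
    "nps": 0.2,              # net promoter score, -100 to 100
    "look_to_book": 0.3,     # % of visitors who book, 0 to 100
    "repeat_bookings": 0.3,  # % of customers who book again, 0 to 100
    "referrals": 0.2,        # % of customers who refer someone, 0 to 100
}

# (min, max) range assumed for each component, used to rescale to 0-1.
RANGES = {
    "nps": (-100.0, 100.0),
    "look_to_book": (0.0, 100.0),
    "repeat_bookings": (0.0, 100.0),
    "referrals": (0.0, 100.0),
}

def enjoyment(components: dict) -> float:
    """Weighted blend of normalized components, returned on a 0-100 scale."""
    score = 0.0
    for name, weight in WEIGHTS.items():
        lo, hi = RANGES[name]
        normalized = (components[name] - lo) / (hi - lo)
        score += weight * normalized
    return round(score * 100, 1)

# Hypothetical monthly snapshot.
print(enjoyment({"nps": 35.0, "look_to_book": 12.0, "repeat_bookings": 28.0, "referrals": 6.0}))
```

As Mark’s drill-down practice suggests, the individual components should remain visible alongside the blend, so that any drop in the blended score can be traced back to the subcomponent that caused it.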
Caveats
Barron pushed back against blended metrics because he noticed that they take more time to get buy-in from stakeholders. From his experience, he had to continuously explain the new manufactured metric, as well as its underlying assumptions.
Barron felt that a single metric can be a strong proxy. For example, instead of “enjoyment,” one could look at NPS, which is a complex measure that everyone already knows and understands. Similarly, instead of “engagement,” one could look at daily active users (DAU) or monthly active users (MAU), both of which are conventional metrics across industries.
In response, Mark called out that NPS doesn’t directly tie to any conversion rates, which means that it is not directly tied to revenue. Furthermore, Mark highlighted that NPS is a lagging metric, and is not easily tied to specific features or changes.
Mark argued further that DAU doesn’t enable one to understand monetization. A blended engagement metric could include both activity and monetization as subcomponents, and therefore better demonstrate the health of the product.
Barron countered that metrics themselves are artificial measurements that we create, and that there is no magic metric that solves all problems.
His point was that it’s easier to focus on and strengthen one metric at a time. Barron argued that one should identify the most impactful metric, strengthen it, then move on to the next metric.
Summary
In conclusion, product managers should choose metrics that enable them to drive impact. Some prefer single metrics to focus on sequentially, some prefer blended metrics to get a higher-level view, and yet others prefer ratio metrics to better understand the customer lifecycle.
In any case, the message is clear: use the metrics that will enable you to deliver results to the business while satisfying your customer. Derive your metrics from the goals that you have for your product, and get buy-in from your stakeholders.
About Our Contributors
Barron Caster is a Growth PM at Rev.com.
Manu Gupta is a software developer at Genomenon.com.
Fred Rivett is the co-founder of UserCompass.com.
Mark Stephan is SVP Product Management and User Experience at BoomWriter Media.
Have thoughts that you’d like to contribute around blended metrics? Chat with other product managers around the world in our PMHQ Community!
Clement Kao has published 60+ product management best practice articles at Product Manager HQ (PMHQ). Furthermore, he provides product management advice within the PMHQ Slack community, which serves 8,000+ members. Clement also curates the weekly PMHQ newsletter, serving 27,000+ subscribers.