🏴☠️ ⚡️ Issue #16 - OKR Tear Down: Decision-making framework for strategic investments
Welcome! This newsletter is dedicated to acquiring and operating Micro SaaS firms. Join us every Saturday morning for Deal Tear Downs, Operating Concepts, and more!
We’re back with Issue #16!
Here’s what to expect this month:
Part 1 — 🎯 DEAL TEAR DOWN - Workflow Management SaaS for Print-on-Demand Facilities at $437 ARR
Part 2 — ⚙️ OPERATING CONCEPT - Mapping the buyer journey to inform content gaps and beyond
Part 3 — 🛠️ OKR TEAR DOWN - Decision-making framework for strategic investments
Part 4 — 🏆 PORTFOLIO PERFORMANCE WRAP - Update on Sprint #4
🛠️ OKR TEAR DOWN
As a point of departure, I’ve included the full OKR below to set the foundation for a teardown of our approach thus far…
OBJECTIVE: Budget and prioritization of strategic investments for transition to the ‘Grow Phase’
KEY RESULTS:
Successfully pivot retainers to project-based
Establish a decision-making framework for strategic spend items
A defined list of strategic spend items (informed via framework) and working capital forecast
As a bit of additional context: leading up to this point, we worked from a defined budget dedicated to Fix and Improve activities. We consider these activities table stakes for establishing a proper foundation for the business. Since that bundle of activities and budgets was based on playbooks and experience, there was less need for a decision-making framework to inform priorities. With that, let’s get into it…
ESTABLISH A DECISION-MAKING FRAMEWORK FOR STRATEGIC INVESTMENTS
Budgets introduce scarcity, which creates tradeoffs in your decision making. As an example, Jane only has $100 a month to spend on entertainment. She can either go to the theme park or see a concert, but she can’t afford to do both. She is now forced to make a decision. What brings her more joy? Which is better suited for a date versus a hang with friends? Before setting a budget and really thinking through decisions based on what she values, she would have done both and run out of money…
Running out of money is not an option. As we transition to growing the business, we must introduce scarcity and establish an objective framework for comparing options.
There are a WIDE range of prioritization frameworks, which are mainly applicable to product roadmaps and feature releases. Here are a few examples:
The Kano Model (Source)
The Kano model plots features along two axes. On the horizontal axis, you have the implementation values (to what degree a customer need is met); on the vertical axis, customer satisfaction. Features can be classified into three buckets:
Must-haves or basic features: If you don’t have these features, your customers won’t even consider your product as a solution to their problem.
Performance features: The more you invest in these, the higher the level of customer satisfaction will be.
Delighters or excitement features: These features are pleasant surprises that the customers don’t expect, but that once provided, create a delighted response.
The MoSCoW Method (Source)
The MoSCoW method allows you to figure out what matters the most to your stakeholders and customers by classifying features into four priority buckets. MoSCoW (no relation to the city—the Os were added to make the acronym more memorable) stands for Must-Have, Should-Have, Could-Have, and Won’t-Have features.
Must-Have: These are the features that have to be present for the product to be functional at all. They’re non-negotiable and essential. If one of these requirements or features isn’t present, the product cannot be launched, thus making it the most time-sensitive of all the buckets.
Example: “Users MUST log in to access their account”
Should-Have: These requirements are important to deliver, but they’re not time sensitive.
Example: “Users SHOULD have an option to reset their password”
Could-Have: This is a feature that’s neither essential nor important to deliver within a timeframe. They’re bonuses that would greatly improve customer satisfaction, but don’t have a great impact if they’re left out.
Example: “Users COULD save their work directly to the cloud from our app”
Won’t-Have: These are the least critical features, tasks or requirements (and the first to go when there are resource constraints). These are features that will be considered for future releases.
The MoSCoW model is dynamic and allows room for evolving priorities. So a feature that was considered a “Won’t-Have” can one day become a must-have depending on the type of product.
The above are thoughtful and have stood the test of time, though they are highly specialized to product decisions and fall short when informing decisions in the broader context of operating a business. A few examples:
Should we invest in creative / digital assets for our ambassadors or feature XYZ?
Should we buy ads or implement a chatbot for support?
As we studied the wide world of these frameworks, we fine-tuned a view on our use cases, and thus the requirements.
THE FRAMEWORK WE CHOSE
The most modern and widely used prioritization frameworks generally involve a tradeoff between impact and effort. We landed on RICE (Reach, Impact, Confidence, Effort), which extends well beyond product decisions. In terms of applicable business activities, here are a few examples:
Paid acquisition
Growth hypothesis tests
Product roadmaps
Talent / Hiring
Let’s further define the respective components of the RICE framework:
REACH — How many users will this activity / initiative touch in a given time period? This is typically represented as a percentage of all users and a timeline based on when the activity will be in the wild. You might also consider the number of customer segments it will touch, etc.
IMPACT — How much will this impact our explicit business objectives? This is a bit trickier as goals vary by domain / function, though ideally, impact is a function of your north star metric. For instance, if retention is the #1 priority, try to quantify impact based on improvement to retention rates (perhaps substantiated by solving most common churn reasons…).
CONFIDENCE — How confident are you in the estimations of Reach and Impact? “Without data, you’re just another person with an opinion.” Ideally, this is based on product usage analytics, customer surveys, models rooted in historical base rates, and so on.
EFFORT — How much time will this require across participants? This is an hourly sum, where you associate respective costs to arrive at an all-in investment range. As a best practice, I’d suggest adding 20% to any hourly estimate provided by the team 😑 …
ACHIEVING A COMPOSITE SCORE
The inputs you use to measure the dimensions defined above will vary. Perhaps some are percentages (%), while others are hours (#) and some are money ($). To achieve a composite score, we need to normalize how we score the respective items. From there we can combine the scores for each item to achieve an all-in score. The holy grail is a table you can sort by ‘all-in’ score and away you go.
We’re big fans of scoring items on a scale of 1 (not great / low) to 4 (awesome / high). We avoid ranges like 1 to 5, as you’ll end up with a bunch of 3’s, and that’s not super helpful...
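One lightweight way to normalize mixed units (%, hours, $) onto the 1-to-4 scale is bucketing raw values against thresholds. This Python sketch uses made-up cut-offs purely for illustration:

```python
# Map a raw value to a 1-4 score via bucket thresholds (hypothetical cut-offs).
def bucket_score(value: float, thresholds: tuple) -> int:
    """thresholds = (low, mid, high) boundaries; returns a 1..4 score."""
    low, mid, high = thresholds
    if value < low:
        return 1
    if value < mid:
        return 2
    if value < high:
        return 3
    return 4

# e.g. reach as a share of users, effort as padded hours
print(bucket_score(0.85, (0.25, 0.50, 0.75)))  # reach of 85% → 4
print(bucket_score(45, (20, 40, 80)))          # 45 hours → 3
```

The same function works for any dimension; only the thresholds change, which keeps the scoring rules explicit and easy to revisit.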
Here’s an example to bring this to life:
Activity: Email Template Module - Refresh UI / UX
REACH — 4 / 4
All customer segments
This quarter
IMPACT — 4 / 4
(Retention = north star metric)
60% of churned users last quarter left due to frustration with creating and sending template emails. Workflow automation was the next most common churn reason at only 5%.
Solving for this churn cause would have preserved $2k in MRR
CONFIDENCE — 4 / 4
(mostly a function of how data-driven the above scores are…)
EFFORT — 4 / 4
Requires Design, Dev, QA and Customer Success for a total of 60hrs
Total cost of respective labor against hourly estimates is $12k
(Keep in mind the score here is VERY relative to other all-in costs, or the overarching budget…)
You might be saying to yourself, ‘what a coincidence, 4 out of 4’s across the board in the example. Not very realistic.’ The catch here is how the respective items are factored into a composite score. The most common approach includes effort as a denominator. Said another way, effort is punitive.
Following the above, here’s where we land:
Activity: Email Template Module - Refresh UI / UX
Composite Score: 3
(4 + 4 + 4) / 4 = 3
As with any fraction, the denominator has a HUGE impact, so you start to see that effort is critical. Said another way, pace and quality of execution (aka effort) is worth a lot more than reach, impact or confidence. As such, make sure you really vet estimates of time, and as we all know, dope talent makes the world go ‘round.
To drive the point home, let’s say this activity scored a 2 on effort, the composite score doubles to 6.
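Here’s the arithmetic as a short Python sketch, including the sortable ‘holy grail’ table. The comparison activities and their scores are hypothetical, and the formula sums Reach, Impact, and Confidence over Effort to match the 3 → 6 example above (note that some RICE write-ups multiply the numerator instead):

```python
# Composite RICE score on the 1-4 scale: effort is the denominator,
# so high effort is punitive.
def composite(reach: int, impact: int, confidence: int, effort: int) -> float:
    return (reach + impact + confidence) / effort

# Scores per activity as (reach, impact, confidence, effort); the first row
# mirrors the worked example, the others are hypothetical comparisons.
activities = {
    "Email Template Module - Refresh UI / UX": (4, 4, 4, 4),
    "Workflow automation tweaks": (2, 2, 3, 3),
    "Chatbot for support": (3, 2, 2, 4),
}

# The 'holy grail': a table sorted by all-in score, highest first.
ranked = sorted(activities.items(),
                key=lambda kv: composite(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{composite(*scores):.2f}  {name}")

# Halving effort doubles the score: (4+4+4)/4 = 3.0 vs (4+4+4)/2 = 6.0
```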
To wrap up, budgets and the scarcity they introduce are a critical component of resource optimization. In the face of something finite, you need an objective way to easily compare and pursue options. We found the RICE model to be the most compelling and suitable to our context, though this exercise in itself is a very useful way to think about your long-term goals, immediate priorities, and what you value in sound decision making. I can’t recommend it enough.
Go get ‘em!