Making decisions via experimentation | a combination of data & gut
How to keep up with rapid change in post-startup but sub-scale SaaS, where data is sparse but growing and relying on "gut" alone stops working.
Do you remember the days of driving with printed maps? How did anyone ever get to where they needed to be on time? As a parent of 3 boys under 5, I imagine family life with young kids back then was confined to a 5-mile radius of your home. It takes a Nobel Prize-worthy effort to get all the kids in the car, plus packing, plus shoes, plus water bottles, plus snacks (the list goes on). The silver lining for me is turning on Google Maps, typing in my destination, and putting my trust in the blue line. I feel blessed knowing I missed the era when navigation was exponentially more chaotic with paper maps.
It’s a silly analogy, but any executive or founder knows this feeling. In early stage B2B SaaS, the focus is on building the product and getting early customers. There isn’t much meaningful data to build insights upon. That’s because the product, the way customers use the product, and the go-to-market motions are changing so frequently that using data trends to make forward-looking decisions is largely meaningless given the false precision it implies. In addition, every minute spent on instrumenting and analyzing is a minute not spent with customers getting traction in the market. After all, a GPS does no good if there’s no car and no roads to begin with.
The other factor is that there’s plenty of empirical information in a sub-$5M ARR environment that a CEO or founder can hold in their head. That information drives intuition, which in turn informs decisions (their instincts). Because there’s organically a wide purview across the relatively small business, instinct can be right in many cases. To be clear, I don’t mean to oversimplify or overlook the pressure of early decisions; I’m merely speaking to the advantage of leveraging instinct over data at this stage.
As businesses grow, there are two dynamics that warrant a different way of operating:
Data volume - there’s just more stuff. More product usage on consistent use cases, more customer meetings, more team members, more pipeline - more information than any single human can hold in their head. That means a higher probability that “gut”-driven decisions will be wrong.
Frequency of change - for the data nerds out there, the same dataset with different business assumptions can lead to wildly different insights. Imagine following the same sequence of turns, but changing the starting address or swapping the unit of measure from miles to kilometers. You would arrive at a totally different location. As businesses scale, changes to the product and the business typically become less frequent. Therefore, the lifespan over which data holds insights meaningful enough to measure and drive future decisions is longer.
That said, it’s unrealistic to expect that a sub-$30M company has enough constants in the business to infer 100% of its decisions from data alone. I’ve seen this mistake when previously successful executives from $100M ARR companies try their hand at a SaaS company going through its awkward teenage years, and they struggle because the world is changing at a much more rapid pace than they’re used to. The time that data has to “set in” before the next change is shorter than the executive is comfortable with for taking on informed risk. The business will always lag behind. In the example below, if the time needed to make decisions with “good” data is just 50% slower, the business would have changed 4 times in the time that 2 decisions were made.
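To make that math concrete, here’s a minimal sketch of the lag effect. All intervals are assumptions, reading “50% slower” as the decision cycle running at half the pace of business change:

```python
# Hypothetical illustration of decision lag (all numbers assumed):
# the business shifts every quarter, but a "good data" decision
# cycle runs at half that pace.
change_interval = 1.0   # business changes once per quarter (assumption)
decision_rate = 0.5     # decisions land at half the rate of change
decision_interval = change_interval / decision_rate  # 2 quarters per decision

decisions_made = 2
elapsed = decisions_made * decision_interval          # 4 quarters
business_changes = int(elapsed / change_interval)     # 4 changes

print(f"{business_changes} business changes while only "
      f"{decisions_made} decisions were made")
```

However you tune the assumed intervals, as long as the decision cycle is slower than the change cycle, the gap compounds and analysis keeps describing a business that no longer exists.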
My roots are in consulting, where this is the classic way to approach problems: understand and analyze the current state, develop alternative future-state solutions, and decide what to go with. The pitfall is that this approach relies on steady current-state data and on business assumptions that don’t meaningfully change between when the analysis starts and when the decision is implemented. Don’t get me wrong, this approach is very relevant if there’s reliable data and decisions can be made with very high confidence. For example, in SaaS the recurring revenue model should yield a consistent pattern of historical invoicing and payments. Subsets of the product may also have reliable data - for example, a tried-and-true foundational use case that customers always start with. The caution here is not to abandon this approach but to know when you need another way.
The GPS - a blend of data & gut
Ok, the efficacy of purely instinct-driven decisions is dwindling, but there’s not enough data to harvest meaningful, repeatable insights yet. Where do you go from here? If you take the above too seriously, the trap is to do nothing. It’s not worth putting in the effort because you’ll always be behind. As a CEO or founder, there may also be a fear of losing control of decisions.
There are a few key principles to center on before talking about the path forward.
You can’t improve what you can’t measure. This isn’t about perfect measurement but having a starting rubric - the same way the steam engine evolved with the definition of horsepower.
You have to demonstrate learnings & progress. Board members and investors are looking for demonstrable evidence in moving Metrics That Matter forward.
You can’t rely on data alone. The business and product are still changing too fast to over-rotate on data-driven insights alone.
At the center is building a culture of experimentation and balancing “gut” (your instincts) with data to drive informed, calculated risks in your business. This is the GPS you’re building, and it’s a hard one to teach - most people like operating either in a world that relies on “gut feel” or in one that relies heavily on demonstrable evidence and analytics. The reality in growth stage B2B SaaS is that both worlds exist together.
Set the expectation with your team. This is the world we live in, and in order to win, we must build conviction on our decisions between the data we harvest and the instincts we believe to be true (sometimes despite the data). If you’re an analytics junkie, surround yourself with teammates who thrive without data. If you make decisions based on gut 9 times out of 10, pair yourself with an analytical mind to check your perspective with evidence. This factors into day-to-day operations as well as into whom you hire.
Get centered on North Star product value. Ultimately you’re in business to solve your customers’ problems, so be clear on the key value propositions your products deliver and the measurable outcomes they’re intended to drive. This becomes the anchor for testing product capabilities as well as pricing & packaging strategies.
Align on business Metrics That Matter. What outcomes are you driving for the business, and what inputs affect those outcomes? Standard management accounting metrics are ARR, GRR, NRR, Gross Margin, CAC, etc. However, some move the needle more than others based on the type of business. For higher volume, low ACV businesses, it may be more about low CAC, high-volume pipeline, and maximizing GRR. For lower volume, high ACV businesses, it may be more about NRR to land & expand within an account.
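As a reference point, the standard retention definitions behind GRR and NRR can be sketched in a few lines. The dollar figures here are purely illustrative:

```python
# Standard gross and net revenue retention definitions
# (all dollar amounts are illustrative assumptions).
starting_arr = 1_000_000
churned = 80_000      # ARR lost to departed customers
downgraded = 40_000   # ARR lost to contractions
expansion = 200_000   # ARR gained from upsells within the existing base

# GRR ignores expansion: it can never exceed 100%.
grr = (starting_arr - churned - downgraded) / starting_arr              # 0.88
# NRR nets expansion against losses: above 100% means the base grows on its own.
nrr = (starting_arr - churned - downgraded + expansion) / starting_arr  # 1.08
```

The point of writing them down isn’t the formulas themselves - it’s agreeing as a team on exactly which dollars count in each bucket before you start measuring.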
Once that foundation is set, then apply a proactive, experimental approach to driving insights and decisions in the business.
Instead of analyzing historical patterns in data, come up with a hypothesis on what needs to be true in order to drive positive product or business outcomes.
Instead of theorizing various future-state options, structure tests and identify the minimum amount of data you’d need to double down on an experiment - that is, get actual empirical evidence.
Instead of positioning a decision based on falsely precise analysis, prove or disprove the hypothesis and iterate.
Example
Let’s say you’re focused on new ARR bookings. The inputs are [leads] x [ACV] x [conversion rate]. Then factor in [sales cycle time] for pipeline. Rarely is everything humming all at once. Let’s say leads are flowing fine but conversion rate is below benchmarks and pricing hasn’t been touched in a while.
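That input model is just multiplication, which makes the conversion-rate gap easy to size. A quick sketch, where all the input numbers are assumptions:

```python
# New ARR bookings = leads x conversion rate x ACV
# (all figures below are assumed for illustration).
leads = 400
conversion_rate = 0.06   # below benchmark
benchmark_rate = 0.10    # what similar businesses convert at
acv = 50_000

new_bookings = leads * conversion_rate * acv       # $1.2M
benchmark_bookings = leads * benchmark_rate * acv  # $2.0M at benchmark
gap = benchmark_bookings - new_bookings            # $800k left on the table
```

Sizing the gap this way tells you which lever is worth experimenting on first, even before the underlying data is trustworthy.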
Traditionally, one might approach it by analyzing the last year’s worth of closed won and closed lost opportunities, identify where in the funnel opportunities are being lost or stalling, then examine the reasons why they stalled, then come up with a list of alternatives to implement. Here’s where it falters:
The sales process may have changed 6 months ago, meaning marketing and sales stage definitions aren’t truly comparable across a full year
You may have lost a top seller and despite the trends, in smaller scale B2B SaaS, one high performing seller could sway the analysis meaningfully
Maybe you adjust the time frame to just the last quarter, but then the sample size of opportunities is too small to draw statistically significant conclusions
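On that last point, a quick Wilson confidence interval shows just how wide the uncertainty is on a quarter-sized sample. The opportunity counts are assumptions:

```python
import math

# Wilson 95% confidence interval for a conversion rate, to show how
# little a single quarter's sample pins down. Assumed numbers:
# 20 opportunities, 2 closed won.
n, won = 20, 2
p = won / n   # observed 10% conversion
z = 1.96      # 95% confidence

denom = 1 + z**2 / n
center = (p + z**2 / (2 * n)) / denom
half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
lo, hi = center - half, center + half

print(f"true conversion could be anywhere from {lo:.0%} to {hi:.0%}")
```

With 20 opportunities, the interval spans roughly 3% to 30% - the data genuinely can’t tell you whether you’re badly below benchmark or well above it.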
Given this variable, unreliable data, the approach would be this: still go through the analysis, but don’t bet your bottom dollar on the outcome - some data is still better than no data. Then enrich it. Talk to sellers, watch sales demo recordings, do some secret shopping, conduct quick third-party market studies, and work backwards from where the business needs to be. Then formulate a hypothesis and test it. For example:
Where the business needs to be: In order to reach 30% growth, we need $1M in new bookings next year. At an ACV of $50k this would mean 20 new deals.
Constants: there’s reasonable confidence that we know where to invest to generate more leads.
Benchmark: For businesses similar to ours, the MQL to closed won conversion rate is ~10%.
What would need to be true:
Perhaps a new channel is generating more leads than it has previously, and those leads aren’t being as well qualified. Put a BDR on it with specific focus and see if that yields a difference.
Perhaps there’s a competitor starting to encroach on your ICP more than before. Test some new battle cards with the sales team.
Perhaps opportunities stall after demos. Redesign and test a more consultative discovery, demo and business case process.
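Working backwards from the example’s own numbers makes the lead target explicit. The 12% improved-conversion scenario is my assumption:

```python
import math

# Work backwards from the target in the example above
# (the 12% improved-conversion scenario is an assumption).
target_new_arr = 1_000_000
acv = 50_000
deals_needed = target_new_arr // acv   # 20 deals

benchmark_conversion = 0.10
mqls_needed = math.ceil(deals_needed / benchmark_conversion)  # 200 MQLs

# If an experiment lifts conversion to 12%, the lead target drops:
mqls_at_12pct = math.ceil(deals_needed / 0.12)                # 167 MQLs
```

Each hypothesis above is really a bet on moving one of these inputs, and this arithmetic tells you how big the move has to be before it matters.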
Motivating teams
I’ll end on this. Aside from making the “right” decisions, one of the most meaningful impacts is the ability to motivate and set your team up to hit home runs, over and over again. In Daniel Coyle’s book The Culture Code, he describes high-purpose environments as ones that provide clear connections between the present moment and a meaningful future goal. In other words, there’s something innately motivating to humans about closing a gap and embracing the hope of victory.
One of the biggest challenges I faced leading operations was having no insight into services and support profitability. We knew at a high level that we were burning more cash than we were bringing in. We also knew there was plenty of low-hanging fruit in operational efficiency improvements. But we didn’t have a definition of gross margin (present moment), we didn’t have a target (meaningful future goal), and we didn’t have a set of hypotheses on levers to test. Most importantly, we couldn’t describe the problem to the company and charter the team on specifically where they could help solve it. The minute we had measurable gross margin and customer profitability, it changed the game - we started seeing month-over-month improvement from -30% to well above breakeven. We knew what we needed to do on testing pricing with customers, we had educated guesses on where productivity needed to improve, and we had a hypothesis to test selling smaller engagements to better level-load demand. Was it perfectly data-driven and predictive? No way, but it allowed us to define the current state and what good looked like, and to motivate the team toward that goal.
I’ve seen the same effect with redefining the measurement of ARR, understanding CAC, lead scoring, pipeline visibility, and product usage. Invest the time and resources to get the minimum business intelligence you need to see the metrics that matter for your business, build a culture of experimentation, and start testing. It’ll pay off tenfold in business outcomes and take a lot of stress out of motivating the team, as well as answering to your board.
Are you seeing over-reliance on imperfect data to drive decisions? What successes or learnings have you had in leveraging experimentation in your business? Comment below!