Great growth leaders go beyond a superficial strategy of “measuring and testing.” They need to be competent in at least two out of three skills:
A. Defining sound business models that capture the core levers of how your business grows
B. Executing rigorous and thoughtful testing and experimentation programs
C. Building sustained value that solves real problems
Let’s chalk out the frameworks to build and apply these skills. We’ll also include examples and tips to see them in action.
To build a growth engine, the most important skill is recognizing the levers that drive the business. For example, we can break down Amazon’s revenue drivers as follows.
Revenue = customers x purchases x frequency
Breaking this down further,
customers = visitors x conversion to purchase
purchases = products x quantity x price
products = categories x items per category
frequency = f(repeat purchase behavior)
To grow, Amazon must do one or more of the following:
Increase frequency of purchases (repeat usage)
Increase items in each category (category coverage)
Increase number of categories (category expansion)
Increase the number of products per purchase (cart value)
Increase conversion to purchase (conversion rate)
Increase visitors (top of funnel)
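To make this concrete, here is a minimal sketch of the decomposition above as code, with entirely made-up numbers. Because the model is multiplicative, a 10% lift in any single lever produces the same 10% lift in revenue; the real question is which lever is cheapest for you to move.

```python
def revenue(visitors, conversion, categories, items_per_category,
            quantity, price, frequency):
    """Toy version of the decomposition above (all numbers are made up)."""
    customers = visitors * conversion
    products = categories * items_per_category
    purchase_value = products * quantity * price
    return customers * purchase_value * frequency

baseline = dict(visitors=1_000_000, conversion=0.05, categories=10,
                items_per_category=2, quantity=1.5, price=20, frequency=4)

base_revenue = revenue(**baseline)

# A 10% lift in any single lever lifts revenue by the same 10%.
for lever in baseline:
    lifted = {**baseline, lever: baseline[lever] * 1.1}
    print(f"{lever:20s} +10% -> revenue x{revenue(**lifted) / base_revenue:.2f}")
```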
It turns out that the top of the funnel is the least important lever for the business. If you deliver on the promise of solving real problems for your customers early and often, the funnel fills itself; you just have to remove friction where it exists.
Effective growth leaders know where to experiment, which part of the funnel to focus on, and which tests are worth running versus ones that aren't interesting. Here are a couple of tips to get started on a framework.
While large companies like small wins, small companies enjoy lots of low-hanging fruit. Amazon and Facebook would love to grow 1% every week because, at their scale, these small wins add up quickly. When you don't have as many users, you should aim for more dramatic changes. For instance, to validate a 20% improvement over control, your test can complete within 7 days with just 1,000 users, assuming a 5% baseline conversion rate.
At the top of the funnel you have a large sample with a low base rate; at the bottom of the funnel you have a small sample with a high base rate.
Raising the base conversion rate at the top of the funnel by 10% takes it from, say, 5% to 5.5%. This experiment needs a large sample size and has to run longer to reach statistical significance.
Raising the base conversion rate by 10% at the bottom of the funnel takes it from, say, 20% to 22%. This experiment needs a much smaller sample size and can complete within a couple of days.
With basic funnel math, it's easy to see why you should run experiments deeper in the funnel. Especially with a product that requires a significant purchase decision toward the end, you want to run tests where customers enter their credit card details rather than on the home page. A committed customer who is willing to pay is also more likely to respond to a change in design or experience.
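As a back-of-the-envelope check, here is a rough sample size calculation using the standard two-proportion z-test approximation (standard library only; exact numbers depend on your choice of significance level and power). It shows why the bottom-of-funnel test above finishes so much faster:

```python
from statistics import NormalDist

def sample_size_per_variant(base_rate, relative_lift, alpha=0.05, power=0.8):
    """Approximate users needed per variant to detect a relative lift
    in a conversion rate with a two-sided test on two proportions."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Top of funnel: 10% relative lift on a 5% base rate (5% -> 5.5%)
print(sample_size_per_variant(0.05, 0.10))  # roughly tens of thousands per variant
# Bottom of funnel: 10% relative lift on a 20% base rate (20% -> 22%)
print(sample_size_per_variant(0.20, 0.10))  # roughly a few thousand per variant
```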
Customer acquisition is a function of channel, targeting, content, and conversion.
Any new paid acquisition channel should be incremental to your current acquisition channels: it should acquire customers you would not have gotten otherwise. Measure the marginal ROI, not the absolute ROI.
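Here is a toy illustration of the difference, with hypothetical numbers and a holdout group standing in for "customers you would have gotten anyway":

```python
# Hypothetical numbers: a new paid channel is credited with $50k of revenue,
# but a holdout shows $35k of it would have arrived organically anyway.
attributed_revenue = 50_000
holdout_baseline   = 35_000   # revenue you'd have gotten without the channel
spend              = 12_000

absolute_roi = (attributed_revenue - spend) / spend
marginal_roi = ((attributed_revenue - holdout_baseline) - spend) / spend

print(f"absolute ROI: {absolute_roi:+.0%}")   # looks great: +317%
print(f"marginal ROI: {marginal_roi:+.0%}")   # the real story: +25%
```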
Target customers based on what they did instead of who they are. Past behavior predicts future behavior far better than any stereotypical cohorts or personas that we create in our minds. Find the signals of past behaviors that you care about, and use the triggers that invite behaviors you want to optimize for.
Content should have a call to action (CTA). Provide the context and personalize it. CTAs are a great candidate for testing variants and simply letting the best one win (see Autotune).
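Autotune is Statsig's take on this. As a generic illustration of the underlying idea, here is a minimal Thompson-sampling sketch (the CTA variants and counts are hypothetical) that shifts traffic toward the best-performing variant while it is still learning:

```python
import random

# Hypothetical CTA variants with observed clicks / impressions so far.
variants = {
    "Start free trial": {"clicks": 120, "impressions": 2400},
    "See it in action": {"clicks": 150, "impressions": 2500},
    "Get a demo":       {"clicks": 90,  "impressions": 2300},
}

def pick_cta():
    """Thompson sampling: sample a plausible click-through rate for each
    variant from a Beta posterior and serve the highest draw. Winners get
    shown more often, but losers still get occasional traffic."""
    draws = {
        name: random.betavariate(1 + s["clicks"], 1 + s["impressions"] - s["clicks"])
        for name, s in variants.items()
    }
    return max(draws, key=draws.get)

print(pick_cta())
```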
To improve conversion, it’s critical to define what purpose the conversion serves. If you want to grow registered users, take them directly to registration (e.g. the sign up page). If you want to grow active users, take them to the page that engages them instantly (e.g. results for the search term on your e-commerce site). If you want to grow qualified leads, take them to a page that lets them become qualified (e.g. run a prototype).
If there’s one thing you take away today, it’s that you cannot organically grow something that sucks. This is why your growth lead needs to be a product person. Most meaningful changes that you experiment with come from a deep understanding of your customers.
Retention is a key metric for testing whether the product solves a real problem for customers. If you need one number to validate your product-market fit, it's retention. There are generally three kinds of retention curves: flat, declining, and increasing.
Retention can increase, for example, (i) with network effects, such as on social platforms, (ii) with new products, categories, or geographies, (iii) when the customer increases their frequency of usage, or (iv) when the customer expands consumption as more individuals in the family or team switch to the product. Figure out what the retention curve for your business should look like.
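If you want to see your curve, a cohort retention table is a few lines of pandas. This sketch uses a tiny, made-up event log and buckets activity by weeks since signup:

```python
import pandas as pd

# Hypothetical event log: one row per active day per user.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3],
    "date": pd.to_datetime(
        ["2024-01-01", "2024-01-08", "2024-01-15",
         "2024-01-01", "2024-01-08", "2024-01-01"]),
})

first_seen = events.groupby("user_id")["date"].min().rename("cohort_date")
events = events.join(first_seen, on="user_id")
events["weeks_since_signup"] = (events["date"] - events["cohort_date"]).dt.days // 7

cohort_size = events["user_id"].nunique()
retention = events.groupby("weeks_since_signup")["user_id"].nunique() / cohort_size
print(retention)  # week 0: 100%, week 1: 67%, week 2: 33% -> a declining curve
```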
More tactically, don't let your retention slide to the point where churn overwhelms new customer acquisition. Growth is a function of new users acquired, churned users, and resurrected users. Just tracking churned users will ensure that you're not losing money faster than you can walk to the bank. Understand why customers are churning and which channels and information can help resurrect them.
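A simple growth-accounting check (hypothetical monthly numbers) makes this explicit; the "quick ratio" of gained to lost users tells you whether churn is eating your acquisition:

```python
# Hypothetical monthly growth accounting.
new_users         = 5_000
resurrected_users = 800
churned_users     = 4_500

net_growth  = new_users + resurrected_users - churned_users
quick_ratio = (new_users + resurrected_users) / churned_users

print(net_growth)              # 1,300 net new users
print(round(quick_ratio, 2))   # 1.29 -- barely above 1, churn is eating most of acquisition
```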
Magic moments are when your customers "get" it: they understand what your product is about. It's the moment they discover that it does something they've never seen before. On Facebook, it's when you see a friend. On Uber, it's when you step out of the car at your destination with zero hassle.
You can do this the non-scalable way: ask your early customers why they use the product. You might even have an instinctive sense of what gets someone’s eyes glued to the product.
You can also do this the scalable way: find correlations. The number of items a customer purchases on Amazon is likely correlated with how often they return, or with how likely they are to become a Prime member.
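A rough sketch of that scalable approach, with made-up data: correlate an early behavior with later retention, then check retention above and below a candidate threshold.

```python
import pandas as pd

# Hypothetical data: items purchased in a customer's first month vs.
# whether they were still active six months later.
df = pd.DataFrame({
    "first_month_items": [1, 2, 2, 3, 5, 6, 8, 9, 12, 15],
    "retained_6mo":      [0, 0, 1, 0, 1, 1, 1, 1, 1, 1],
})

# A simple correlation surfaces candidate "magic moment" behaviors.
print(df["first_month_items"].corr(df["retained_6mo"]))

# Then compare retention above and below a candidate threshold.
print(df.groupby(df["first_month_items"] >= 3)["retained_6mo"].mean())
```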
The company may target a strategic objective centered around an output or a north star metric. At the highest level, it could be revenue. More specifically, it could be number of hosts for Airbnb, number of posts for Twitter, number of orders for DoorDash, viewing time for YouTube, consumption of compute credits for Snowflake, and so on.
Such north star metrics are challenging to move. For example, moving year-over-year revenue growth from 25% to 26% is a big deal, but often occurs +0.1% at a time. The challenge with such north star objectives is that while the inputs are what we can control, the objectives (outputs) are what really matter. The task at every level of the organization is to decode the relationship between inputs and company-wide objectives. Measure teams against goals that track inputs and let the objectives take care of themselves.
This is the final piece. Eventually you will run out of intuition, and you will need data to identify blockers and experiment with new ideas. With the right metrics, data gives you empathy for your users and lets you put yourself in their shoes.
Data also helps you make decisions faster. If everyone knows what metric to optimize for, they don’t have to kick the ball to their data science team or wait for two weeks to get on the leader’s calendar to make the decision to ship.
Data gives you a glimpse of the future. When an A/B test can show you how a feature will perform, use it! See how Statsig can serve up an A/B test for free along with a feature gate.
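For reference, here is roughly what that looks like with Statsig's Python server SDK. Treat it as a sketch: the gate and experiment names are hypothetical, and the exact API may differ depending on the SDK version you're on.

```python
from statsig import statsig, StatsigUser

statsig.initialize("secret-YOUR_SERVER_KEY")  # server secret from the Statsig console

user = StatsigUser(user_id="user-123")

# Feature gate: control who sees the new experience while you measure it.
if statsig.check_gate(user, "new_checkout_flow"):       # hypothetical gate name
    print("serve the new checkout flow")

# Experiment: fetch the assigned variant's parameters; exposures are logged for analysis.
experiment = statsig.get_experiment(user, "checkout_cta_test")  # hypothetical experiment name
cta_text = experiment.get("cta_text", "Buy now")
print(cta_text)

statsig.shutdown()  # flush events before the process exits
```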
Got more tips? Join us on the Statsig Slack channel!