I had the pleasure of chatting with our exceptional Software Engineers, Pierre Estephan and Alex Coleman. Our discussion revolved around metrics and analytics within the context of experimentation.
We delved into real-world examples showcasing how companies like Facebook and some of our valued Statsig customers harnessed user insights to propel product-led growth. In light of our recent launch of Metrics Explorer, we wanted to discuss how having an integrated platform for experiments and analytics can empower organizations to unlock insights that drive growth. You can watch the recording of the event here:
It's challenging to directly impact high-level KPIs like retention or engagement. Therefore, it's crucial to understand the key drivers that lead to these goals. In all the examples we covered, including the famous Slack example where "messages sent" was the key driver of retention, companies identified what mattered most to their users and worked toward that, ultimately driving growth.
We discussed the concept of novelty effects, using notification fatigue as an example to emphasize the importance of going beyond instant gratification. It's not just about improving DAU in the short term; it's about observing the long-term impact with metrics like 28-day retention.
Correlation doesn't imply causation, so it's not as simple as blindly pursuing a metric that appears to be correlated with retention. Instead, it's about understanding how users realize value. We emphasized the importance of understanding the qualitative aspects behind a metric: Facebook's famous "7 friends in 10 days," for example, wouldn't have worked if friends were added automatically for new users; it required relevant friend suggestions.
Leading PLG companies avoid the biggest operational challenges of running experiments at scale (such as too many tools in the stack, siloed data, and disagreements between teams over feature results) by maintaining a single platform for consistency. Everyone across the company, including non-technical users, has visibility into metrics. This approach keeps engineering and product teams agile, free from cumbersome processes, and able to build, measure, and learn rapidly.
We discussed examples where companies were able to achieve double-digit growth in core metrics by identifying drop-off points in the customer journey, such as the login or checkout funnel. They used these insights to capture low-hanging fruit by testing features to close these gaps. For more stories about our own customers who achieved remarkable growth, you can visit our customer stories page.
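To make the idea concrete, here's a minimal sketch of what that kind of funnel drop-off analysis can look like. It counts distinct users at each step of a hypothetical login-to-checkout funnel and reports step-to-step conversion. The step names, the event data, and the simplified logic (it ignores event ordering) are assumptions for illustration, not how Statsig computes funnels.

```python
# Illustrative sketch only: a toy funnel drop-off analysis over hypothetical
# event data. Step names and events are made up for this example.
from collections import defaultdict

FUNNEL_STEPS = ["visit_login", "submit_credentials", "view_cart", "checkout"]

# Hypothetical events as (user_id, event_name) pairs
events = [
    ("u1", "visit_login"), ("u1", "submit_credentials"), ("u1", "view_cart"),
    ("u2", "visit_login"), ("u2", "submit_credentials"),
    ("u3", "visit_login"),
]

# Count distinct users who reached each step
# (simplified: ignores whether steps happened in order)
users_at_step = defaultdict(set)
for user_id, event_name in events:
    if event_name in FUNNEL_STEPS:
        users_at_step[event_name].add(user_id)

# Report step-to-step conversion to surface the biggest drop-off
prev_count = None
for step in FUNNEL_STEPS:
    count = len(users_at_step[step])
    if prev_count:
        print(f"{step}: {count} users ({count / prev_count:.0%} of previous step)")
    else:
        print(f"{step}: {count} users")
    prev_count = count
```

The step with the steepest drop in conversion is the natural place to start testing fixes.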