What is the semantic layer? How is it managed? How do data and engineering professionals get a handle on constantly evolving data at organizations like Facebook, Spotify, and Airbnb?
How do you know which features to double down on or to remove? Is providing higher-quality video to users always the best thing to do? How does changing default settings impact user consumption? What the heck is an “event?”
Cloud data warehouse leader Firebolt kicked off season 2 of their podcast series The Data Engineering Show with a bang by tackling these questions and more with Statsig founder and CEO Vijaye Raji.
Hosted by Eldad Farkash and Benjamin Wagner (aka The Firebolt Data Bros), the latest episode, titled “Making Observability a Key Business Driver,” sets the stage with Vijaye’s technical background before diving deep into his personal takeaways since starting Statsig in February 2021 and how the product observability platform Statsig is building drives positive business outcomes.
(09:00) - If you’re just starting to learn more about Statsig and are new to the world of A/B testing and experimentation, word to the wise: around the 9-minute mark, Vijaye gives a very straightforward explanation of 1) why having a hypothesis for your business before building a product matters and 2) why building and shipping products without measuring their success is a losing approach.
“One of the most important things is, when you’re building a product you have a hypothesis and you think that something you’re building is going to be good for your users, your customers or your business. And that hypothesis is baked into every single feature that you’re building. And the idea that we build those things out and then ship them to people without really measuring whether that particular belief is true, is no longer valid. It’s no longer okay.”
(16:50) - If you’re familiar with A/B testing but the first thing you think of is changing a button color or increasing customer conversion, the conversation at the 16:50 mark presents deeper examples that go well beyond those valid but more typical tests.
(28:20) - Finally, if you think observability tools are still optional, starting at the 28:20 mark Eldad shares first-hand experience of how product observability tools turned engineers’ “day jobs” from a black box into a way to help drive the business, a huge step forward. Experimentation starts as a feature flag, but evolves to become part of the engineering culture.
I won’t spoil the ending, but with 25+ SDKs, 15-20 billion events per day, and a “gigantic Databricks job” every hour covered, the last ~15 minutes are an absolute treasure trove of information for anyone curious about experimentation and product observability, whether your background is technical or non-technical.
AI technology has been here for years, but the new wave of AI products and features is game-changing. We covered this, and other topics, at the Seattle AI Meetup.
Building a culture of experimentation benefits greatly from things like reviewing experiments regularly and discussing the results.
Thanks to our support team, our customers can feel like Statsig is a part of their org and not just a software vendor. We want our customers to know that we're here for them.
Migrating experimentation platforms is a chance to cleanse tech debt, streamline workflows, define ownership, promote democratization of testing, educate teams, and more.
Calculating the right sample size means balancing the level of precision desired, the anticipated effect size, the statistical power of the experiment, and more.
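To make that balancing act concrete, here is a minimal sketch of the standard two-proportion sample size formula, which trades off baseline rate, minimum detectable effect, significance level, and power. The function name and default values are illustrative assumptions, not taken from any particular experimentation platform.

```python
import math
from statistics import NormalDist

def sample_size_per_group(p_baseline, mde, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sided two-proportion z-test.

    p_baseline: baseline conversion rate (e.g. 0.10 for 10%)
    mde: minimum detectable effect as an absolute lift (e.g. 0.01 for +1 point)
    alpha: significance level; power: desired statistical power
    """
    p_treatment = p_baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # quantile for target power
    # Sum of Bernoulli variances in the two groups
    variance = p_baseline * (1 - p_baseline) + p_treatment * (1 - p_treatment)
    n = (z_alpha + z_beta) ** 2 * variance / mde ** 2
    return math.ceil(n)

# Detecting a 1-point lift from a 10% baseline at 80% power
# requires on the order of ~15,000 users per group:
print(sample_size_per_group(0.10, 0.01))
```

Note how the required sample grows quadratically as the minimum detectable effect shrinks: halving the effect you want to detect roughly quadruples the traffic you need.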
The term 'recency bias' has been all over the statistics and data analysis world, stealthily skewing our interpretation of patterns and trends.