We kicked off the new year with our first virtual meetup, hosted by John Wilke with guests Craig Sexauer and Pierre Estephan. This new format offers insight into how the Statsig team thinks about building experimentation into project plans, for anyone who considers themselves a builder. If you weren’t able to join us live, catch the on-demand conversation below!
Product teams are expected to hit aggressive annual KPI targets, often with limited resources. Tune in for an inside look at how scaling and building with experimentation can help you hit demanding product targets.
Statsig Data Scientist Craig Sexauer and Engineering Manager Pierre Estephan covered:
How to pick the “right” metrics
Why measuring all product changes matters
Why smaller, scrappy experiments are crucial in H2 to prioritize product features
What insights long-term Holdouts offer that a single experiment does not
Enjoy this on-demand viewing, and we hope you can join us live in the future!

Morgan Scalzo
Community and Event Manager, Statsig
AI technology has been here for years, but the new wave of AI products and features is game-changing. We covered this, and other topics, at the Seattle AI Meetup.
Building a culture of experimentation benefits greatly from things like reviewing experiments regularly and discussing the results.
Thanks to our support team, our customers can feel like Statsig is a part of their org and not just a software vendor. We want our customers to know that we're here for them.
Migrating experimentation platforms is a chance to clean up tech debt, streamline workflows, define ownership, democratize testing, educate teams, and more.
Calculating the right sample size means balancing the level of precision desired, the anticipated effect size, the statistical power of the experiment, and more.
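To make that balancing act concrete, here is a minimal sketch of the standard two-sample z-approximation for per-group sample size. The function name and parameters are illustrative, not from the original post; it uses only the Python standard library and assumes a two-sided test comparing two group means.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size: float, sigma: float,
                          alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-group n for a two-sided, two-sample z-test.

    effect_size: smallest difference in means you want to detect
    sigma:       assumed standard deviation of the metric
    alpha:       significance level (false-positive rate)
    power:       probability of detecting a true effect of that size
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    n = 2 * ((z_alpha + z_beta) ** 2) * sigma**2 / effect_size**2
    return ceil(n)

# Detecting a 0.5-sigma lift at alpha=0.05 with 80% power:
print(sample_size_per_group(0.5, 1.0))  # 63 per group
```

Note how the trade-offs in the post show up directly in the formula: halving the detectable effect size quadruples the required sample, and raising power or lowering alpha both increase it.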
The term 'recency bias' has been all over the statistics and data analysis world, stealthily skewing our interpretation of patterns and trends.