We move fast, to help you move fast

9/20/2023

New Experiment Scorecard (Pulse) Views

We’re starting to roll out a new way to visualize your Pulse metric lifts inline within your Scorecard.

You can now visualize your Pulse results in “Cumulative” view (the default), “Daily” view, or “Days Since Exposure” view, and switch between them with a new control inline in your Pulse view settings.


Check it out and let us know what you think, or read more about Pulse in our docs.


9/15/2023

Metric Archival

To aid in keeping your Metrics Catalog streamlined and current, we are launching automated metric archival. Any metric that has been inactive for the last 60 days will be automatically scheduled for archival with the option for metric owners to extend or mark a metric as permanent.
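The archival rule itself boils down to a simple date check. Here is a hypothetical sketch of the eligibility logic (the 60-day window comes from the policy above; all names and the `permanent` escape hatch are illustrative, not Statsig's implementation):

```python
from datetime import date, timedelta

# The inactivity window described in the policy above.
INACTIVITY_WINDOW = timedelta(days=60)

def due_for_archival(last_active: date, today: date, permanent: bool = False) -> bool:
    """A metric is scheduled for archival after 60 days of inactivity,
    unless its owner has marked it permanent."""
    return not permanent and (today - last_active) >= INACTIVITY_WINDOW
```

For example, a metric last active on 6/1 would be flagged by 9/15, while one active two weeks ago would not.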


8/29/2023

📝 Smart Scorecard Limits

Experimentation best practice dictates that an experiment should have a highly targeted set of metrics that you’re actively trying to move, along with a broader swath of metrics you’re monitoring to ensure you don’t regress.

Today, we’re adapting our Scorecard to reflect this best practice and putting in place some smart limits: a maximum of 10 Primary Metrics and 40 Secondary Metrics. Soon, Enterprise customers will be able to specify an even tighter limit on Scorecard metrics via their Org Settings if desired.

One bonus implication of these limits is that we’re auto-expanding tagged metric groups, making it even easier to see (and manage) all the individual metrics being added to your Scorecard when you add a metric tag.

Let us know if you have any feedback or questions on this change!


8/29/2023

🚨Experiment Policy

We've just started rolling out Experiment Policy controls to customers with Enterprise contracts. Configure sensible defaults for experiment settings, such as Bayesian vs. Frequentist analysis and confidence intervals, or optionally even enforce them. Find it under Organization Settings ➜ Settings ➜ Experiment Settings.


8/21/2023

👩‍💻 GitHub Code References

Quickly see where a feature gate or experiment is referenced in your source code to get context for how it's being used. Simply enable GitHub Code References to see this light up!


8/17/2023

New & Improved Experiment Setup Checklist

Last week, we launched a refreshed version of the Experiment Setup checklist to make it easy for anyone on your team to configure experiments quickly and correctly in Statsig. In the new checklist, you’ll see:

  • Two top-line guides, “Set up your Experiment” & “Test your Experiment” - Skip straight to testing if you’re a pro or get more help with setup if you are newer to running experiments on Statsig.

  • Ability to test experiment setup in a specific environment - Turn your experiment on in lower environments to verify it’s working as expected before going live in Production.

  • Same Overrides controls - Leverage ID or Gate Overrides to test your experiment setup for a specific user or segment of users in any configured environment.


We’d love to hear feedback as you and your teams get up and running with the new checklist!

8/4/2023

Better Experiment Defaults

You've told us you want more trustworthy experiments, not just more experiments. To that end, we are making a Hypothesis and Primary Metrics required on all experiments. Enterprise customers will soon be able to define experiment settings as policy.

7/31/2023

Coming soon to Analytics: Custom retention reports

Retention Analysis helps you drive product adoption by showing you how often users return to your product after taking a specific action. People using Metrics Explorer this week will be opted into the beta early!
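As a rough illustration of the underlying computation (the data and function here are hypothetical, not Statsig's implementation), day-N retention can be derived from raw event logs like this:

```python
from datetime import date

# Toy event log: (user_id, day, action). u3 signs up but never returns.
events = [
    ("u1", date(2023, 7, 1), "signup"), ("u1", date(2023, 7, 8), "open"),
    ("u2", date(2023, 7, 1), "signup"), ("u2", date(2023, 7, 2), "open"),
    ("u3", date(2023, 7, 1), "signup"),
]

def day_n_retention(events, start_action, n):
    """Share of users who return exactly n days after first doing start_action."""
    # Find each user's first occurrence of the starting action.
    first = {}
    for user, day, action in events:
        if action == start_action and (user not in first or day < first[user]):
            first[user] = day
    # Count users with any activity exactly n days later.
    returned = {
        user for user, day, action in events
        if user in first and (day - first[user]).days == n
    }
    return len(returned) / len(first) if first else 0.0

seven_day = day_n_retention(events, "signup", 7)  # only u1 returns on day 7
```

Real retention reports typically bucket users into cohorts by start date and compute this for every N, but the core join is the same.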


7/21/2023

Experiment on the edge with Fastly

We're excited to extend our ability to serve experiments at the edge with our new Fastly integration. Developers can now render web pages without added latency or flicker by putting flag evaluation and experiment assignment as close to their users as possible. We're taking advantage of Fastly Config Stores to light up this feature. See docs for Fastly (or Cloudflare and Vercel).


7/20/2023

Bayesian Analysis for Experiments

We now support Bayesian analysis for experiments. You can turn this on under Experiment Setup / Advanced Settings and see your results through a Bayesian lens, including statistics like Expected Loss and Chance to Beat.

This is a philosophically different framework from standard A/B testing based on frequentist analysis, and there are many nuances to using it. For more information, please see the documentation.
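Conceptually, these Bayesian statistics come from the posterior distribution of each group's conversion rate. A minimal sketch (not Statsig's implementation) that estimates Chance to Beat and Expected Loss by Monte Carlo sampling from Beta(1, 1) posteriors over binomial conversion data:

```python
import random

def bayesian_summary(conv_a, n_a, conv_b, n_b, draws=100_000, seed=0):
    """Estimate P(B beats A) and the expected loss of shipping B,
    using Beta(1, 1) priors on each group's conversion rate."""
    rng = random.Random(seed)
    wins = 0
    loss_sum = 0.0
    for _ in range(draws):
        # Posterior for a binomial rate with a Beta(1, 1) prior is
        # Beta(1 + successes, 1 + failures).
        p_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        p_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if p_b > p_a:
            wins += 1
        # Regret incurred if we ship B in a world where A is actually better.
        loss_sum += max(p_a - p_b, 0.0)
    return wins / draws, loss_sum / draws

# Hypothetical experiment: 12% vs. 15% conversion on 1,000 users each.
chance_to_beat, expected_loss = bayesian_summary(120, 1000, 150, 1000)
```

A common Bayesian decision rule is to ship the treatment once its expected loss falls below a small threshold, rather than waiting for a frequentist significance cutoff.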

