Manage your Statsig configurations in the same programs that provision your cloud infrastructure.
With the Pulumi Statsig Provider, everything ships through a single version-controlled, reviewable workflow. This unifies progressive delivery with infrastructure as code. You get safer rollouts, automated drift detection, and built-in observability across infrastructure and product logic.
Visit our docs to get started, or check us out in the Pulumi docs.
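To give a feel for the workflow, here is a minimal Pulumi program in TypeScript. The package name (`@pulumi/statsig`), the `Gate` resource, and its fields are assumptions for illustration only; check the provider docs for the actual resource names and schema.

```typescript
import * as statsig from "@pulumi/statsig"; // assumed package name; see the provider docs

// Assumed resource and fields, shown for illustration:
// a feature gate defined alongside the infrastructure it guards.
const newCheckoutGate = new statsig.Gate("new-checkout-flow", {
    description: "Gates the new checkout flow while the backing service rolls out",
    isEnabled: true,
});

// Exporting the gate ID lets other stacks or app config reference it.
export const gateId = newCheckoutGate.id;
```

Because the gate lives in the same program as the rest of your stack, changes to it go through the same pull request, review, and `pulumi up` cycle as your infrastructure.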
The new Table View makes it easier to compare how different groups perform across multiple metrics and time periods, all in a single table. Each metric becomes a column, and each group (based on your group-by selection) becomes a row. No need to flip between charts or tabs.
What You Can Do Now:
Compare multiple metrics side by side across user or event groups
View how the same group performs across different time periods
Add group-bys to see per-group metric values in one view
How It Works:
Select metrics to display as columns
Add a group-by to generate one row per group value
Toggle time comparisons to populate the table with values from both current and past periods
Impact on Your Analysis:
Quickly spot which segments are over- or under-performing across several metrics
Easily assess how group performance changes over time
Simplifies complex comparisons that previously required multiple charts
Use Data Table View when you want a clear, compact summary of group-level performance across metrics and time.
Statsig now hosts an MCP (Model Context Protocol) Server. This acts as a bridge between AI applications (clients) and Statsig: it is essentially a smart adapter that translates AI requests into commands that Statsig can understand.
For example, you can connect it in Cursor and ask in plain English to:
Make changes to your app to put features behind gates in Statsig
Instrument your app to log user interaction events, which can then be analyzed in Statsig
Perform operations like removing unused gates from your codebase, with Cursor pulling context directly from your Statsig project
You can also connect it to Claude and ask questions based on data from Statsig:
Which experiments have been abandoned?
What are some suggestions for new growth experiments I can run?
Or read the technical docs here.
Warehouse Native customers can now view historical scorecard results for their experiments. The system captures a Scorecard Snapshot each time results are calculated through a full or incremental reload.
Open the reload history dialog box in the Results tab for your experiment.
Find the data load instance you want to view and select "View Results Snapshot."
Geotest Experiments are now available to all our Warehouse Native customers, unlocking experimentation when traditional A/B testing doesn’t work. This is common in marketing campaigns, where users often can’t be reliably split into control and treatment groups.
With Statsig’s Geotesting, you can measure marketing incrementality in the core business metrics already in your warehouse. Using best-in-industry Synthetic Control methodology, Statsig makes it easy for every team to design and run statistically rigorous tests using simple geographical controls like postal codes and DMAs.
Visit our docs to learn more and get started!
With Statsig’s Braze integration, running experiments across your multi-channel campaigns or orchestrating your user journeys just got easier!
Customers can now send exposure events from Statsig to Braze, which can then be used to assign users to Segments in Braze. This enables you to trigger custom content through feature flags or run different campaigns based on whether users are in treatment or control groups.
Learn more here to get started!
Delta Comparison condenses two overlaid series into a single line that plots the percent change between your current and comparison periods at each timestamp, letting you see differences at a glance.
In Metric Drilldown, toggle Percent Difference to turn any chart in Comparison mode into a single line showing the percent change at each timestamp.
Scan spikes or dips instantly instead of comparing two stacked series.
Export or share the delta series just like any other chart.
In Metric Drilldown, while in a time series view, choose Compare and select a comparison range.
Select the "%" option at the top of the chart to switch to Percent Difference mode.
The chart redraws as one series where delta_percent = (current - comparison) / comparison * 100 (see the sketch after these steps).
Switch back anytime to the traditional overlaid view.
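A rough sketch of the arithmetic behind the toggle, assuming two already-aligned series of equal length; this is not Statsig's implementation, just the formula above applied point by point:

```typescript
// Percent change between a current series and a comparison series,
// computed at each timestamp: (current - comparison) / comparison * 100.
function deltaPercent(current: number[], comparison: number[]): number[] {
  return current.map((value, i) => {
    const base = comparison[i];
    if (base === 0) return NaN; // undefined when the comparison value is zero
    return ((value - base) / base) * 100;
  });
}

// Example: daily signups this period vs. the comparison period.
const thisWeek = [120, 150, 90];
const lastWeek = [100, 150, 120];
console.log(deltaPercent(thisWeek, lastWeek)); // [ 20, 0, -25 ]
```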
Faster insights: One clear line highlights variance without visual clutter.
Sharpened focus: Positive vs. negative swings stand out immediately, making root-cause checks quicker.
Tighter reports: A single series is easier to share in dashboards and slide decks.
Try the Delta Comparison toggle on your next period-over-period chart and see the difference.
Count distinct values for any property across events or users, no longer limited to IDs.
Select Unique Values as the aggregation type in Metric Drilldown.
Answer questions like “How many different referrers drove traffic last week?” or “How many SKUs were added to carts today?”
Combine with filters and group-bys to surface granular uniqueness counts in one step.
Pick your metric or event.
In the aggregation dropdown, choose Unique Values.
Select the property whose distinct values you want counted (e.g., referrer, sku, country).
The chart returns the count of unique values for that property over the chosen time range and granularity.
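Conceptually, the aggregation is a distinct count over a property's values. A minimal sketch of that idea (not Statsig's implementation), assuming events carry a flat property bag:

```typescript
interface AnalyticsEvent {
  name: string;
  properties: Record<string, string | number | undefined>;
}

// Count how many distinct values a property takes across a set of events,
// e.g. "how many different referrers drove traffic?"
function uniqueValues(events: AnalyticsEvent[], property: string): number {
  const seen = new Set(
    events
      .map((e) => e.properties[property])
      .filter((v) => v !== undefined)
  );
  return seen.size;
}

const events: AnalyticsEvent[] = [
  { name: "page_view", properties: { referrer: "google" } },
  { name: "page_view", properties: { referrer: "newsletter" } },
  { name: "page_view", properties: { referrer: "google" } },
];
console.log(uniqueValues(events, "referrer")); // 2
```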
Broader coverage: Distinct-value analysis now works on any property, not just user_id, stable_id, or company_id.
Faster answers: Skip custom SQL or exports when you need unique counts on the fly.
Try the Unique Values option to see diversity in your data at a glance.
By setting up a Decision Framework for an experiment template, teams can standardize how experiment results are interpreted and launch decisions are made.
Decision Frameworks can be added to any experiment template. Based on different scenarios of Primary and Guardrail metric outcomes, you can configure recommended actions: Roll Out Winning Group, Discuss, or Do Not Roll Out.
Once configured, any experiment created from the corresponding template will display a recommendation message in the Make Decision button when the experiment concludes. Reviewers can be required when a shipping decision doesn’t align with the recommendations configured in the Decision Framework.
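As an illustration of the decision matrix (this is not Statsig's configuration format or API, just one hypothetical mapping from metric outcomes to a recommended action):

```typescript
type MetricOutcome = "positive" | "neutral" | "negative";
type Recommendation = "Roll Out Winning Group" | "Discuss" | "Do Not Roll Out";

// Hypothetical framework for one template: ship only when the primary metric
// wins and guardrails hold; regressions block the launch; anything ambiguous
// goes to a discussion.
function recommend(primary: MetricOutcome, guardrails: MetricOutcome): Recommendation {
  if (primary === "positive" && guardrails !== "negative") {
    return "Roll Out Winning Group";
  }
  if (primary === "negative" || guardrails === "negative") {
    return "Do Not Roll Out";
  }
  return "Discuss";
}

console.log(recommend("positive", "neutral")); // "Roll Out Winning Group"
console.log(recommend("neutral", "negative")); // "Do Not Roll Out"
```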
Learn how to set up your Decision Framework here.
We're streamlining the Experiment Setup page layout! It now includes a TEST button containing helpful resources to validate your experiment setup.
The Advanced Settings section has also been reorganized into Analysis Configuration and Experiment Population categories, with enhanced documentation links for users wanting to learn more about each feature.