Product Updates

We keep this changelog of product updates to catch our users up on anything they may have missed. To share a specific update, click the hash symbol (#) and then use that URL.

Analyze Product Metrics by Feature Gate Rule in Metrics Explorer

In January, we announced the ability to perform segment analysis based on Experiment Groups. Today, we're expanding that functionality to include Feature Gates as well. Try out this feature today by selecting a metric of interest, choosing a group by, and selecting "Experiments and Gates."

Group-by Feature Gate: Segmentation analysis is one of the most powerful tools product teams have when making targeted improvements to a product. Now, with the ability to group by Feature Gate, you can get a general sense of how a metric is performing for different Feature Gate rules, view the long-term effect of a feature, or monitor and debug the product performance of a feature before rolling it out broadly.

View a Sample of Events that Contribute to a Metric for a Given Feature Gate/Experiment in Metrics Explorer: When performing an analysis on an Experiment or Feature Gate, you can now switch from a Line chart to the "Samples" view to see a sample of raw events. When grouped by an Experiment or Feature Gate, the sample is separated by the Feature Gate rule or Experiment Group the user was in. This is a great way of checking your experiment or feature rollout setup, or of gaining a better sense of why specific groups are behaving the way they are.

Group By Feature Gate


🚫 Read Only Metric Definitions in Console

Sync metrics from your Semantic Layer to Statsig as read-only. Users can view but not edit these metric definitions, ensuring version control and change management. This works well in tandem with Verified Metrics. (Learn more)

Read Only Metric

This feature is available on both Statsig versions: Cloud and Warehouse Native.


Slicing Experiment Results by User Dimensions from your Warehouse

Entity Properties are a Statsig Warehouse Native feature that lets you slice experiment results by User Dimensions that come from your warehouse (e.g. a user's country or subscription status). These dimensions can be time-sensitive, for cases where the experiment itself changes them. Learn More.

User Dimensions


User Journeys Beta

We're thrilled to announce the beta release of User Journey charts in Metrics Explorer! These charts are designed to help you visualize and understand the most common paths users take through your product, starting from a specific point.

While it's common to envision a "golden path" through your product, users often take various routes. User Journeys provide insights into the actual paths taken, allowing you to see how users navigate through your product and identify areas where they drop off and may need improvement.

We've rolled out User Journeys in beta to most customers. We're eager to hear your feedback and refine this feature to make it an essential tool for optimizing user experience and streamlining product navigation. Explore User Journeys today and share your thoughts with us!

User Journeys Beta


👤 Anonymous -> User ID resolution

A common problem in experimentation is trying to connect different user identifiers before or after some event boundary, most frequently signups. Statsig Warehouse Native offers an easy solution for connecting identifiers across this boundary in a centralized and reproducible way.

Learn more
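The core idea behind this kind of ID resolution can be sketched in a few lines: keep a mapping from anonymous IDs to the user IDs they resolve to at signup, then rewrite historical events through it. This is a minimal illustration of the concept, not Statsig's implementation; the field names are hypothetical.

```python
def resolve_ids(events, id_map):
    """Rewrite anonymous IDs to canonical user IDs where a mapping exists.

    events: list of dicts with an 'id' field (anonymous or user ID).
    id_map: {anonymous_id: user_id}, e.g. captured at signup.
    Returns new event dicts; unmapped IDs pass through unchanged.
    """
    resolved = []
    for event in events:
        new_event = dict(event)  # copy so the input list is untouched
        new_event["id"] = id_map.get(event["id"], event["id"])
        resolved.append(new_event)
    return resolved
```

With this mapping applied centrally, pre-signup events attribute to the same user as post-signup events, so exposure and metric joins line up on one identifier.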


🤖 Statbot (in Console)

Statbot, our AI chatbot with knowledge from all our docs, is now accessible directly from Console. Previously used only in our Slack community, Statbot is now integrated into Console, allowing you to ask questions without switching platforms. You can access it from the "?" icon on the top-right corner.

Statbot in console


🕒 Scheduled Reloads

You can now configure default reload schedules for Experiment Results and Metrics and apply them to existing entities. You can also still configure schedules on each entity individually.

Reload Config

This feature is relevant only to Statsig Warehouse Native.


✅ Verified Metrics

Enterprises often have a set of curated, centrally managed metrics in addition to team-specific metrics. You can now mark the curated metrics as "verified" so experimenters can tell them apart.

Verified Metrics


Advanced Product Analytics with Event-Based Cohorts in Metrics Explorer

You can now perform detailed analysis on almost arbitrarily specific user segments with our new Event-Based Cohorts feature in Metrics Explorer. Event-based cohorts allow you to group users who performed certain events and share specific properties. You can specify the minimum, maximum, or exact number of times users in the cohort performed the given event, and specify the date range within which they performed it. You can also add multiple property filters to the cohort. This is useful in many scenarios:

  • Create multiple cohorts of interesting user segments and compare their product usage. You can add multiple cohorts to your group-by and use them as a way to compare different segments of users. For example, you can use the Distributions chart to find the usage that represents the 90th percentile for some event/feature of interest, and then create a “power user” cohort in a Drilldown chart by setting the event frequency to that 90th percentile. You can then create an “all users” cohort and compare the two.

  • Filtering by a Cohort. Define an event-based cohort and use it as a way to filter your analysis. For example, dig into low-engagement users by filtering to a cohort of users who used a feature at most once in the last month.

Get started with this new feature by going to Metrics Explorer (click on the Metrics tab in the left navigation menu), clicking the “+” button in the Group-By section, and selecting “Compare Cohorts” to begin defining your cohort.
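Conceptually, an event-based cohort is just a filter over per-user event counts within a date range. A minimal sketch of that idea (hypothetical field names, not Statsig's API):

```python
from collections import Counter
from datetime import date

def build_cohort(events, event_name, min_count=1, max_count=None,
                 start=None, end=None):
    """Return the set of user IDs whose count of `event_name`
    within [start, end] falls between min_count and max_count."""
    counts = Counter()
    for e in events:
        if e["event"] != event_name:
            continue
        if start is not None and e["date"] < start:
            continue
        if end is not None and e["date"] > end:
            continue
        counts[e["user_id"]] += 1
    return {
        user for user, n in counts.items()
        if n >= min_count and (max_count is None or n <= max_count)
    }
```

A "power user" cohort would use a high `min_count`, while the low-engagement filter described above would use `min_count=1, max_count=1` over the last month.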

Event Based Cohorts


Analyze Product Metrics by Experiment Group in Metrics Explorer

One of the most valuable aspects of any analytics product is illuminating how your product is performing for different groups. This is useful for general product understanding (is some key product metric over-performing for one group of users vs another?), debugging (is some key performance metric spiking for a specific group?), and detailed segment analysis (what’s going on with a specific product feature for macOS 14.1.0 users in Seattle?). Doing these types of analyses for users in different experiment groups hasn’t really been possible until now.

In our product analytics surface, Metrics Explorer, you can now select any metric and split it out by experiment group. This unlocks many powerful scenarios, such as getting a general sense of how a metric is performing for different groups in an experiment, viewing the long-term effect of an experiment on different groups, or monitoring and debugging the performance of different experiment variants.

Try out this feature by navigating to Metrics Explorer: click on the “Metrics” tab in the navigation bar on the left, select the metric you are interested in, add a “Group-By”, and select “Experiment Group”. Then choose the experiment of interest and see how the metric performance varies between groups. You can do all the analysis you expect from Metrics Explorer, like adding property filters, changing views (stacked lines, bar charts, etc.), or scoping to a specific event-based cohort.
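Under the hood, this kind of split is a join of metric events against exposure assignments followed by a per-group aggregation. A toy sketch of the idea (illustrative only; the names and shapes are assumptions, not Statsig's data model):

```python
from collections import defaultdict

def metric_by_group(events, assignments, value_key="value"):
    """Aggregate a metric per experiment group.

    events: list of {user_id, value} metric events.
    assignments: {user_id: group_name} exposure mapping.
    Returns {group_name: total}; users without an assignment are skipped.
    """
    totals = defaultdict(float)
    for e in events:
        group = assignments.get(e["user_id"])
        if group is None:
            continue  # user never exposed to the experiment
        totals[group] += e[value_key]
    return dict(totals)
```

The same shape applies to grouping by Feature Gate rule: the assignment mapping just comes from gate evaluations instead of experiment exposures.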

Group-By Experiment Group
Metric broken out by experiment groups
