Entity Properties are a Statsig Warehouse Native feature that lets you slice experiment results by user dimensions that come from your warehouse (e.g., a user's country or subscription status). This data can be time-sensitive, for cases where these properties change while an experiment is running. Learn More.
We're thrilled to announce the beta release of User Journey charts in Metrics Explorer! These charts are designed to help you visualize and understand the most common paths users take through your product, starting from a specific point.
While it's common to envision a "golden path" through your product, users often take various routes. User Journeys provide insights into the actual paths taken, allowing you to see how users navigate through your product and identify areas where they drop off and may need improvement.
We've rolled out User Journeys in beta to most customers. We're eager to hear your feedback and refine this feature to make it an essential tool for optimizing user experience and streamlining product navigation. Explore User Journeys today and share your thoughts with us!
A common problem in experimentation is connecting different user identifiers across some event boundary, most commonly signup. Statsig Warehouse Native offers an easy solution for connecting identifiers across this boundary in a centralized and reproducible way.
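As a rough illustration of the underlying idea, here is a minimal pandas sketch of resolving pre-signup anonymous IDs to post-signup user IDs before joining metrics. All table and column names (`exposures`, `id_map`, `anonymous_id`, and so on) are hypothetical, not Statsig's actual schema:

```python
import pandas as pd

# Hypothetical data: exposures logged under pre-signup anonymous IDs,
# metrics logged under post-signup user IDs, and a resolution table
# mapping the two identifier spaces.
exposures = pd.DataFrame({
    "anonymous_id": ["a1", "a2", "a3"],
    "group": ["test", "control", "test"],
})
id_map = pd.DataFrame({
    "anonymous_id": ["a1", "a2"],
    "user_id": ["u1", "u2"],
})
metrics = pd.DataFrame({
    "user_id": ["u1", "u2"],
    "purchases": [3, 1],
})

# Resolve each exposure's anonymous ID to a canonical user ID, then join
# post-signup metrics. Exposures with no signup ("a3") are kept, with
# missing user_id and metrics, so they still count toward the denominator.
resolved = exposures.merge(id_map, on="anonymous_id", how="left")
joined = resolved.merge(metrics, on="user_id", how="left")
```

Centralizing this join once, rather than re-deriving it per analysis, is what makes the results reproducible across experiments.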
Statbot, our AI chatbot with knowledge from all our docs, is now accessible directly from Console. Previously used only in our Slack community, Statbot is now integrated into Console, allowing you to ask questions without switching platforms. You can access it from the "?" icon on the top-right corner.
You can now configure default reload schedules for Experiment Results and Metrics and apply them to existing entities. You can still configure schedules individually on each entity as well.
This feature is relevant only to Statsig Warehouse Native.
Enterprises often have a set of curated, centrally managed metrics in addition to team-specific metrics. You can now mark the curated metrics as "verified" so experimenters can tell them apart.
You can now perform detailed analysis on almost arbitrarily specific user segments with our new Event-Based Cohorts feature in Metrics Explorer. Event-based cohorts allow you to group users who performed certain events and share specific properties. You can specify the minimum, maximum, or exact number of times users in the cohort performed a given event, and specify the date range within which they performed it. You can also add multiple property filters to the cohort. This is useful in many scenarios:
Create multiple cohorts of interesting user segments and compare their product usage. You can add multiple cohorts to your group-by and use it as a way to compare different segments of users. For example, you can use the Distributions chart to find the usage that represents the 90th percentile for some event or feature of interest, and then create a "power user" cohort in a Drilldown chart by setting the event frequency to that 90th percentile. You can then create an "all users" cohort and compare the two.
Filtering by a Cohort. Define an event-based cohort and use it as a way to filter your analysis. For example, dig into low-engagement users by filtering to a cohort of users who used a feature at most once in the last month.
Get started with this new feature by going to Metrics Explorer (click the Metrics tab in the left navigation menu), mousing over the Group-By section, clicking the "+" button, and selecting "Compare Cohorts" to begin defining your cohort.
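Conceptually, an event-based cohort is a filter over an event log: count each user's matching events in a window and keep users whose count falls in a range. The sketch below is a hedged illustration of that idea in pandas; the event names, columns, and the `event_cohort` helper are all made up for this example, not a Statsig API:

```python
import pandas as pd

# Illustrative event log (hypothetical schema).
events = pd.DataFrame({
    "user_id":  ["u1", "u1", "u1", "u2", "u3", "u3"],
    "event":    ["export", "export", "export", "export", "export", "login"],
    "platform": ["ios", "ios", "ios", "web", "ios", "ios"],
    "ts": pd.to_datetime([
        "2024-01-02", "2024-01-05", "2024-01-09",
        "2024-01-03", "2024-01-04", "2024-01-04",
    ]),
})

def event_cohort(df, event, min_count, max_count, start, end, **prop_filters):
    """Users who performed `event` between min_count and max_count times
    in [start, end], matching all property filters. Only users with at
    least one matching event are considered."""
    mask = (df["event"] == event) & df["ts"].between(start, end)
    for prop, value in prop_filters.items():
        mask &= df[prop] == value
    counts = df[mask].groupby("user_id").size()
    return set(counts[(counts >= min_count) & (counts <= max_count)].index)

# "Power user" cohort: iOS users who exported at least 3 times in January.
power = event_cohort(events, "export", 3, float("inf"),
                     "2024-01-01", "2024-01-31", platform="ios")

# Low-engagement cohort: users who exported exactly once in the same window.
low = event_cohort(events, "export", 1, 1, "2024-01-01", "2024-01-31")
```

Setting the frequency threshold to an observed percentile (as in the Distributions example above) is just a matter of choosing `min_count` from that chart.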
One of the most valuable aspects of any analytics product is illuminating how your product is performing for different groups. This is useful for general product understanding (is some key product metric over-performing for one group of users vs. another?), debugging (is some key perf metric spiking for a specific group?), and detailed segment analysis (what's going on with a specific product feature for macOS 14.1.0 users in Seattle?). Doing these types of analyses for users in different experiment groups hasn't really been possible until now.
In our product analytics surface, Metrics Explorer, you can now select any metric and split it out by experiment group. This unlocks many powerful scenarios, such as getting a general sense of how a metric is performing for different groups in an experiment, viewing the long-term effect of an experiment on different groups, or monitoring and debugging the performance of different experiment variants.
Try out this feature by navigating to Metrics Explorer: click the "Metrics" tab in the navigation bar on the left, select the metric you are interested in, add a "Group-By", and select "Experiment Group". Now choose the experiment of interest and see how the metric's performance varies between groups. You can do all the analysis you expect from Metrics Explorer, like adding property filters, changing views (stacked lines, bar charts, etc.), or scoping to a specific event-based cohort.
Weâve started rolling out a new health check on experiments (gates coming soon) to help teams more easily catch any SDK configuration issues that may be impacting experiment assignment.
The new "Group Assignment Health" check surfaces when a high percentage of checks have assignment reasons like "Uninitialized" or "InvalidBootstrap", which might indicate experiment assignment is not configured correctly. You can view an hourly breakdown of assignment reasons via the View Assignment Reasons CTA.
To read more about what each assignment reason means and how to debug, see our docs here.
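The computation behind a check like this can be sketched simply: take the share of exposure checks whose assignment reason is in an "unhealthy" set, and break reasons down by hour. The data, reason strings, and threshold below are illustrative assumptions, not Statsig's actual implementation:

```python
import pandas as pd

# Hypothetical exposure-check log with SDK assignment reasons.
checks = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 10:15", "2024-01-01 10:40",
                          "2024-01-01 11:05", "2024-01-01 11:20"]),
    "reason": ["Network", "Uninitialized", "Network", "InvalidBootstrap"],
})

# Reasons that suggest the SDK was not configured or bootstrapped correctly.
UNHEALTHY = {"Uninitialized", "InvalidBootstrap"}

# Overall share of checks with an unhealthy assignment reason; a health
# check would flag this when it exceeds some threshold.
unhealthy_share = checks["reason"].isin(UNHEALTHY).mean()

# Hourly breakdown of assignment reasons, similar in spirit to the
# "View Assignment Reasons" view.
hourly = (checks.groupby([checks["ts"].dt.floor("h"), "reason"])
                .size().unstack(fill_value=0))
```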
In prep for starting 2024 on the right foot, our team spent the last few weeks of 2023 cleaning up and polishing some of the most loved surfaces of the Statsig Console. We're excited to debut a set of shipped improvements to you today!
Here's what's changing:
We've given the Statsig Home Tab a facelift! A few of the changes we've implemented:
Added a personalized "to-do" list to the top of your feed, enabling you to easily catch up on all the items that need your attention in the Console
Moved the Velocity charts into a side panel; these are still accessible on-demand when you want to understand how your team's velocity is tracking, but aren't as in-your-face every time you log into Statsig
Made the metrics tracker more flexible: now pin any tag (not just "Core") that you're curious about tracking regularly to see those metrics pinned to your sidebar
You may have noticed your Left Nav is looking a little leaner these days: we moved two tabs (Holdouts and Autotune) into the Experiments tab, alongside Experiments and Layers. As we continue to build new experimentation types, we will consolidate them here, under the umbrella "Experiments" tab.
We've unified Account Settings, Project Settings, and Organization Settings into one "Settings" tab, making admin-related tasks easier to handle from one central spot in the Console.
As your team's library of metrics, experiments, and new feature launches grows on Statsig, being able to organize and easily find the entity you want at any given time is crucial. To make this even easier, we've invested in leveling up our filter UX, improving discoverability and usability, as well as exposing operators such as "any of" and "all of" for fields like Tags.