You can now compare conversion funnels across different time periods. Select a comparison range (1, 7, or 28 days ago) to view a side-by-side comparison of each funnel step against the selected period.
This feature allows you to observe how product changes impact user behavior over time. By comparing different periods, you can easily identify trends, assess the effectiveness of recent changes, and make data-driven decisions to improve your funnel strategy.
Time period comparisons are available in all funnel views including Conversion Rates, Time to Convert, and Conversion Rate over Time.
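Conceptually, the comparison pairs each step's conversion rate with the same rate computed over a window shifted back by the chosen offset. Here's a minimal sketch under assumed inputs (plain step-count arrays, not Statsig's internals):

```typescript
// Users reaching each funnel step, in order (e.g. [visited, added to cart, purchased]).
type FunnelCounts = number[];

// Conversion rate of each step relative to the previous step.
function stepConversionRates(counts: FunnelCounts): number[] {
  return counts.slice(1).map((c, i) => (counts[i] > 0 ? c / counts[i] : 0));
}

// Pair each step's current rate with the rate from the comparison period.
function compareFunnels(current: FunnelCounts, comparison: FunnelCounts) {
  const now = stepConversionRates(current);
  const past = stepConversionRates(comparison);
  return now.map((rate, i) => ({ step: i + 1, current: rate, comparison: past[i] }));
}

// e.g. compareFunnels([1000, 400, 120], [900, 300, 100]) // today vs. 7 days ago
```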
You can now analyze distributions for any numerical property on an event. This removes the previous limitation of allowing distribution analysis only on the default “Value” property, giving you the flexibility to explore and visualize distributions across diverse numerical properties such as session length, purchase amount, or any other numerical property associated with specific events.
This refinement allows for a comprehensive view of the distribution’s shape, going beyond specific percentiles like p90. This broader perspective is useful for identifying significant points within the distribution, helping you detect trends, pinpoint anomalies, and address potential issues more effectively.
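To make the idea concrete, here is a minimal sketch (assuming simple event records with numeric properties; illustrative, not Statsig's implementation) of summarizing a property's distribution at several percentile points:

```typescript
type EventRecord = { name: string; properties: Record<string, number> };

// Nearest-rank percentile over a pre-sorted ascending array.
function percentile(sorted: number[], p: number): number {
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.min(sorted.length - 1, Math.max(0, idx))];
}

function distributionSummary(events: EventRecord[], property: string): Record<string, number> {
  const values = events
    .map((e) => e.properties[property])
    .filter((v) => Number.isFinite(v))
    .sort((a, b) => a - b);
  const summary: Record<string, number> = {};
  // Sampling several percentiles shows the distribution's shape, not just p90.
  for (const p of [5, 25, 50, 75, 90, 99]) summary[`p${p}`] = percentile(values, p);
  return summary;
}
```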
We’ve made some quality of life improvements to the Time to Convert view in Funnel charts.
We now indicate where the median time to convert falls within the distribution.
We now support custom configuration of the conversion time window to examine. You can adjust our automatically configured distribution chart by defining a time window to examine, bounding it with a minimum and maximum conversion time. You can also set the granularity of your analysis by selecting an interval size.
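In effect, the chart buckets conversion times into a bounded, fixed-interval histogram. Here's a rough sketch of that bucketing with hypothetical parameter names (the real controls live in the chart UI):

```typescript
// Bucket conversion times into a histogram bounded by [minSec, maxSec),
// with one bucket per intervalSec. Parameter names are hypothetical.
function bucketConversionTimes(
  timesSec: number[],
  minSec: number,      // minimum conversion time to examine
  maxSec: number,      // maximum conversion time to examine
  intervalSec: number  // granularity of the analysis
): number[] {
  const buckets = new Array(Math.ceil((maxSec - minSec) / intervalSec)).fill(0);
  for (const t of timesSec) {
    if (t < minSec || t >= maxSec) continue; // outside the configured window
    buckets[Math.floor((t - minSec) / intervalSec)] += 1;
  }
  return buckets;
}

// e.g. examine 5 minutes to 24 hours in 30-minute buckets:
// bucketConversionTimes(times, 5 * 60, 24 * 3600, 30 * 60);
```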
Together, these quality of life improvements make it easier to understand the distribution of times it takes to convert through funnels, and to zoom in on specific areas of that distribution for a more granular understanding.
We’re excited to announce a new feature that makes it easier to understand metrics in context. You can now view metrics broken down by (grouped by) an event property, expressed as a percentage of the total metric value, available in both bar charts and time series line charts.
This update allows you to quickly gauge the proportionate impact of different segments or categories within your overall metrics. For instance, you can now see what percentage of total sales each product category represents over time, or what portion of total user sessions specific events constitute.
By presenting data in percentages, this feature simplifies comparative analysis and helps you focus on the relative significance of different data segments.
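The underlying arithmetic is simple: each group's value divided by the sum across groups. A minimal sketch, assuming a plain map of metric values per group (not Statsig's internals):

```typescript
// Express each group's metric value as a percentage of the total across groups.
function percentOfTotal(byGroup: Record<string, number>): Record<string, number> {
  const total = Object.values(byGroup).reduce((sum, v) => sum + v, 0);
  const out: Record<string, number> = {};
  for (const [group, value] of Object.entries(byGroup)) {
    out[group] = total > 0 ? (value / total) * 100 : 0;
  }
  return out;
}

// e.g. sales by product category:
// percentOfTotal({ electronics: 500, apparel: 300, home: 200 })
// => { electronics: 50, apparel: 30, home: 20 }
```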
We're excited to launch Pause Assignment, a new decision type for experiments in Statsig. This feature allows you to halt new user enrollments in an experiment while continuing to analyze the results for previously exposed users.
Example Use Case
Pause Assignment offers flexibility in various scenarios. For instance, consider an e-commerce company running an experiment to measure the impact of discounts on repeat purchases. With a limited budget, you may need to cap the number of discount offers. However, measuring long-term effects on repeat purchases requires ongoing analysis for an extended period. Pause Assignment addresses this challenge by allowing you to stop user enrollment once you've reached your budget limit while maintaining result analysis to assess the impact on repeat purchases.
Implementation in Statsig
To implement Pause Assignment, simply select it from the Make Decision dropdown in your experiment interface. Note: this feature requires Persistent Assignment to be configured.
For detailed information on Pause Assignment, please consult our Statsig Docs. We value your input and encourage you to share your feedback and suggestions for future enhancements.
Statsig users can now turn their email notifications on or off in the Statsig console settings. Simply go to the My Account page in the Project settings and update your preferences under the Notifications tab.
This is especially useful for teams who use Statsig very frequently and might want to turn off specific categories of emails to manage their inbox.
We hope this helps you reduce clutter in your inbox while still allowing you to stay on top of the important aspects of your projects in Statsig. As always, we welcome your feedback and suggestions for further improvements.
A common problem in experimentation is connecting different user identifiers before or after some event boundary, most frequently signup. This often involves running an experiment where the unit of analysis is a logged-out identifier, but the evaluation criterion is a logged-in metric (e.g., subscription rate or estimated lifetime value).
Statsig has upgraded our ID resolution solution to handle multiple IDs attached to one user. In addition to the strict 1:1 mapping we already support, we now offer first-touch mapping (sketched after the list below) to handle 1:many, many:1, and many:many mappings between different ID types. This is extremely flexible and enables use cases like:
handling logins across multiple devices
mapping users to metrics from different profiles or owned entities
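As an illustration, first-touch mapping can be thought of as keeping the earliest logged-in ID ever observed alongside each anonymous ID. This is a sketch under assumed data shapes, not Statsig Warehouse Native's actual implementation:

```typescript
type IdPair = { anonymousId: string; userId: string; seenAt: number };

// First-touch: each anonymous ID resolves to the earliest logged-in ID it was
// ever seen with, even when many IDs map to many IDs.
function firstTouchResolution(pairs: IdPair[]): Map<string, string> {
  const firstSeen = new Map<string, { userId: string; seenAt: number }>();
  for (const p of pairs) {
    const existing = firstSeen.get(p.anonymousId);
    if (!existing || p.seenAt < existing.seenAt) {
      firstSeen.set(p.anonymousId, { userId: p.userId, seenAt: p.seenAt });
    }
  }
  const resolved = new Map<string, string>();
  for (const [anon, { userId }] of firstSeen) resolved.set(anon, userId);
  return resolved;
}
```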
For more information about this feature, check out the documentation. This option is available to all Statsig Warehouse Native experimenters!
We’ve introduced Global Dashboard Filters, a new feature that allows you to apply property filters across all charts on your dashboard at once. This makes it easier to scope your analysis to specific properties and other criteria, with the results reflected across every chart.
With Global Dashboard Filters, you can efficiently narrow down your analysis to explore insights from different angles, ensuring that each chart on your dashboard provides a consistent and relevant view of the data. This feature simplifies your workflow and helps you focus on the most important aspects of your analysis.
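Conceptually, a global filter is appended to every chart's own filters before the queries run. A minimal sketch with hypothetical types (not Statsig's internals):

```typescript
type Filter = { property: string; op: "eq" | "neq"; value: string };
type ChartQuery = { metric: string; filters: Filter[] };

// Each chart keeps its own filters and additionally inherits the global ones.
function applyGlobalFilters(charts: ChartQuery[], globalFilters: Filter[]): ChartQuery[] {
  return charts.map((chart) => ({ ...chart, filters: [...chart.filters, ...globalFilters] }));
}

// e.g. scope every chart on a dashboard to one country:
// applyGlobalFilters(dashboardCharts, [{ property: "country", op: "eq", value: "US" }]);
```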
The Knowledge Base (KB) acts as a searchable repository of experiment learnings across teams. It helps you find shipped, healthy experiments, gain context on past efforts, and generate ideas on new things to try.
It makes it easy for new teammates to explore and find the experiments a team ran, or to see where a topic was mentioned. Our meta-analysis tools offer more structured ways to discover and look across your experiment corpus, but when you want free-text search, this is the place to go.
(We’ve also updated the Experiment Timeline view.)
The "batting average" view lets you look at how easy or hard a metric is to move. You can filter to a set of shipped experiments and see how many experiments moved a metric by 1% vs 10%. Like with other meta-analysis views, you can filter down to a team, a tag or even if results were statistically significant.
Common ways to use this include:
Sniff-testing the claim that the next experiment will move this metric by 15%.
Establishing reasonable goals based on your past ability to move this metric.
This view now features summary stats (e.g., how many experiments shipped control) so you don't have to manually tally them yourself.