We're adding the ability to log-transform sum and count metrics, and measure the average change to unit-level logged values.
Log transforms are useful when you want to understand whether user behavior has generally changed. If a metric is very head-driven, even with winsorization and CUPED, the metric movement will generally be driven by power users.
Logs turn multiplicative changes into additive ones, so a user going from spending $1.00 to $1.10 contributes the same "metric lift" as another going from $100 to $110. This means log metrics measure something closer to shifts in the relative distribution, rather than topline value.
Because of this divorce from "business value," log metrics are usually not good evaluation criteria for ship decisions, but alongside evaluation metrics, they can easily provide rich context on the change in the distribution of your population.
By default, the transform is the Natural Log, but you can specify a custom base if desired.
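For intuition, here is a minimal sketch (the log_delta helper is ours, not a Statsig API) of why equal relative changes produce equal logged lifts, including with a custom base:

```python
import math

def log_delta(before: float, after: float, base: float = math.e) -> float:
    """Change in the logged value; equal for equal relative (percentage) changes."""
    return math.log(after, base) - math.log(before, base)

# A $1.00 -> $1.10 user and a $100 -> $110 user produce the same logged lift.
print(round(log_delta(1.00, 1.10), 4))        # 0.0953
print(round(log_delta(100.0, 110.0), 4))      # 0.0953
print(round(log_delta(100.0, 110.0, 10), 4))  # 0.0414, same idea with a custom base
```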
Learn more in our documentation.
We launched latest value metrics for user Statuses in March this year, and just extended support to numerical metrics. This will be useful for teams that want to track how experiments impact the "state" of their userbase.
You could already track subscription status, but now you can also track users' current balance, lifetime spend, or LTV, without duplicating the data across multiple days. Each day in the pulse time series will reflect the latest value as of that day.
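As a rough sketch of the concept, assuming a hypothetical per-user observation table (not Statsig's actual schema), a latest-value series simply carries each user's most recent observation forward to every later day:

```python
import pandas as pd

# Hypothetical, sparse per-user balance observations (only days with an update).
obs = pd.DataFrame({
    "user_id": ["u1", "u1", "u2"],
    "date": pd.to_datetime(["2024-10-01", "2024-10-03", "2024-10-02"]),
    "balance": [50.0, 75.0, 20.0],
})

# One column per user, one row per day; carry the latest known value forward.
daily = (
    obs.pivot(index="date", columns="user_id", values="balance")
       .resample("D").last()
       .ffill()
)
print(daily)  # each day reflects the latest balance as of that day
```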
Learn more in our documentation.
Cohort analysis is now supported across all chart types in Metrics Explorer. Previously available only in drilldown charts, this feature allows you to filter your analysis to specific user cohorts or compare how different groups perform against various metrics.
Filtering to an interesting cohort is supported across all chart types and can be accomplished by adding a single cohort to your analysis. Cohort comparison is available in metric drilldown, funnel, and retention charts and can be accomplished by adding multiple cohorts to your analysis.
What's New
Expanded Support: Cohort filtering is now integrated into funnels, retention charts, user journeys, and distribution charts.
Detailed Comparisons: You can compare how different cohorts, such as casual users and power users, navigate through funnels like the add-to-cart flow.
Focused Analysis: Easily scope your analysis to understand how specific user groups perform, helping you identify patterns and behaviors unique to each cohort.
Expanded support for cohort analysis will begin rolling out today.
We've added group-by functionality to retention charts, enabling you to break down your retention analysis by various properties and gain deeper insights into user behavior. This feature allows you to segment your retention data across event properties, user properties, feature gate groups, and experiment variants.
Group-By in retention charts is available for:
Event and User Properties: Break down retention based on event and user properties such as location, company, or other context about an event or feature.
Feature Gate Groups: Understand retention among different user groups gated by feature flags.
Experiment Variants: Compare retention across experiment groups to see how different variants impact user retention.
Expanded support for group-by in retention charts is rolling out today.
Funnels in Metrics Explorer now complete in half the time, so you spend less time waiting and more time analyzing your data. With faster results, you can iterate more quickly, explore user behaviors efficiently, and make timely, data-driven decisions.
We've tripled the maximum number of steps allowed in funnels from 5 to 15. This change allows you to build more detailed funnels that capture longer and more complex user journeys. With up to 15 steps, you can analyze extended sequences of user actions, gain deeper insights into user behavior, and identify opportunities to optimize each stage of your funnel.
We've updated how Statsig processes events received from Segment to help you gain deeper insights without additional effort on your part. Now, when you send events from Segment into Statsig, we automatically extract and include extra properties such as UTM parameters, referrer information, and page details like URL, title, search parameters, and path.
By leveraging data you're already collecting with Segment, you can:
Gain More Value Without Extra Work: Utilize the enriched data immediately, increasing the context available for your analysis without any additional implementation.
Analyze Marketing Campaigns More Effectively: Filter events by specific UTM parameters to assess which marketing campaigns drive the most engagement or conversions.
Understand User Acquisition Channels: Use referrer information to see where your users are coming from, helping you optimize outreach and partnerships.
Dive Deeper into User Behavior: Examine page-level details to understand how users interact with different parts of your site or app, allowing you to identify areas that perform well or need improvement.
These improvements make it easier to perform detailed analyses in Metrics Explorer, enabling you to make informed decisions based on comprehensive event data, all from the data you're already sending through Segment.
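To make the enrichment concrete, here is a hypothetical sketch of the kind of extraction described above; the helper and output field names are illustrative and assume a Segment-style context.page payload, not Statsig's actual pipeline:

```python
from urllib.parse import urlparse, parse_qs

def enrich_segment_event(event: dict) -> dict:
    """Illustrative: pull UTM, referrer, and page details out of a Segment-style event."""
    page = event.get("context", {}).get("page", {})
    url = page.get("url", "")
    parsed = urlparse(url)
    query = parse_qs(parsed.query)

    extracted = {
        "page_url": url,
        "page_title": page.get("title"),
        "page_path": parsed.path,
        "page_search": parsed.query,
        "referrer": page.get("referrer"),
        # UTM parameters, if present in the query string
        **{k: v[0] for k, v in query.items() if k.startswith("utm_")},
    }
    return {**event.get("properties", {}), **extracted}

example = {
    "event": "Signup Clicked",
    "properties": {"plan": "pro"},
    "context": {"page": {
        "url": "https://example.com/pricing?utm_source=newsletter&utm_campaign=fall",
        "title": "Pricing",
        "referrer": "https://google.com",
    }},
}
print(enrich_segment_event(example))
```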
We have introduced Cohort Analysis to our funnel feature, allowing you to filter your funnel analysis to specific cohorts or compare how different cohorts progress through the same funnel.
Filter Funnels by Cohort
You can now focus your funnel analysis on specific user cohorts. This means you can examine how particular groups (like new users, users from a specific marketing campaign, or users who have completed a certain action) navigate through your funnels. Filtering by cohort helps you identify unique behaviors and patterns within these groups, enabling you to tailor your strategies to improve their experience.
Compare Conversion Across Cohorts
In addition to filtering, you can compare how different cohorts convert across the same funnel. This comparative view lets you see how various segments of your user base perform relative to each other. For example, you might compare first-time users to returning users, users from different geographic regions, or users acquired during different time periods. Understanding these differences can inform targeted improvements and highlight areas where certain cohorts may need additional support.
We're excited to announce a powerful new addition to Statsig's feature management capabilities: the Cross-Environment Feature Gate View. This new view provides DevOps teams, SREs, and Release Managers with unprecedented visibility into feature gate states across all environments from a single, unified interface.
Comprehensive grid view showing all feature gates and their states across Dev, Staging, and Production environments
At-a-glance status indicators and gate checks for quick state verification
Simplified Operations: Eliminate the need to navigate between different environments to check gate states
Enhanced Release Management: Quickly verify feature gate configurations across your deployment pipeline
Improved Collaboration: Give platform teams and operations staff the high-level view they need for effective feature management
Risk Reduction: Easily spot inconsistencies in gate states across environments before they become significant issues
You can turn this view on by clicking the toggle at the top right of the feature gates list page. Ready to get started? Let us know if you have any feedback on this feature.
We're reaching out to give you a heads-up about an important change we are making to the auto-generated event_dau metric for Cloud customers in the Statsig Console.
Note: Customers on Statsig Warehouse Native will not be impacted.
In two weeks, from Wednesday, October 16, 2024 onwards, we plan to stop auto-generating new event_dau metrics for incoming events in Statsig. We will continue to auto-generate an event_count metric for each logged event, as we do today.
Any existing event_dau metrics that have been used in a gate, experiment, dashboard, or other Custom Metrics will NOT be affected by this change.
Existing event_dau metrics that have been archived or have not been used in another config will no longer exist in the project. See "Next steps" below if you want to retain the unused metrics.
Going forward, new event_dau metrics will need to be created manually as a Custom Metric. See this guide to learn how to create a DAU metric.
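For context on what such a Custom Metric computes, here is a minimal sketch of the DAU concept (illustrative pandas over a hypothetical event log, not Statsig's metric configuration):

```python
import pandas as pd

# Hypothetical raw event log: one row per logged event.
events = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u3", "u1"],
    "event": ["purchase"] * 5,
    "ts": pd.to_datetime([
        "2024-10-16 08:00", "2024-10-16 12:00", "2024-10-16 09:30",
        "2024-10-17 10:00", "2024-10-17 11:00",
    ]),
})

# DAU for the "purchase" event: distinct users per day.
purchases = events[events["event"] == "purchase"].copy()
purchases["date"] = purchases["ts"].dt.date
dau = purchases.groupby("date")["user_id"].nunique()
print(dau)  # 2024-10-16 -> 2, 2024-10-17 -> 2
```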
We will be making this change on October 16, 2024. If you have any questions or concerns, please donât hesitate to reach out!
Historically, we have automatically generated an event_count and an event_dau metric for every incoming event into Statsig. After working closely with hundreds of customers, we have seen that auto-generating two metrics for every event leads to confusion and clutter inside Statsig projects. The proposed change will lead to a cleaner Metrics Catalog and faster Console performance, while still retaining your ability to create event_dau metrics for the events you care about most.
If you wish to keep any unused event_dau metrics going forward, you can earmark them by performing any of the actions below:
Adding a Tag (RECOMMENDED)
Adding a description
Referencing in a gate/experiment/dashboard
These actions will mark your unused metrics as active, signaling to us that you don't want them to be deprecated.