We introduced a clearer, more flexible set of privacy controls for Session Replay. You can choose a baseline privacy configuration and refine it with element-level rules. This makes it easier to align replay collection with your organization's privacy requirements while preserving useful context for analysis.
Select one of three baseline privacy options that define how text and inputs are handled by default.
Apply CSS selector rules to mask, unmask, or fully block specific elements.
Manage all replay privacy settings from a single place in the console.
You begin by choosing a baseline privacy option in the Statsig Console UI. This sets the default masking behavior for all session replays.
Baseline privacy options:
Passwords (Default): Only password inputs are replaced with asterisks (*). All other text and inputs are shown as is.
Inputs: All text inside input fields is replaced with asterisks (*). All other text is shown as is.
Maximum: All text and all inputs are replaced with asterisks (*).
After selecting a baseline, you can add CSS selector rules to override it for specific elements. Selector rules follow a strict precedence order: Block, then Mask, then Unmask. Password inputs are always masked and cannot be unmasked.
Blocking removes an element entirely from the replay and replaces it with a black placeholder of the same size. Masking replaces text with asterisks. Unmasking reveals text that would otherwise be masked by the baseline setting.
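For example, with an "Inputs" baseline you might layer rules like the following. This is an illustrative sketch only: the selectors are hypothetical, and in practice these rules are configured in the Console UI rather than as a code object.

```typescript
// Illustrative only: selectors are hypothetical, and these rules live in the
// Console UI, not in code. Precedence is Block > Mask > Unmask, so an element
// matching both a Block rule and an Unmask rule is still blocked.
const replayPrivacyRules = {
  baseline: "inputs",                        // mask text in all input fields
  block: [".account-balance", "#ssn-field"], // black placeholder, same size
  mask: ["[data-pii]", ".user-email"],       // text replaced with asterisks
  unmask: [".search-box"],                   // safe input revealed for debugging
};
```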
All settings are configured in the Statsig Console under Project Settings → Analytics & Session Replay and require project admin permissions.
These controls let you confidently use Session Replay in privacy-sensitive environments. You can protect PII by default while selectively revealing safe UI elements for debugging, without sacrificing visibility into user behavior.
Longer-running queries no longer block your workflow. Metrics Explorer now supports background queries, giving you a dedicated experience for queries that take longer to complete.
Kick off queries that continue running even after you close the tab or switch tasks.
Avoid timeouts for complex or large queries.
Track queries that are still running from a central, visible place.
When Metrics Explorer detects that a query will exceed typical execution time, it automatically runs it in the background. You are free to navigate away or close the browser without interrupting execution. In-progress background queries appear in Metrics Explorer under the breadcrumb menu at the top, where their status is clearly labeled.
You can confidently run heavier queries without worrying about timeouts or keeping a tab open. This makes it easier to explore larger datasets, iterate on complex funnels, and parallelize analysis with other work.
We're excited to announce the Count Distinct metric type for Statsig Cloud. Count Distinct lets you measure unique entities, such as transactions, devices, or page views, by counting unique values across the experiment window.
The Count Distinct metric is a sketch-based metric that relies on the probabilistic HyperLogLog++ (HLL++) algorithm. We chose this approximation method to optimize for efficiency and speed. Visit our docs page to learn more about our implementation.
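To give a feel for how sketch-based counting works, here is a minimal classic HyperLogLog in TypeScript. This is an illustrative sketch only, not Statsig's implementation: production HLL++ adds bias correction, a sparse representation for small cardinalities, and a 64-bit hash.

```typescript
// Minimal classic HyperLogLog, for illustration only. Statsig's production
// metric uses HLL++, which improves on this with bias correction and a
// sparse mode for small counts.
class HyperLogLog {
  private readonly p: number;   // precision: m = 2^p registers
  private registers: Uint8Array;

  constructor(p = 14) {
    this.p = p;
    this.registers = new Uint8Array(1 << p); // ~16 KB at p = 14
  }

  // 32-bit FNV-1a hash; any well-mixed hash works for the sketch.
  private hash(value: string): number {
    let h = 0x811c9dc5;
    for (let i = 0; i < value.length; i++) {
      h ^= value.charCodeAt(i);
      h = Math.imul(h, 0x01000193);
    }
    return h >>> 0;
  }

  add(value: string): void {
    const h = this.hash(value);
    const idx = h >>> (32 - this.p);      // first p bits pick a register
    const rest = (h << this.p) >>> 0;     // remaining 32 - p bits
    const rank = rest === 0 ? 32 - this.p + 1 : Math.clz32(rest) + 1;
    if (rank > this.registers[idx]) this.registers[idx] = rank; // keep max
  }

  // Estimate distinct count: harmonic mean of register values, scaled.
  count(): number {
    const m = this.registers.length;
    let sum = 0;
    for (const r of this.registers) sum += 2 ** -r;
    const alpha = 0.7213 / (1 + 1.079 / m); // bias constant for large m
    return Math.round((alpha * m * m) / sum);
  }
}

const hll = new HyperLogLog();
for (let i = 0; i < 100_000; i++) hll.add(`user_${i}`);
console.log(hll.count()); // ≈ 100,000, typically within ~1%
```

The key property is that a sketch uses a fixed, small amount of memory (about 16 KB here) no matter how many values it sees, which is what makes this approach fast at scale.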
It's available starting today.
Statsig WHN customers can now explore experiment results in greater detail by grouping results by up to three user properties.

With Multi-Dimensional Group By, you can break down results by combinations like country × device × plan all in one view.
This expands your ability to find insights and data trends at a finer granularity.
Try it now in the Explore tab of your experiments.
We know writing an experimentation report is everyone's favorite activity. For those who disagree, it just got easier with Statsig.
With AI-Powered Experiment Summaries, Statsig automatically turns your experiment data into a clear, human-readable summary. The AI captures the purpose of the experiment, makes a ship recommendation, and highlights the most relevant metric observations.

This feature is available now for all Statsig customers. To enable it, go to Settings → Statsig AI → AI Summary.
For our WHN customers, we're excited to announce the beta launch of our new AI-powered search capability.
With AI-Powered Search, you can now search your repository of experiments using natural language. Try asking questions like "find me experiments that impacted retention the most" or "have we tested using AI for support". The search then returns the three best-matching experiments for your question.

If you're interested in becoming an early tester of this feature, please reach out to us via Slack or through your account manager!
Bring your code context to automatically generate human-readable descriptions for feature gates, experiments, metrics, and events. Prerequisite: this feature is powered by Statsig's new GitHub AI Integration.
We have observed that many Statsig users leave entity descriptions (feature gates, metrics, experiments, etc.) empty, even though they know good descriptions are invaluable for understanding an entity's purpose.
Statsig AI can understand the meaning behind each feature gate, experiment, event, and metric from the code references that power these entities. As a result:
Anyone viewing a gate or experiment can quickly understand what it does and why it exists
In Metrics Explorer, users can see the semantic meaning of events and metrics, not just raw names
This dramatically improves self-serve understanding for PMs, engineers, and new team members.
If your description is empty, Statsig automatically pre-fills the description field with AI-generated context.
If you already have a description, Statsig shows an AI suggestion that may be richer in context.
We've launched a new GitHub AI Integration that connects Statsig directly to your codebase. This is a foundational capability that powers a growing set of AI features across the Statsig console.

Through the GitHub connection, Statsig understands where flags, experiments, and metrics live in your code. We then build a Knowledge Graph that maps the relationships between code and Statsig entities. Once GitHub is connected, Statsig becomes "code-aware." This unlocks workflows that weren't possible before, tying product insights directly back to the code that shipped them:
Contextual Descriptions: AI-generated descriptions that understand code context behind each metric, flag, and experiment. Quickly understand what each entity in Statsig does and why it exists.
Stale Gate Cleanup: One-click workflow to generate PRs that remove stale feature flags, directly from the Statsig console.
Metrics Explorer Co-pilot (Coming Soon): Describe what you want to analyze in plain English and Statsig creates a chart with the right events, metrics, and breakdowns.
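To make "code-aware" concrete, here is a hypothetical sketch of what an edge in such a knowledge graph might carry. The type names and fields below are illustrative assumptions, not Statsig's actual schema.

```typescript
// Hypothetical shape of a knowledge-graph edge; not Statsig's actual schema.
type EntityKind = "feature_gate" | "experiment" | "metric" | "event";

interface CodeReference {
  repo: string;     // e.g. "acme/web-app"
  path: string;     // file where the entity is referenced
  line: number;
  snippet: string;  // surrounding code, usable as LLM context
}

interface KnowledgeGraphEdge {
  entityKind: EntityKind;
  entityName: string;          // e.g. "new_checkout_flow"
  references: CodeReference[]; // everywhere the entity appears in code
}
```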
To use this integration, navigate to Settings → Integrations → GitHub App in your Statsig console. Authorize with your GitHub credentials and install the app on your desired repos; you may need to ask your GitHub org admin to install the Statsig app to use the features above. Read more here: https://docs.statsig.com/integrations/github-ai-integration
Until now, Statsig only detected feature gates that were no longer active and marked them "stale." With the new GitHub AI Integration, you can generate a pull request that removes the dead code, directly from the Statsig UI, in one click.
Cleaning up dead flags is usually painful and gets deprioritized. This turns it into a one-click workflow:
Click "Remove from code"
Review the generated PR
Approve and merge
Teams reduce flag debt without risky manual cleanup.
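As a simplified illustration of the kind of change such a PR might make: the gate name and rendering helpers below are hypothetical, and the call is written in the typical shape of a Statsig server SDK gate check.

```typescript
// Before cleanup: "new_checkout_flow" has been fully launched for months,
// so the gate always passes and the else-branch is dead code.
if (statsig.checkGate(user, "new_checkout_flow")) {
  renderNewCheckout();
} else {
  renderLegacyCheckout(); // never reached anymore
}

// After merging the generated PR: the gate check and dead branch are removed.
renderNewCheckout();
```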
Connect Statsig to your GitHub org account to enable AI-powered stale gate code removal.
We've expanded Global Dashboard Filters so you can filter a dashboard using ID List-based Segments. This is an additional filter option; existing global filtering (property filters and other criteria) continues to work the same way.
Apply a Segment filter at the dashboard level where the segment is defined by an ID list
Combine an ID List Segment filter with your existing global property filters
Keep every chart on the dashboard scoped to the same audience without reapplying filters chart-by-chart
Create or select an ID List-based Segment (a segment defined by a fixed list of IDs, like user IDs, account IDs, or device IDs)
In your dashboard's Global Dashboard Filters, choose that segment as a filter
The segment filter applies to all charts on the dashboard, alongside any other global filters you've set
Example: set the global filter to the segment "Enterprise accounts (ID list)" to ensure every chart reflects only those accounts.
Use dashboards to answer questions about a specific, known set of users or accounts (for example, a customer list, beta cohort, or internal test group)
Reduce chart-to-chart inconsistencies caused by manually recreating the same ID-based audience filter
Iterate faster when you need to swap the audience across the entire dashboard (for example, compare two different customer lists)