Product Updates

We ship fast to help you ship faster
Akin Olugbade
Product Manager, Statsig
1/22/2026
Permalink ›

🔐 Clear, Flexible Privacy for Session Replay

Overview

We introduced a clearer, more flexible set of privacy controls for Session Replay. You can choose a baseline privacy configuration and refine it with element-level rules. This makes it easier to align replay collection with your organization’s privacy requirements while preserving useful context for analysis.

What You Can Do Now

  • Select one of three baseline privacy options that define how text and inputs are handled by default.

  • Apply CSS selector rules to mask, unmask, or fully block specific elements.

  • Manage all replay privacy settings from a single place in the console.

How It Works

You begin by choosing a baseline privacy option in the Statsig Console UI. This sets the default masking behavior for all session replays.

Baseline privacy options:

  • Passwords (Default): Only password inputs are replaced with asterisks (*). All other text and inputs are shown as is.

  • Inputs: All text inside input fields is replaced with asterisks (*). All other text is shown as is.

  • Maximum: All text and all inputs are replaced with asterisks (*).

After selecting a baseline, you can add CSS selector rules to override it for specific elements. Selector rules follow a strict precedence order: Block, then Mask, then Unmask. Password inputs are always masked and cannot be unmasked.

Blocking removes an element entirely from the replay and replaces it with a black placeholder of the same size. Masking replaces text with asterisks. Unmasking reveals text that would otherwise be masked by the baseline setting.
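To make the precedence concrete, here is a minimal TypeScript sketch of how a recorder could resolve the effective treatment for a DOM element. This is illustrative only, not Statsig's SDK API; the rule shape and helper name are ours.

```typescript
type Treatment = "block" | "mask" | "unmask" | "baseline";

interface SelectorRule {
  selector: string; // CSS selector, e.g. ".card-number"
  treatment: "block" | "mask" | "unmask";
}

// Resolve one element's treatment: Block beats Mask beats Unmask, and
// anything unmatched falls back to the baseline privacy option.
// Password inputs are always masked, regardless of rules.
function resolveTreatment(el: Element, rules: SelectorRule[]): Treatment {
  if (el instanceof HTMLInputElement && el.type === "password") return "mask";
  const matched = rules.filter((r) => el.matches(r.selector));
  if (matched.some((r) => r.treatment === "block")) return "block";
  if (matched.some((r) => r.treatment === "mask")) return "mask";
  if (matched.some((r) => r.treatment === "unmask")) return "unmask";
  return "baseline";
}
```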

All settings are configured in the Statsig Console under Project Settings → Analytics & Session Replay and require project admin permissions.

Impact on Your Analysis

These controls let you confidently use Session Replay in privacy-sensitive environments. You can protect PII by default while selectively revealing safe UI elements for debugging, without sacrificing visibility into user behavior.

Akin Olugbade
Product Manager, Statsig
1/22/2026
Permalink ›

⏱️ Background Queries in Metrics Explorer

Longer-running queries no longer block your workflow. Metrics Explorer now supports background queries, giving you a dedicated experience for queries that take longer to complete.

What You Can Do Now

  • Kick off queries that continue running even after you close the tab or switch tasks.

  • Avoid timeouts for complex or large queries.

  • Track queries that are still running from a central, visible place.

How It Works

When Metrics Explorer detects that a query will exceed typical execution time, it automatically runs it in the background. You are free to navigate away or close the browser without interrupting execution. In-progress background queries appear in Metrics Explorer under the breadcrumb menu at the top, where their status is clearly labeled.
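Conceptually, this is a submit-then-poll lifecycle: the query runs server-side against a handle, so the client can disappear and come back. The sketch below is a hypothetical TypeScript illustration; the endpoints and response shapes are invented, not Statsig's API.

```typescript
// Hypothetical shape of a background-query lifecycle (illustrative only).
type QueryStatus = "running" | "complete" | "failed";

interface BackgroundQuery {
  id: string;
  status: QueryStatus;
  resultUrl?: string; // populated once the query completes
}

async function runInBackground(sql: string): Promise<BackgroundQuery> {
  // Submit returns immediately with a handle; execution continues
  // server-side, so closing the tab does not cancel the query.
  const res = await fetch("/api/queries", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sql }),
  });
  return res.json();
}

async function pollUntilDone(id: string): Promise<BackgroundQuery> {
  for (;;) {
    const q: BackgroundQuery = await (await fetch(`/api/queries/${id}`)).json();
    if (q.status !== "running") return q;
    await new Promise((r) => setTimeout(r, 5_000)); // check every 5s
  }
}
```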

Impact on Your Analysis

You can confidently run heavier queries without worrying about timeouts or keeping a tab open. This makes it easier to explore larger datasets, iterate on complex funnels, and parallelize analysis with other work.

Kaz Haruna
Product Manager, Statsig
1/16/2026
Permalink ›

📏 Count Distinct Metric in Cloud

We’re excited to announce the Count Distinct metric type for Statsig Cloud. Count Distinct lets you measure unique entities (like transactions, devices, or page views) by counting unique values across the experiment window.

The Count Distinct metric is a sketch-based metric that relies on the probabilistic HyperLogLog++ (HLL++) algorithm. We chose this approximation method to optimize for efficiency and speed. Visit our docs page to learn more about our implementation.
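For intuition, here is a compact TypeScript sketch of plain HyperLogLog, the scheme HLL++ builds on (HLL++ adds bias correction and a sparse representation). It shows why the sketch is small and fast: each value is hashed once into a fixed array of registers, so memory stays constant no matter how many values you add.

```typescript
// Simple 32-bit FNV-1a hash; a production sketch would use a stronger hash.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

class HyperLogLog {
  private registers: Uint8Array;
  constructor(private p: number = 12) {
    this.registers = new Uint8Array(1 << p); // 2^12 = 4096 registers
  }
  add(value: string): void {
    const h = fnv1a(value);
    const idx = h >>> (32 - this.p);  // first p bits pick a register
    const rest = (h << this.p) >>> 0; // remaining bits
    const rank = rest === 0 ? 32 - this.p + 1 : Math.clz32(rest) + 1;
    this.registers[idx] = Math.max(this.registers[idx], rank);
  }
  estimate(): number {
    const m = this.registers.length;
    let sum = 0;
    for (const r of this.registers) sum += 2 ** -r;
    const alpha = 0.7213 / (1 + 1.079 / m); // standard HLL constant
    return (alpha * m * m) / sum;           // raw estimate, no corrections
  }
}

const hll = new HyperLogLog();
for (let i = 0; i < 100_000; i++) hll.add(`user_${i % 25_000}`); // 25k uniques
console.log(Math.round(hll.estimate())); // ≈ 25,000, typically within ~1-2%
```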

It’s available starting today.

Kaz Haruna
Product Manager, Statsig
1/13/2026
Permalink ›

🗺️ Explore Multi-Dimension Group By

Statsig Warehouse Native (WHN) customers can now explore experiment results in greater detail by grouping them by up to three user properties.

[Screenshot: multi-dimension group by in a custom query]

With Multi-Dimensional Group By, you can break down results by combinations like country × device × plan, all in one view.

This makes it easier to surface insights and trends at a finer granularity.
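Under the hood, the idea is just aggregation over a composite key. A minimal TypeScript illustration (the property names here are hypothetical):

```typescript
// Illustrative only: how results roll up under a composite key of up to
// three user properties.
interface ExposureRow {
  country: string;
  device: string;
  plan: string;
}

function groupByThreeDims(rows: ExposureRow[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const r of rows) {
    const key = `${r.country} | ${r.device} | ${r.plan}`; // composite group key
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```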

Try it now in the Explore tab of your experiments.

Kaz Haruna
Product Manager, Statsig
12/19/2025
Permalink ›

🚀 AI-Powered Experiment Summary

We know writing an experimentation report is everyone’s favorite activity. For those who’d rather skip it, it just got easier with Statsig.

With AI-Powered Experiment Summaries, Statsig automatically turns your experiment data into a clear, human-readable summary. The AI summary captures the purpose of the experiment, makes a ship recommendation, and highlights the most relevant metric observations.

[Screenshot: AI-generated experiment summary]

This feature is available now for all Statsig customers. To enable it, go to Settings → Statsig AI → AI Summary.

12/19/2025
Permalink ›

🕵️ AI Knowledge Bank Search [Beta]

For our WHN customers, we’re excited to announce the beta launch of AI Knowledge Bank Search, an AI-powered search capability.

With AI-Powered Search, you can now search your repository of experiments using natural language. Try asking questions like “find me experiments that impacted retention the most” or “have we tested using AI for support”. The new search feature then returns the three experiments that best match your question.

[Screenshot: AI Knowledge Bank Search]

If you’re interested in becoming an early tester of this feature, please reach out to us via Slack or through your account manager!

Shubham Singhal
Product Manager, Statsig
12/17/2025
Permalink ›

Contextual AI Descriptions

Bring your code context to Statsig to automatically generate human-readable descriptions for feature gates, experiments, metrics, and events. Prerequisite: this feature is powered by Statsig's new GitHub AI Integration.

Why this is valuable

We have observed that many Statsig users leave descriptions empty for their entities (feature gates, metrics, experiments, etc.), even though good descriptions are extremely useful for understanding an entity's purpose.

Statsig AI can understand the meaning behind each feature gate, experiment, event, and metric from the code references that power these entities. As a result:

  • Anyone viewing a gate or experiment can quickly understand what it does and why it exists

  • In Metrics Explorer, users can see the semantic meaning of events and metrics, not just raw names

This dramatically improves self-serve understanding for PMs, engineers, and new team members.

How this works

  • If your description is empty, Statsig will automatically pre-fill it with AI-generated context.

  • If you already have a description, Statsig will show an AI suggestion that is richer in context.

Shubham Singhal
Product Manager, Statsig
12/17/2025
Permalink ›

GitHub AI Integration

We’ve launched a new GitHub AI Integration that connects Statsig directly to your codebase. This is a foundational capability that powers a growing set of AI features across the Statsig console.

[Screenshot: GitHub AI Integration]

Why this matters

Once GitHub is connected, Statsig understands where flags, experiments, and metrics live in your code and builds a Knowledge Graph that maps the relationships between code and Statsig entities. In short, Statsig becomes "code-aware." This unlocks workflows that weren’t possible before, tying product insights directly back to the code that shipped them:

  1. Contextual Descriptions: AI-generated descriptions that understand code context behind each metric, flag, and experiment. Quickly understand what each entity in Statsig does and why it exists.

  2. Stale Gate Cleanup: One-click workflow to generate PRs that remove stale feature flags, directly from the Statsig console.

  3. Metric Explorer Co-pilot (Coming Soon): Describe what you want to analyze in plain English and Statsig creates a chart with the right events, metrics, and breakdowns.
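To make the Knowledge Graph concrete, here is an illustrative TypeScript sketch of the code-to-entity mapping. This is our guess at a minimal shape, not Statsig's actual schema; all names are hypothetical.

```typescript
// Illustrative shape of the code-to-entity Knowledge Graph (hypothetical).
type EntityKind = "gate" | "experiment" | "metric" | "event";

interface CodeReference {
  repo: string;    // e.g. "acme/web-app"
  path: string;    // file where the entity is referenced
  line: number;
  snippet: string; // the referencing code, e.g. a checkGate call
}

interface KnowledgeGraphEdge {
  entity: { kind: EntityKind; name: string };
  references: CodeReference[]; // every place this entity appears in code
}

// A hypothetical edge linking a feature gate to its call site:
const edge: KnowledgeGraphEdge = {
  entity: { kind: "gate", name: "new_checkout_flow" },
  references: [
    { repo: "acme/web-app", path: "src/checkout.ts", line: 42,
      snippet: 'checkGate("new_checkout_flow")' },
  ],
};
```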

Getting started

To use this integration, navigate to Settings → Integrations → GitHub App in your Statsig console. Authorize with your GitHub credentials and install the app on your desired repos. You may need your GitHub org admin to install the Statsig app before you can use the above features. Read more here: https://docs.statsig.com/integrations/github-ai-integration

Shubham Singhal
Product Manager, Statsig
12/17/2025
Permalink ›

AI Stale Gate Cleanup

Until now, Statsig only detected feature gates that were no longer active and marked them "stale." With the new GitHub AI Integration, you can now generate a pull request that removes the dead code, in one click, directly from the Statsig UI.

Why This Is Valuable

Cleaning up dead flags is usually painful and gets deprioritized. This turns it into a one-click workflow:

  • Click “remove from code”

  • Review the generated PR

  • Approve and merge

Teams reduce flag debt without risky manual cleanup.
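For illustration, here is the kind of change such a PR makes, sketched in TypeScript against the @statsig/js-client SDK (the surrounding function names are hypothetical app code, not part of the SDK):

```typescript
import { StatsigClient } from "@statsig/js-client";

declare function renderNewCheckout(): string;
declare function renderLegacyCheckout(): string;

// Before: a fully rolled-out gate still guards the code path.
function renderCheckoutBefore(client: StatsigClient): string {
  if (client.checkGate("new_checkout_flow")) {
    return renderNewCheckout();
  }
  return renderLegacyCheckout(); // dead branch once the gate is stale
}

// After merging the generated PR: the gate check and dead branch are gone.
function renderCheckoutAfter(): string {
  return renderNewCheckout();
}
```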

Getting Started

Connect Statsig to your GitHub org account to enable AI-powered stale gate code removal.

Akin Olugbade
Product Manager, Statsig
12/16/2025
Permalink ›

🧩 Segment Filters on Dashboards

We’ve expanded Global Dashboard Filters so you can filter a dashboard using ID List-based Segments. This is an additional filter option; existing global filtering (property filters and other criteria) continues to work the same way.

What You Can Do Now

  • Apply a Segment filter at the dashboard level where the segment is defined by an ID list

  • Combine an ID List Segment filter with your existing global property filters

  • Keep every chart on the dashboard scoped to the same audience without reapplying filters chart-by-chart

How It Works

  • Create or select an ID List-based Segment (a segment defined by a fixed list of IDs, like user IDs, account IDs, or device IDs)

  • In your dashboard’s Global Dashboard Filters, choose that segment as a filter

  • The segment filter applies to all charts on the dashboard, alongside any other global filters you’ve set

Example: set the global filter to the segment “Enterprise accounts (ID list)” to ensure every chart reflects only those accounts.
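The underlying semantics are simple set membership: every chart on the dashboard only sees rows whose unit ID appears in the list. A minimal TypeScript illustration (the names are hypothetical, not Statsig's API):

```typescript
interface MetricEvent {
  accountId: string;
  name: string;
}

// The ID list backing the segment, e.g. "Enterprise accounts (ID list)".
const enterpriseAccounts = new Set(["acct_001", "acct_002", "acct_003"]);

// Applied globally, so every chart is scoped to the same audience.
function applySegmentFilter(events: MetricEvent[]): MetricEvent[] {
  return events.filter((e) => enterpriseAccounts.has(e.accountId));
}
```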

Impact on Your Analysis

  • Use dashboards to answer questions about a specific, known set of users or accounts (for example, a customer list, beta cohort, or internal test group)

  • Reduce chart-to-chart inconsistencies caused by manually recreating the same ID-based audience filter

  • Iterate faster when you need to swap the audience across the entire dashboard (for example, compare two different customer lists)
