Track users across the point where they go from anonymous to identified. Funnels can now connect actions taken before login with those that happen after, giving you a more complete view of conversion paths that span identity states.
Cross the anonymous-to-identified boundary in funnels
Connect pre-login behavior (e.g. browsing, adding to cart) with post-login events (e.g. checkout, onboarding completion)
Toggle ID resolution per funnel
Click the gear icon when editing a funnel to enable or disable ID resolution for that analysis
Configure identifiers
In Settings → Analytics & Session Replay, choose which identifiers represent anonymous vs. identified users. Defaults are Stable ID and User ID
When enabled, ID resolution stitches together events across anonymous and identified IDs if they’re seen on the same device. This turns fragmented journeys into a single user flow, even when a user logs in midway.
Example:
User views a product (Stable ID)
Signs up (User ID)
Completes checkout
With ID resolution on, these events are treated as a single funnel path.
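For reference, here is a minimal sketch of how such a journey might be logged from the web SDK, assuming the @statsig/js-client package; the SDK key, event names, and user ID are placeholders. Before login, events are tied to the device's auto-generated Stable ID; after the user is identified, later events carry the User ID, and ID resolution stitches the two together.

```typescript
import { StatsigClient } from '@statsig/js-client';

// Illustrative only: the client key, event names, and user ID are placeholders.
const client = new StatsigClient('client-YOUR_SDK_KEY', {});
await client.initializeAsync();

// Pre-login: events are associated with the auto-generated Stable ID.
client.logEvent('product_viewed', 'sku_123');

// After sign-up, attach the known User ID.
await client.updateUserAsync({ userID: 'user_456' });

// Post-login: events carry the User ID; ID resolution links them back to the
// Stable ID events seen earlier on the same device.
client.logEvent('checkout_completed');
```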
Funnels that previously showed drop-off at login steps may now show full completion. You’ll see higher true conversion rates, more accurate attribution, and better insight into how anonymous traffic behaves before converting.
You can now create a simple sample app to try out Statsig with - we've partnered with SampleApp.ai to let you easily create one from a single prompt. This is handy if you're a marketer or other non-technical user who'd still like to see what Statsig looks like once it's integrated into an app.
On statsig.sampleapp.ai, just enter a prompt or pick one of the samples, and we'll create a simple website for you to play with feature gates, analytics, and more.
We're excited to announce the ability to add new custom sections and reorder sections in the experiment summary tab for greater customization of your experiment reporting.
These capabilities will also be available for experiment templates, giving you the ability to preconfigure summary sections to standardize formatting across your organization. These changes make the summary section a great place to store experiment metadata like product research docs, design links, or details on rollout plans.
You can now control exactly who gets recorded in Session Replay using new global and conditional targeting options. This gives you fine-grained control over session capture so you can focus on users who’ve opted in, track behavior behind feature gates, or limit recordings to specific actions or test groups.
Set a Global Targeting Gate
Define a global gate that determines which users are eligible for session recording. Only users who pass this gate can be recorded. This is useful for:
Recording only users who’ve opted in
Limiting capture to internal users
Scoping recordings to users who meet complex targeting conditions
Set a Global Sampling Rate
Define a global sample rate that determines what percent of sessions will be recorded by default.
This is useful if you want to record some percentage of all user sessions.
Conditional triggers are not affected by the global sample rate, only by the global targeting gate.
Add Conditional Triggers with Custom Sampling Rates
You can define multiple recording triggers, each with its own sampling rate:
Event-based triggers: Start recording when a user triggers a specific event. Filtering on the event’s "Value" property is supported today, with more flexible event property filtering coming soon. This is great for focusing recordings on specific product scenarios.
Experiment-based triggers: Record users exposed to an experiment. You can narrow this to a specific variant to compare behavior across groups.
Feature gate–based triggers: Record users who pass a gate. Helpful for understanding how people interact with newly released features.
You can configure a Global Targeting Gate in your Session Replay settings. If set, only users who pass this gate will be considered for any recording.
Conditional triggers sit on top of this and define when recording should begin. For example, you might record 100% of users who trigger a critical event, 10% of users in a specific experiment variant, and 0% of users who don’t pass the global gate.
These controls let you capture the sessions that matter most while reducing noise. You can zero in on specific behaviors, test results, or user groups, stay compliant with data collection policies, and get more value out of your allotted replay quota by avoiding unnecessary recordings.
Focus your recordings where they count.
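All of the targeting gates, sampling rates, and triggers above are configured in the Statsig console; no trigger logic lives in your code. For context, a minimal client-side setup might look like the sketch below, assuming the @statsig/js-client and @statsig/session-replay packages (the exact initialization can vary by SDK version, and the key and user are placeholders).

```typescript
import { StatsigClient } from '@statsig/js-client';
import { runStatsigSessionReplay } from '@statsig/session-replay';

// Illustrative setup: the SDK key and user are placeholders.
const client = new StatsigClient('client-YOUR_SDK_KEY', { userID: 'user_456' });

// Attach session replay before initializing. Which sessions actually get
// recorded is decided by the targeting gate, global sample rate, and
// conditional triggers configured in the console.
runStatsigSessionReplay(client);

await client.initializeAsync();
```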
You can now treat experiment exposure events like any other event in Drilldown (time-series) and Funnel analyses. Exposure events include properties such as group, as well as user properties logged with the exposure. We currently only show first exposures to the experiment.
Pick exposure events in Drilldown charts to track how many users saw each variant over time.
Add exposure as the first step of a funnel to measure post-exposure conversion paths.
Group or filter by exposure properties, for example, break down results by variant, region, or device.
Overlay exposure counts with key metrics in Drilldown to check whether metric changes align with rollout timing.
Exposure logging
The first time a user is bucketed into an experiment, an exposure event is recorded with contextual properties (see the sketch after this list).
Event selection
In both Drilldown and Funnel charts, exposure events appear in the same event picker you already use.
Property handling
Any custom fields travel with exposures, enabling the same group-by and filter controls available for other events.
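To make the exposure-logging step concrete, here is a minimal sketch assuming the @statsig/js-client SDK; the key, user, experiment, and parameter names are made up. Evaluating the experiment is what records the exposure event that Drilldown and Funnel charts can now plot.

```typescript
import { StatsigClient } from '@statsig/js-client';

// Illustrative only: key, user, experiment, and parameter names are placeholders.
const client = new StatsigClient('client-YOUR_SDK_KEY', { userID: 'user_456' });
await client.initializeAsync();

// The first time this user is bucketed, an exposure event is logged with
// contextual properties (e.g. the assigned group) alongside user properties.
const experiment = client.getExperiment('checkout_redesign');
const buttonColor = experiment.get('button_color', 'blue');
```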
Drilldown
Validate rollout health by confirming traffic splits and ramp curves over calendar time.
Catch logging issues early—spikes, gaps, or duplicates stand out immediately.
Align timing with metrics by viewing exposure and conversion lines on one chart.
Funnels
Measure post-exposure journeys starting the moment users see a variant.
Pinpoint variant-specific drop-offs by breaking down each step.
Ensure clean attribution because exposure proves the user entered the test.
Segment by exposure fields (e.g., region or device) to uncover cohort-level insights.
This feature is now available on Statsig Cloud and coming soon to Warehouse Native. Give it a try the next time you validate an experiment. Seeing exposure data side-by-side with core metrics speeds up debugging and sharpens your reads on variant performance.
Dashboards can now be automatically refreshed on a schedule with results cached for faster loading and a snappier experience.
Set a refresh frequency for each dashboard (e.g. hourly, daily)
Automatically cache results in the background
Open dashboards with results already loaded, no wait time
You can configure a refresh interval in the dashboard settings. To do this:
Navigate to your dashboard and click the settings cog ⚙️.
Scroll to "Schedule Dashboard Refresh" and set the interval.
Click Save.
Once set, queries for that dashboard will run on the specified schedule and store the results. When someone opens the dashboard, they’ll see the most recent data instantly, instead of triggering fresh queries.
Dashboards load faster and stay up to date without manual effort. This is especially helpful for shared dashboards or recurring check-ins, where you want fresh data ready without delay.
Chart Annotations put experiment, gate, and config updates directly on your metric timeline. You see exactly when each change landed in your chart. No more hunting through logs or history.
To get started, open Metrics Explorer or Dashboards and toggle on "Show Annotations". Use the filter bar to pick which event markers you want to display. Your charts update with markers at the precise points of change.
Chart Annotations give instant context for every trend. Try it out today!
Log Explorer lets you diagnose issues quickly alongside your Statsig data. No more juggling tools or context switching.
Metrics point you to a change. Logs reveal the root cause.
Open any log entry to get started. Our point-and-click UI makes it easy for anyone to zero in on things like timestamp, service name, or metadata value. When you need more control, write queries from scratch using our flexible search.
Built-in OpenTelemetry support gets you up and running with minimal effort. No extra instrumentation required.
Try Log Explorer today!
You can now opt in to using Fieller Intervals when calculating % lift confidence intervals. These are a more accurate alternative to the Delta Method.
Because Fieller Intervals are asymmetric, the scorecard display will look slightly different when this option is enabled:
You can set this up in your Experimentation Settings at the Organization Level.
Historical and ongoing experiments will not have their methodology changed midway through if you opt in; only experiments created after you opt in are affected.
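For intuition, the sketch below shows how a Fieller interval for % lift can be computed in its standard textbook form, assuming independent test and control groups with approximately normal means and zero covariance. This is a generic illustration, not Statsig's internal implementation.

```typescript
// Generic sketch of a Fieller confidence interval for % lift, i.e. (meanT / meanC - 1) * 100.
// Assumes independent groups, approximately normal means, and zero covariance;
// not Statsig's internal implementation.
function fiellerLiftCI(
  meanT: number, varT: number, // test mean and the variance of that mean
  meanC: number, varC: number, // control mean and the variance of that mean
  z: number = 1.96             // critical value for a 95% interval
): [number, number] {
  // Solve (meanT - r * meanC)^2 <= z^2 * (varT + r^2 * varC) for the ratio r = meanT / meanC.
  const a = meanC * meanC - z * z * varC;
  if (a <= 0) {
    // The control mean is not significantly different from zero, so the interval is unbounded.
    throw new Error('Fieller interval is undefined when the control mean is indistinguishable from 0');
  }
  const root = z * Math.sqrt(varT * meanC * meanC + varC * meanT * meanT - z * z * varT * varC);
  const lowerRatio = (meanT * meanC - root) / a;
  const upperRatio = (meanT * meanC + root) / a;
  // The resulting % lift interval is asymmetric around the point estimate, unlike the Delta Method.
  return [(lowerRatio - 1) * 100, (upperRatio - 1) * 100];
}
```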
Learn more about Fieller Intervals here!
We've added a way to onboard your codebase to Statsig without lifting a finger: an AI-powered onboarding wizard for your command line. Run the single command npx @statsig/wizard@latest and we'll add a functioning Statsig implementation to your Vite or Next.js app. The wizard makes calls to OpenAI to generate the code changes that implement Statsig. Try it out today in your app, and if you'd like support for another framework, let us know in Slack!
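The code the wizard generates depends on your framework and app structure, but a typical React integration it might produce looks roughly like the sketch below, assuming the @statsig/react-bindings package; the SDK key, user, and gate name are placeholders.

```tsx
import * as React from 'react';
import { StatsigProvider, useGateValue } from '@statsig/react-bindings';

// Illustrative only: the SDK key, user, and gate name are placeholders.
function Checkout() {
  // Reads a feature gate; the SDK logs the exposure automatically.
  const showNewFlow = useGateValue('new_checkout_flow');
  return <div>{showNewFlow ? 'New checkout' : 'Legacy checkout'}</div>;
}

export default function App() {
  return (
    <StatsigProvider sdkKey="client-YOUR_SDK_KEY" user={{ userID: 'user_456' }}>
      <Checkout />
    </StatsigProvider>
  );
}
```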