Faster Pulse, Environments in Overrides, Experiment Duration by Exposures

Statsig Product Updates
4/28/2023

Happy Friday, Statsig Community! To cap off a beautiful week here in Seattle ☀️, we have a number of exciting launch updates to share:

🕒 Fast(er) Pulse

Until now, when you launched a new feature roll-out or experiment, you had to wait 24 hours to start seeing your Pulse results. Today, we’re very excited to shorten that time significantly with the launch of near-real-time Pulse. Now, you will see Pulse results start to flow in within 10-15 minutes of starting your roll-out or experiment.


A few things to consider:

  • For the first 24 hours, results do not include confidence intervals; these early metric lifts are meant to help you confirm that things look roughly as expected and verify the configuration of your gate/experiment, NOT to make launch decisions

  • The Pulse hovercard view will look a bit different; time-series and top-line impact estimates will not be available until the first 24-hour daily lift calculation

☁️ Environments in Overrides

At some companies, the same user may have a different ID in different environments, so you may want to specify which environment an override for a given ID applies to. To enable this, we’ve added the ability to specify a target environment for Overrides in Experiments. For Gates, you can achieve the same thing by creating an environment-specific rule.

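To give a sense of how the environment is determined: the tier you pass when initializing the SDK is what an environment-scoped override (or an environment-specific gate rule) is matched against. Here’s a rough sketch assuming the statsig-node server SDK; the experiment name, user ID, and parameter name below are placeholders:

```typescript
// Rough sketch (statsig-node server SDK assumed); names are placeholders.
import Statsig from 'statsig-node';

async function main() {
  // Checks made by this process are evaluated in the "staging" environment,
  // so an override scoped to staging applies here and nowhere else.
  await Statsig.initialize('server-secret-key', {
    environment: { tier: 'staging' },
  });

  // If "user-123" is overridden into a specific group for staging, this
  // evaluation reflects that; the same call from a production-tier process
  // falls through to the experiment's normal allocation rules.
  const experiment = await Statsig.getExperiment({ userID: 'user-123' }, 'my_experiment');
  console.log('parameter value:', experiment.get('enable_new_flow', false));

  Statsig.shutdown();
}

main();
```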

⌛ Experiment Duration by Target # of Exposures

(vs. Strictly Time Duration)

We’re introducing more flexibility into how you measure and track experiment duration. Now, you can choose between setting a target # of days or a target # of exposures that an experiment needs to hit before a decision can be made.


To configure a target # of exposures, tap “Advanced Settings” in the Experiment Setup tab, then under “Experiment Measured In” select “Exposures” (vs. “Days”). The progress tracker at the top of your experiment will then show progress against your target number of exposures.

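To connect this to the SDK side: exposures are logged as users are evaluated against the experiment, so the progress tracker fills up as real traffic hits the check. A minimal sketch, again assuming the statsig-node server SDK (initialized elsewhere) and placeholder experiment/parameter names:

```typescript
// Minimal sketch (statsig-node server SDK assumed, initialized elsewhere);
// the experiment and parameter names are placeholders.
import Statsig from 'statsig-node';

async function handleRequest(userID: string): Promise<boolean> {
  // Evaluating the user logs an exposure event; these exposures are what
  // count toward the experiment's target-exposure threshold.
  const experiment = await Statsig.getExperiment({ userID }, 'new_checkout_flow');
  return experiment.get('enabled', false);
}
```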

See our docs for more details.

