Statsig Product Updates
12/10/2024

Vineeth Madhusudanan

Product Manager, Statsig

Storytelling with Experiments

Good experimenters who bring others along are often good storytellers. We've heard from many of our customers that they want to give experiments a narrative arc - setting up context, diving into the conflict, and offering a resolution - as a narrative layer around our out-of-the-box Scorecards.

Today we are adding the ability to include live, interactive widgets in Experiment Summary that allow experimenters to craft this narrative for their audience.

A few examples of how customers are using this:

  • Embedding the results of a Custom Explore Query to add context from relevant deep dives (e.g. analyzing the experiment time period while removing outliers like Black Friday, or specific days that had data blips)

  • Adding rich charts such as the conversion funnel being experimented on to contextualize experiment Scorecards

  • Breaking out metrics to match the mental model for good AI experimenters, for example:

    • Direct model measurement: latency, tokens produced, cost

    • Direct user feedback: explicit (thumbs up/thumbs down), implicit (dwell time, regeneration requested)

    • Longer-term effects: user activity level over the next week, retention, subscriptions

The original experiment Scorecard is still available as part of Experiment Summary.

Our eventual goal is to have all the context around an experiment - including experiment design, critique, Q&A, and readouts - centralized in one place. This is the first step toward that.

This feature is rolling out gradually. To embed rich charts in your Experiment Summary, go to the "Summary" tab in your experiment, tap into "Experiment Notes" and select the "+ Add Charts" CTA on the right. Happy storytelling!


12/10/2024

Tim Chan

Lead Data Scientist, Statsig

CUPED for Ratio Metrics

We have enhanced CUPED for ratio metrics by jointly adjusting both group means and variances. Previously, only the variances were adjusted; with this update, group means are also adjusted using the deducted values, ensuring more accurate results. This improvement reduces the false discovery rate, making CUPED even more reliable.

For more details, head to our Docs!
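
If the mechanics are unfamiliar, here is a minimal sketch of the classic CUPED mean adjustment using a pre-experiment covariate. It illustrates the general idea of "deducted" values shrinking variance while preserving the mean on synthetic data; it is not Statsig's exact ratio-metric implementation.

```python
import numpy as np

def cuped_adjust(y, x):
    """Classic CUPED adjustment: subtract the covariate-explained component
    from each observation, shrinking variance while preserving the mean.

    y: in-experiment metric values, one per unit
    x: pre-experiment covariate for the same units
    """
    theta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)  # regression coefficient of y on x
    return y - theta * (x - x.mean())                # the "deducted" values

# Synthetic data where the pre-experiment covariate explains most of the noise in y.
rng = np.random.default_rng(0)
x = rng.normal(100, 10, size=10_000)            # pre-experiment behavior
y = 0.8 * x + rng.normal(0, 5, size=10_000)     # in-experiment metric

y_adj = cuped_adjust(y, x)
print(np.var(y), np.var(y_adj))   # adjusted variance is much smaller
print(y.mean(), y_adj.mean())     # the mean is preserved up to sampling noise
```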

12/9/2024

Brock Lumbard

Product Manager, Statsig

📡 IPv6 Support

Starting today (Dec 9th, 2024), Statsig will begin supporting auto-resolution of metadata from IPv6 addresses on our client SDKs. Statsig has historically provided and used our own package (IP3Country) to resolve IP addresses to country codes; as IPv6 traffic continues to grow, we've decided to stop relying on it. Going forward, we'll leverage our load balancer's country resolution, which provides more accurate IPv4 support and full IPv6 support.

Visit our docs on the transition for more info, or reach out to us in Slack!


11/30/2024

Akin Olugbade

Product Manager, Statsig

Horizontal Bar Charts for Grouped Data

Your data tells clearer stories when you can see how different groups stack up. We've added horizontal bar charts in Metrics Explorer to make these comparisons easy and intuitive.

What You Can Do Now

  • Compare metrics across any business dimension (time periods, segments, categories)

  • Track usage patterns by user type, location, or platform

  • Spot trends in any grouped data, from engagement to transactions

How It Works

Apply a Group By to your data, and select the horizontal bar chart option. The chart automatically adjusts to show your groups clearly.

Impact on Your Analysis

This visualization makes it simple to:

  • Identify your top and bottom performers instantly

  • Handle longer label names easily

  • Share clear comparisons in your reports

Start turning your grouped data into visual insights today.


11/30/2024

Vineeth Madhusudanan

Product Manager, Statsig

Warehouse Native Experimentation on Athena

Athena is now a supported data warehouse for Warehouse Native Experimentation! We've unlocked the same capabilities available on Snowflake, BigQuery, Redshift, and Databricks for Athena users too.

You can reuse existing events and metrics from Athena in experimental analysis. You can also use typical Statsig features - including Incremental Reloads (to manage costs), Power Analysis using historical data, and even Entity Properties to join categorical information about users and use it in analysis across experiments!


11/26/2024

Vineeth Madhusudanan

Product Manager, Statsig

Autoscaling on Snowflake (Warehouse Native)

You can now connect multiple Snowflake warehouses to your account, enabling better query performance by automatically distributing query jobs across all available warehouses. To set it up, you can head over to Settings > Project > Data Connection, and select Set up additional Warehouses.

When you schedule multiple experiments to be loaded at the same time, Statsig will distribute these queries across the provided warehouses to reduce contention. Spreading queries across compute clusters can often be faster and cheaper(!) when contention would otherwise cause queries to back up.
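
To make the idea concrete, here is a purely conceptual sketch of spreading queued reload jobs across connected warehouses round-robin. The warehouse and experiment names are made up, and this illustrates the concept rather than Statsig's actual scheduler.

```python
from itertools import cycle

# Hypothetical warehouses connected under Settings > Project > Data Connection.
warehouses = ["STATSIG_WH_1", "STATSIG_WH_2", "STATSIG_WH_3"]

# Hypothetical experiment reload jobs scheduled for the same window.
jobs = ["exp_checkout_flow", "exp_new_onboarding", "exp_search_ranking",
        "exp_pricing_page", "exp_push_copy"]

# Round-robin assignment spreads the load so no single cluster backs up.
assignment = dict(zip(jobs, cycle(warehouses)))
for job, warehouse in assignment.items():
    print(f"{job} -> {warehouse}")
```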

We have a beta of intelligent Autoscaling in the works. Reach out in Slack if you'd like to try it!


11/26/2024

Brock Lumbard

Product Manager, Statsig

🅰️ Angular Support

Statsig's JavaScript SDK now has out-of-the-box support for Angular with the release of our Angular bindings. While we've long helped customers set up Angular in our Slack Community, this release includes bindings and suggested patterns for both App Config and App Module integrations. Along with some Angular-specific features like directives, this supports all of the bells and whistles you expect from Statsig's SDKs: feature flags, experiments, event logging, and more. Try it out, and let us know if you find any wrinkles as we roll out support. Get started with our Angular docs or simply run: npm install @statsig/angular-bindings


11/26/2024

Vineeth Madhusudanan

Product Manager, Statsig

Experiment Compute Summary

Following up on the Statsig project-level compute summary, we've also added an experiment-level compute summary, available in Experiment Diagnostics. Out of the box, it lets you look at compute utilization by job type or metric source. This is helpful for isolating situations where a low-value metric accounts for a disproportionate share of compute utilization. When you find this, take a look at our guide to optimize costs.


11/20/2024

Akin Olugbade

Product Manager, Statsig

👁️ Updated Single Value Views in Metric Drilldown and Dashboards

Use Case

When you need a quick, at-a-glance summary of a key metric, having a single, prominent value can provide immediate insight. Whether you’re monitoring yesterday’s user sign-ups or the total revenue over the past month, a headline figure helps you stay informed without diving into detailed charts.

Why It’s Important

Single Value views allow you to focus on the most critical data points instantly. This feature is especially useful on dashboards, where quick visibility into key metrics supports faster decision-making and keeps your team aligned on important performance indicators.

The Feature: What It Does

You can now directly select Single Value as a widget type when adding items to your Dashboards, making it easier to showcase key metrics prominently without additional configuration.

In addition, within Metric Drilldown, you can choose the Single Value view to display your metric as a headline figure. This feature offers:

  • Latest Full Data Point: View the most recent complete data point (e.g., yesterday’s total sales or user activities).

  • Overall Value for Time Range: See the cumulative or average value over the entire selected time range, providing a broader perspective on your metric.

  • Comparison Options: Select a comparison period to see absolute and percentage changes over time, helping you understand trends and growth.

By incorporating Single Value views into your dashboards and analyses, you can highlight essential metrics at a glance, enabling you and your team to stay updated with minimal effort.

Single Value Widgets

11/13/2024

Brock Lumbard

Product Manager, Statsig

🧐 SDK Observability Integrations

As Statsig comes to power your product's features, experiments, metrics and more, observing our SDK's performance can become increasingly important. Starting with our Python SDK, we've built an interface for you to consume your SDK's performance statistics and ingest them into your platform of choice (like Datadog).

Initial support will include the metrics:

  • statsig.sdk.initialization - which tracks SDK initialization duration

  • statsig.sdk.config_propagation_diff - which measures the difference between a config's update time and when it reaches your SDK, and

  • statsig.sdk.config_no_update - which tracks occurrences where there are no config updates.

If you'd prefer not to set up the integration, similar data is available in the Statsig console alongside your SDK Keys & Secrets. If you'd like to see support for this in another SDK, let us know in Slack!
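
As a rough sketch, an integration might look like the snippet below, which forwards SDK metrics to Datadog via DogStatsD. The class and method names here are assumptions about the interface shape rather than the SDK's actual API - check our Python SDK docs for the exact contract; only the datadog.statsd calls are real.

```python
from typing import Optional

from datadog import statsd  # DogStatsD client from the datadog package


class DatadogObservabilityClient:
    """Hypothetical hook shape: the SDK calls back with a metric name, value, and tags."""

    def init(self) -> None:
        # Set up any client state before the SDK starts emitting metrics.
        pass

    def increment(self, metric_name: str, value: int = 1,
                  tags: Optional[dict] = None) -> None:
        # e.g. statsig.sdk.config_no_update occurrences
        statsd.increment(metric_name, value, tags=_format_tags(tags))

    def gauge(self, metric_name: str, value: float,
              tags: Optional[dict] = None) -> None:
        statsd.gauge(metric_name, value, tags=_format_tags(tags))

    def distribution(self, metric_name: str, value: float,
                     tags: Optional[dict] = None) -> None:
        # e.g. statsig.sdk.initialization durations or config_propagation_diff
        statsd.distribution(metric_name, value, tags=_format_tags(tags))


def _format_tags(tags: Optional[dict]) -> list:
    # Datadog expects tags as "key:value" strings.
    return [f"{k}:{v}" for k, v in (tags or {}).items()]
```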

