There’s a risk that you’ll buy the shirt, wait for it to be delivered, and then absolutely hate it and be stuck with a bad shirt.
But what if the shirt company offered free returns? That wouldn’t entirely eliminate the risk of you hating the shirt, but it would eliminate the risk of being stuck with it.
All the more reason to give the shirt a try, eh? Well, the same goes for feature gates:
Product builders have an idea they think will be great for users, but understand that there is a risk when launching a new feature.
Feature gates are rollouts of a feature to either 0% or 100% of users, making it easy for teams to turn a feature off if or when needed. These binary toggles act as a failsafe when something goes wrong, and as a way to make universal changes to an app or website (like turning off a feature that’s causing crashes, or removing sale prices after Black Friday).
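To make the binary nature of a gate concrete, here’s a minimal sketch in Python. The gate names and the in-memory store are invented for illustration; real feature-management tools (Statsig included) fetch gate state from a config service and evaluate gates per user.

```python
# Minimal sketch of binary feature gates, assuming a simple in-memory
# store of gate states (a real system fetches these from a config service).
GATES = {
    "new_checkout_flow": True,    # rolled out to 100% of users
    "holiday_sale_prices": False, # turned off after Black Friday
}

def check_gate(gate_name: str) -> bool:
    """Return whether a gate is on; unknown gates default to off."""
    return GATES.get(gate_name, False)

if check_gate("new_checkout_flow"):
    print("show new checkout")
else:
    print("show old checkout")
```

Flipping one boolean in the store is all it takes to pull the feature back, which is exactly what makes gates a cheap failsafe.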
In getting to know our customers, I’ve noticed a trend among builders (especially engineers): feature management is crucial to how new features are shipped and monitored.
Ideally, every feature should be behind a feature gate and feature launches should be a partial rollout.
A partial rollout is the practice of showing a feature to a percentage of users that is between 0% and 100%. This allows product teams to see the impact a feature has on a sample of users without affecting ALL users.
This also helps to mitigate risk and confirm or reject builders’ hypotheses on the impact.
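One common way to implement a partial rollout is deterministic hash-based bucketing, so each user gets a stable yes/no answer and raising the percentage only ever adds users. This is an illustrative sketch, not Statsig’s actual bucketing algorithm; the function name and salting scheme are assumptions.

```python
import hashlib

def in_rollout(user_id: str, gate_name: str, rollout_pct: float) -> bool:
    """Deterministically bucket a user into [0, 100) by hashing the
    user ID salted with the gate name. The same user always lands in
    the same bucket, so increasing rollout_pct never removes anyone
    who already had the feature."""
    digest = hashlib.sha256(f"{gate_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100  # 0.00 .. 99.99
    return bucket < rollout_pct

# A 10% rollout shows the feature to roughly 1 in 10 users:
exposed = sum(in_rollout(f"user-{i}", "new_checkout_flow", 10)
              for i in range(10_000))
```

Salting the hash with the gate name keeps rollouts independent: a user in the 2% for one feature isn’t automatically in the 2% for every other feature.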
⏪ Rewind to the shirt analogy: Imagine buying the shirt, and then wearing it for some pre-work coffee with a friend. They tell you the shirt is a hideous monstrosity and should be immolated on the spot. “Whoever sold you that shirt deserves life in prison,” they say.
Yikes. While changing in your car, you realize there is a silver lining: You performed a partial rollout pertaining to your torso regalia. Your candid friend was the sample population, and your attire failed the experience test. Good thing you didn’t show the shirt to more users. Into the incinerator it goes…
This exact same thing can happen with web and mobile apps too: If teams partially roll out a feature and the app starts crashing, they can turn off that gate immediately, having only ruined the sample population’s day instead of every user’s.
And when I say ruined their day, I’m not exaggerating.
A partial rollout allows Statsig to measure the delta in metrics between users with and without the feature, effectively generating an A/B test that produces Pulse results. Statsig builders are shown reports based on the partial rollout: red if a target metric was negatively impacted, green if positively impacted.
From those results, builders can make decisions on continuing to roll out the feature to more users, or simply killing it. Teams can use their own experience as well as the Statsig data to make better hypotheses and decisions.
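To show what “measuring the delta” looks like at its simplest, here’s a toy lift computation. The metric values below are made up, and real Pulse results layer confidence intervals and significance testing on top of a comparison like this.

```python
# Hedged sketch: the raw relative delta between a metric for users
# with the feature (test) and without it (control). Real A/B analysis
# adds statistical rigor; this only shows the core comparison.
def lift(test_values, control_values):
    test_mean = sum(test_values) / len(test_values)
    control_mean = sum(control_values) / len(control_values)
    return (test_mean - control_mean) / control_mean  # relative delta

# e.g. crash-free session rate per user in each group (made-up numbers):
print(f"{lift([0.97, 0.95, 0.99], [0.90, 0.92, 0.88]):+.1%}")
```

A positive delta on a target metric is what turns the report green; a negative one turns it red.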
Furthermore, Statsig suggests a 2/10/50/100 partial rollout cadence: Roll a feature out to 2% of all users, then 10%, then 50%, then 100%, measuring the impact on metrics each time.
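That cadence can be sketched as a simple ramp loop. Here `metrics_look_healthy` is a hypothetical stand-in for whatever health check a team runs between stages; it is not a real Statsig API.

```python
# Sketch of the suggested 2/10/50/100 cadence. Each stage widens the
# rollout, and the team checks metric impact before moving on.
# `metrics_look_healthy` is a hypothetical placeholder for that check
# (Pulse results, dashboards, alerts -- whatever your team uses).
STAGES = [2, 10, 50, 100]

def ramp(metrics_look_healthy):
    for pct in STAGES:
        # ...roll out to `pct` of users and gather data here...
        if not metrics_look_healthy(pct):
            return 0  # kill the feature: gate back to 0% of users
    return 100  # healthy at every stage: fully rolled out
```

For example, `ramp(lambda pct: pct < 50)` would pass the 2% and 10% stages, fail the check at 50%, and return the gate to 0%.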
P.S. I hope the shirt looks great! 😛