Customers are often eager to leave their legacy platform behind and make the move over to Statsig. This can, however, feel like a daunting task with a lot of uncertainty. At Statsig, customer success is paramount and we aim to ensure that the migration process is well-understood.
In this section, we’ll help build the mental model of what a ‘platform migration’ means and define some of the activities therein.
What a migration is not:
A purely technical exercise
An automatable process that requires little planning and minimal human oversight (any vendor promising this has little experience or is lying 😅)
A lift-and-shift of everything in your old platform over to your new platform. This typically isn’t necessary.
A doomsday event that necessarily requires a hard “cut-off date” of the old platform (although this can simplify the process).
What a migration actually is:
An exercise of change management providing an opportunity to:
cleanse tech debt
define clear ownership
promote democratization of testing
educate teams to accelerate adoption
A process of taking inventory of the existing metrics and data sources available for measuring your tests and new features
A decision-making process to determine what entities (if any) need to be ported to your new platform. See the section “What actually gets moved over” below for more.
An engineering project to technically enable product owners and engineers to use the platform for various apps, and integrate necessary data sources
The decision-making process doesn’t need to be overcomplicated! Here’s how our customers typically determine what needs to be migrated.
Experiments are short-lived, so the idea of migrating them doesn't necessarily make sense. Any historical experiments and their results should be captured/documented internally for later reference. Any new experiments should be created in Statsig.
Feature-flag migration involves taking inventory of what is in place and determining which flags need to persist, or if it's best to simply scrub unneeded flags from the codebase to reduce tech debt and start fresh with Statsig gates.
There are generally two categories of flags customers have in their codebase:
Temporary flags that were used for a rollout but are no longer needed. These can simply be scrubbed from the codebase when switching to Statsig.
Permanent flags that serve as kill switches or deliver targeted functionality based on user entitlement. These flags should be migrated to Statsig gates.
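The inventory step above can be sketched as a simple classification pass. The `Flag` shape and its fields here are hypothetical stand-ins for whatever your flag export actually contains:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    name: str
    is_permanent: bool  # kill switch / entitlement flag vs. finished rollout

def plan_migration(flags):
    """Split a flag inventory into gates to recreate in Statsig
    and stale flags to scrub from the codebase."""
    migrate, scrub = [], []
    for f in flags:
        # Permanent flags (kill switches, entitlements) become Statsig gates;
        # temporary rollout flags are candidates for deletion.
        (migrate if f.is_permanent else scrub).append(f.name)
    return migrate, scrub

inventory = [
    Flag("enable_new_checkout", is_permanent=False),
    Flag("kill_switch_payments", is_permanent=True),
]
migrate, scrub = plan_migration(inventory)
# migrate == ["kill_switch_payments"], scrub == ["enable_new_checkout"]
```

In practice the "is this permanent?" signal usually comes from flag age, last-evaluated timestamps, and a conversation with the owning team, not a single boolean.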
Questions to ask on this topic include:
Is there some sort of wrapper or abstraction sitting on top of your existing flagging solution? If so, your life is easier: there are fewer reference points to your old tools, and your engineers have a single, ‘centralized’ place to swap in Statsig.
What is the volume of flags? Would it necessitate some automation, or is it manageable on an ad-hoc basis? If the volume is large, this is where we can script some automation and leverage Statsig’s Console API to create gates.
Coming over from LaunchDarkly? We have a migration tool for automating the copy of your flags to Statsig!
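As a minimal sketch of scripting gate creation against the Console API: the endpoint and payload below reflect Statsig’s documented create-gate request, but treat the exact URL, header, and field names as assumptions and verify them against the current Console API reference before running anything for real.

```python
import json
# `requests` (or any HTTP client) would perform the actual call; the network
# step is commented out so this sketch stays runnable without credentials.

CONSOLE_API_GATES = "https://statsigapi.net/console/v1/gates"  # verify in docs

def gate_payload(flag_name, description=""):
    """Build the JSON body for creating a feature gate via the Console API.
    Field names mirror the documented request; double-check before use."""
    return {"name": flag_name, "description": description}

def create_gates(flag_names, api_key):
    for flag in flag_names:
        body = gate_payload(flag, description=f"Migrated from legacy flag {flag}")
        print(json.dumps(body))
        # requests.post(CONSOLE_API_GATES, json=body,
        #               headers={"STATSIG-API-KEY": api_key})

create_gates(["kill_switch_payments"], api_key="console-xxxx")
```

Keeping payload construction separate from the HTTP call makes the script easy to dry-run against your full flag inventory before touching the API.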
Metric Definitions and event tracking should be ported to Statsig. Statsig can readily support any existing data and analytics systems you may be using via our integrations with Data Warehouses, CDPs, analytics providers, and via our SDK & http APIs.
How are you currently measuring success signals for your tests and features? Which metrics (quantitative measure of user behavior) and events are needed immediately? (Think core metrics and upcoming testing roadmap).
Do you have vendor-specific SDK tracking calls throughout your app code? All Statsig SDKs support direct event logging.
Are you using a data collection or product analytics service to collect your events and conduct analysis? Statsig supports ingest integrations with the major analytics platforms, CDPs and ETL tools.
Do you have your events and computed metrics in your warehouse that you’d like to integrate into Statsig to measure your tests and gates? Statsig has ingest integrations with the major data warehouse providers to ingest raw data, and also supports ingesting your custom pre-computed metrics directly from the warehouse (docs).
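One common answer to the vendor-specific-tracking-calls question above is to route all event logging through a thin internal wrapper, so swapping the analytics backend touches one file instead of every call site. The `Analytics` interface below is hypothetical, and the commented line only sketches what a Statsig server SDK call could look like (check the SDK docs for the exact signature):

```python
class Analytics:
    """Thin internal wrapper: app code calls track(); only this class
    knows which vendor SDK sits underneath."""
    def __init__(self, backend):
        self._backend = backend  # any callable(user_id, event, metadata)

    def track(self, user_id, event, metadata=None):
        self._backend(user_id, event, metadata or {})

sent = []
def statsig_backend(user_id, event, metadata):
    # With the Statsig Python server SDK this would be roughly:
    #   statsig.log_event(StatsigEvent(StatsigUser(user_id), event, metadata=metadata))
    # (the signature is an assumption -- verify against the SDK docs).
    # Recorded locally here so the sketch runs without a network call:
    sent.append((user_id, event, metadata))

analytics = Analytics(statsig_backend)
analytics.track("user-123", "checkout_completed", {"cart_value": 42})
```

With this pattern, cutting over from a legacy analytics vendor to Statsig (or dual-writing to both during the transition) is a one-line change to the backend wiring.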
✔️ Determine whether there is a hard cutoff requirement for the incumbent platform. Some teams may be more dependent on it and will need more time to ramp off. Coordinate the switch-off/switch-on plan across testing teams.
✔️ Determine a suitable Statsig org/project structure based on needs to partition your test efforts by use-case or business unit. Typically, projects represent a shared set of testing objectives/surfaces/metrics. There is no one-size-fits-all solution for this and we’re happy to workshop org design with you.
✔️ Determine and port the necessary entities to Statsig based on the principles outlined in “What actually gets moved over” above.
✔️ Determine and document the typical targeting groups to whom you ship tests/features and map these to Statsig segments.
✔️ Determine how best to use your project management software to empower testing teams to collaborate with engineering teams (what is the ideal workflow for socializing test specs?)
✔️ Turn off your legacy platform (eventually) … happy testing in Statsig 🧪 📈 💸
Did I miss something? Let me know and I’ll incorporate it here. 👏🏼
Statsig's experts are on standby to answer any questions about experimentation at your organization.
💡 Also, reach out to our Enterprise Engineering team to learn more about how we’ve successfully migrated some of our largest customers and set them up for success.