Statsig does not support sticky results for A/B tests based on IP address. The primary identifiers used for consistency in experiments are the User ID and the Stable ID. The User ID is used for signed-in users, ensuring consistency across different platforms like mobile and desktop. For anonymous users, a Stable ID is generated and stored in local storage.
While the IP address can be included in the user object, it's not used as a primary identifier for experiments. The main reason is that multiple users might share the same IP address (for example, users on the same network), and a single user might have different IP addresses (for example, when they connect from different networks). Therefore, using the IP address for sticky results in A/B tests could lead to inconsistent experiences for users.
If you want to maintain consistency across a user's devices, you might consider using a method to identify the user across these devices, such as a sign-in system, and then use the User ID for your experiments.
For scenarios where users revisit the site multiple times without logging in, there are two potential options:
1. Run the test sequentially over time: serve only the control experience during some periods and only the test experience during others. This is known as Switchback testing. You can learn more about it in this blog post and the technical documentation.
2. Offer the user a visible way to switch between the control and test experiences, so they can return to the behavior they'd expect from another device.
However, if there's a lengthy effect duration, Switchback may not be ideal. If you are able to infer the IP address, you can use this as the user identifier (maybe even as a custom identifier) and randomize on this. But be aware that skew in the number of users per IP address may introduce a significant amount of noise. You may want to exclude certain IP addresses from the experiment to get around this.
The skew comes from IP addresses that represent dozens if not hundreds of users. This can skew some of the stats when we try to infer confidence intervals. For example, instead of a conversion rate of 0/1, or 1/1, this metric looks like 36/85. This overweights both the numerator and denominator for this "user" which can skew the results.
It is technically possible for two different web projects to share the same Statsig API keys. However, it is generally recommended to create separate Statsig projects for distinct websites with their own userIDs and metrics. This approach aids in managing each product independently. If you aim to track success across multiple websites, you may want to manage them in the same project. The decision ultimately depends on your specific use case and goals.
As for the impact on billing, it would depend on your usage. Statsig's pricing is based on Monthly Active Users (MAUs), which are unique users that interact with Statsig in a calendar month, regardless of how many API keys are used. If the same users are interacting with both projects, it would not increase your MAUs. However, if different sets of users are interacting with each project, it could potentially increase your MAUs.
When considering whether to create/use a new Statsig project, it's important to understand when it's appropriate to do so. You can refer to the guidance provided in the Statsig documentation. If you decide to create a new project, remember that the API keys are unique per project.
In conclusion, while it's technically possible to share API keys between projects, it's generally better to have separate keys for each project for easier debugging and management. The impact on billing is based on the number of unique users interacting with Statsig, not the number of API keys used.
Dynamic Config usage does not count towards the 1M free metered events.
When it comes to the propagation of changes in Dynamic Config, it is officially stated as "near real-time". While there is no precise time frame, changes are typically reflected in the services within 30 seconds of updating, based on anecdotal evidence. However, this is not a guaranteed service level agreement (SLA).
For server SDKs such as Node.js, the update happens automatically: calls to getConfig will start returning the new value shortly after the change. For client SDKs, updates do not occur in the middle of a session, to maintain a consistent user experience. If you need to force a refresh, you may have to call Statsig.initialize again.
Please note that the above information is based on expert observations and official documentation, which can be found here.
Statsig does indeed support experimenting with different elements such as email subject lines. You can create an experiment in Statsig and define multiple variants, each with a different subject line. This is essentially an A/B/n test where 'n' represents the number of different subject lines you want to test.
As for the use of multi-armed bandits or other selection algorithms to rotate through a pool of copy, Statsig does support multi-armed bandit tests. However, it's not explicitly stated in the documentation if this can be applied to rotating through a pool of copy.
You can use Experiments or Autotune in your email campaigns. Autotune is Statsig's implementation of the multi-armed bandit approach. For more information on Autotune, you can refer to the Autotune documentation.
For a practical example of how to use Experiments or Autotune in email campaigns, you can refer to this walkthrough guide.
For more specific guidance on your use case, it would be best to reach out to the Statsig team directly. They can provide more detailed information and help you set up your experiment in the most effective way.
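The pattern for reading a subject line from an experiment parameter can be sketched as follows. The experiment name 'email_subject_test' and the 'subject' parameter are hypothetical, and a local stub stands in for the Statsig server SDK so the sketch is self-contained:

```javascript
// Sketch of picking an email subject line from an experiment parameter.
// 'email_subject_test' and 'subject' are assumed names; in production you
// would call the Statsig server SDK's getExperiment instead of this stub.
const statsigStub = {
  getExperiment(user, experimentName) {
    // Pretend this user was assigned a variant with this subject line.
    const params = { subject: "Don't miss our spring sale!" };
    return {
      get: (key, fallback) => (key in params ? params[key] : fallback),
    };
  },
};

function subjectFor(user) {
  const experiment = statsigStub.getExperiment(user, 'email_subject_test');
  // Always supply a default so unassigned users get a sensible subject.
  return experiment.get('subject', 'Your weekly update');
}

const subject = subjectFor({ userID: 'user-42' });
```

The key point is the default value in `get`: users outside the experiment, or users evaluated before it starts, fall back to a safe subject line.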
Statsig provides the capability to target experiments based on user properties, which can include actions users take within an application. When a user performs an action, such as clicking a button, this information can be passed to Statsig as a user property. This property can then be used as a targeting criterion for experiments or feature gates.
To implement this, developers can utilize a 'custom field' as described in the Statsig documentation. This field can be set up to reflect user actions or attributes, enabling real-time targeting based on these criteria.
It is important to note that Statsig operates on the properties of the user that are passed to it, and while it does not store the state of a user, it can act upon the properties provided. For instance, if a 'page_url' property is passed, it can be used to target users who land on a specific page.
Similarly, if an action is taken by the user, such as a button click, this can be communicated to Statsig and used for targeting. For best practices, it is advisable to map different events as different custom fields to avoid overwriting and ensure precise targeting.
For more details on setting up custom fields for targeting, refer to the Statsig documentation on Custom Fields.
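As a sketch, passing user actions as custom fields looks like this. The field names ('page_url', 'clicked_signup') are examples, not required names; use whatever fields you configure as targeting criteria in the console:

```javascript
// Example user object with custom fields reflecting user actions.
// The field names here are illustrative; match them to the custom
// field conditions you set up in the Statsig console.
const user = {
  userID: 'user-123',
  custom: {
    page_url: '/pricing',   // target users who land on a specific page
    clicked_signup: true,   // target users who performed an action
  },
};

// A gate or experiment condition on 'clicked_signup' would then evaluate
// against this value, e.g.: Statsig.checkGate(user, 'post_signup_experience');
```

Keeping each event in its own field, as recommended above, means one action's value never overwrites another's.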
To conduct Quality Assurance (QA) for your experiment while another experiment is active on the same page with an identical layer ID, you can use two methods:
1. Creating a New Layer: You can create a new layer for the new experiment. Layers allow you to run multiple landing page experiments without needing to update the code on the website for each experiment. When you run experiments as part of a layer, you should update the script to specify the `layerid` instead of the `expid`. Here's an example of how to do this:

```html
<script src="https://cdn.jsdelivr.net/npm/statsig-landing-page-exp?apikey=[API_KEY]&layerid=[LAYER_NAME]"></script>
```
By creating a new layer for your new experiment, you can ensure that the two experiments do not interfere with each other. This way, you can conduct QA for your new experiment without affecting the currently active experiment.
2. Using Overrides: For pure QA, you can use overrides to get users into the experiences of your new experiment in that layer. Overrides take total precedence over what experiment a user would have been allocated to, what group the user would have received, or if the user would get no experiment experience because it is not started yet. You can override either individual user IDs or a larger group of users. The only caveat is a given userID will only be overridden into one experiment group per layer. For more information, refer to the Statsig Overrides Documentation.
When you actually want to run the experiment on real users, you will need to find some way to get allocation for it. This could involve concluding the other experiment or lowering its allocation.
If you need to change the owner of your account and upgrade your tier, but the previous owner has left the company, you can reach out to our support team for assistance. Please email them at support@statsig.com from an account that has administrative privileges.
Our support team can help you change the owner of your account and upgrade your tier. To do this, you will need to provide the email address of the person to whom ownership should be transferred.
Please note that this process requires backend changes, which our support team can handle for you. Ensure that you have the necessary permissions and information before reaching out to the support team.
To measure the cumulative impact of different product teams' work, you can create separate holdouts for each team. Here's how you can do it:
1. Navigate to the Holdouts section on the Statsig console.
2. Click the "Create New" button and enter the name and description of the holdout that you want to create for the first team.
3. Select a Holdout size in terms of a percentage of all users. A small holdout percentage, typically between 1% and 5%, is recommended.
4. If there are any existing features that are already gated by the first team, you can select those gates at the bottom to make sure they respect the holdout moving forward.
5. Repeat the process to create a separate holdout for the second team.
Remember, you should not make these holdouts global. Instead, each team should add their gates, experiments, and other configs to their appropriate holdout as they create them. If a gate or feature should be attributed to both teams, you can add it to both holdouts, and it will keep out the union of the two holdout audiences.
The percentage you set your holdout at will depend on how many users you expect to be impacted by the changes of each team. General guidance is in the neighborhood of 1-5%, but you can use the power analysis calculator to generate some intuition if you’re already logging those KPI metrics.
For further information, you can refer to the following resources:
- Getting in on Holdouts
- Statsig Holdouts Documentation
- Power Analysis Calculator
In Statsig, there is no hard limit to the number of dynamic configs you can create. However, the number of configs can have practical implications, particularly on the response size and latency.
Having a large number of dynamic configs can impact the initialization for both server and client SDKs. For Server SDKs, they will have to download every single config and all of their payloads during initialization, and on each polling interval if there’s an update available. This won't necessarily impact user experience, but it does mean large payloads being downloaded and stored in memory on your servers. You can find more information on Server SDKs here.
On the other hand, Client SDKs, where 'assignment' takes place on Statsig’s servers by default, will have to download user-applicable configs and their payloads to the user’s device during initialization. This increases the initialization latency and could potentially impact user experience. More details on Client SDKs can be found here.
In conclusion, while there is no explicit limit to the number of dynamic configs, having a large number can increase complexity and affect performance due to the increased payload size and latency. Therefore, it's important to consider these factors when creating and managing your dynamic configs in Statsig.
At present, Statsig does not offer a direct integration with Sentry for forwarding events. However, there are alternative methods available for event tracking and forwarding.
Statsig supports a wide range of data connectors and integrations, including Segment, Snowflake, Amplitude, Bugsnag, Fivetran, Google Analytics, Heap, Mixpanel, RevenueCat, mParticle, RudderStack, and a generic Webhook.
If you're using a service that we don't have an official integration for, you can use our Generic Webhook integration. This integration sends raw events to the provided webhook URL.
For those using Sentry, the recommended approach is to use Segment (if available) or a generic webhook. More details on how to use the generic webhook can be found in our documentation.
Please note that we are continuously expanding our range of integrations, and Sentry may be considered for future development. We will provide updates as they become available.
The Statsig JavaScript SDK is designed to be as lightweight as possible while supporting as many browsers as possible. The primary feature that our SDK relies on, which may not be supported by all browsers, is a JavaScript Promise. You may wish to polyfill a Promise library to ensure maximum browser compatibility. We recommend taylorhakes/promise-polyfill for its small size and compatibility.
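A minimal conditional loader for the recommended polyfill can be sketched as below. It assumes a CommonJS/bundler environment with the taylorhakes/promise-polyfill package installed; on modern browsers (and Node) the native Promise exists and the require branch is skipped:

```javascript
// Conditionally load taylorhakes/promise-polyfill for very old browsers.
// Assumes a CommonJS/bundler build with the package installed; this loader
// pattern is an illustration, not Statsig's official setup.
if (typeof Promise === 'undefined') {
  require('promise-polyfill/dist/polyfill');
}

// After this point, Promise can be used unconditionally.
const ready = Promise.resolve('ok');
```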
Please note that the SDK has not been tested on Internet Explorer. Microsoft retired Internet Explorer 11 in June 2022. For more detailed information, you can refer to the Statsig JavaScript Client SDK documentation.
For more specific compatibility details, you can refer to the Statsig Compatibility page. This page provides detailed information about the compatibility of Statsig with different versions of browsers like Chrome.
Latency benchmarks are not covered here; please refer to the official Statsig documentation or support for more information.
In a B2B context, the recommended way to roll out a feature customer by customer is by using feature gates. You can create a feature gate and add a list of customer IDs to the conditions of the gate. This way, only the customers in the list will pass the gate and have access to the feature, while the rest will go to the default behavior.
Here's an example of how you can do this:
```javascript
const user = {
  userID: '12345',
  email: '12345@gmail.com',
  // ...other user attributes
};

const showNewDesign = await Statsig.checkGate(user, 'new_homepage_design');
if (showNewDesign) {
  // New feature code here
} else {
  // Default behavior code here
}
```
In this example, 'new_homepage_design' is the feature gate and '12345' is the customer ID; replace these with your own gate name and customer IDs. Dynamic Configs, on the other hand, are more suitable when you want to send a different set of values (strings, numbers, etc.) to your clients based on specific user attributes, such as country.
Remember to follow best practices for feature gates, such as managing for ease and maintainability, selecting the gating decision point, and focusing on one feature per gate.
Alternatively, you can target your feature by CustomerID. You could either use a Custom Field, passing it to the SDK as { custom: { customer: 'xyz123' } }, or create a new Unit Type of customerID and then target by Unit ID. For more information on creating a new Unit Type, refer to the Statsig documentation.
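The two user-object shapes can be sketched side by side. The values are placeholders, and the customerID unit type must first be created in the Statsig console before the second shape will work:

```javascript
// Two ways to key a rollout on the customer, sketched with placeholder
// values. The 'customerID' unit type is assumed to already exist in the
// Statsig console.
const userWithCustomField = {
  userID: 'user-1',
  custom: { customer: 'xyz123' }, // target via a Custom Field condition
};

const userWithUnitType = {
  userID: 'user-1',
  customIDs: { customerID: 'xyz123' }, // target via Unit ID on the unit type
};
```

The unit-type approach is generally preferable when the customer, rather than the individual user, should be the unit of randomization, so everyone at one customer sees the same experience.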
When an experiment is in the "Unstarted" state, the code will revert to the 'default values' in the code. This refers to the default-value parameter you pass to our `get` calls, as documented here.
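This fallback behavior can be illustrated with a local stub. The experiment object below stands in for a real SDK experiment that has no assigned parameters yet, and 'button_color' is a hypothetical parameter name:

```javascript
// Illustration of the default-value fallback for an unstarted experiment.
// This is a local stub, not the real SDK object: before the experiment
// starts, no parameters are assigned, so get() returns the in-code default.
const unstartedExperiment = {
  get: (key, defaultValue) => defaultValue, // no assigned parameters yet
};

const buttonColor = unstartedExperiment.get('button_color', 'blue');
// buttonColor stays 'blue', the in-code default, until the experiment starts.
```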
You have the option to enable an Experiment in lower environments such as staging or development, by toggling it on in those environments prior to starting it in Production. This allows you to test and adjust the experiment as needed before it goes live.
Remember, the status of the experiment is determined by whether the "Start" button has been clicked. If it hasn't, the experiment remains in the "Unstarted" state, allowing you to review and modify the experiment's configuration as needed.