The Statsig Python SDK does not offer functionality for setting up the rules of a dynamic config directly.
Instead, dynamic config rules should be managed through the Statsig Console API, which provides the capability to programmatically create and modify dynamic configs, experiments, and feature gates.
This allows for the automation of configurations and the integration of Statsig features into your development workflow. For detailed instructions on how to use the Console API, please refer to the official Statsig Console API documentation.
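As a rough sketch of what this looks like, the helper below assembles a Console API request that updates a dynamic config's rules. The endpoint path, header name, and rule schema here are assumptions based on the Console API's documented conventions (and `my_config`, the rule body, and the key are placeholders), so verify them against the Console API docs before use:

```python
import json

# Base URL for the Statsig Console API (distinct from the SDK/events endpoints).
CONSOLE_API = "https://statsigapi.net/console/v1"

def build_update_rules_request(config_id, rules, api_key):
    """Return (url, headers, body) for a request updating a dynamic config's rules."""
    url = f"{CONSOLE_API}/dynamic_configs/{config_id}"
    headers = {
        "STATSIG-API-KEY": api_key,   # a Console API key, not an SDK key
        "Content-Type": "application/json",
    }
    body = json.dumps({"rules": rules})
    return url, headers, body

# Hypothetical rule: serve a different value to US users.
url, headers, body = build_update_rules_request(
    "my_config",
    [{
        "name": "US users",
        "passPercentage": 100,
        "conditions": [{"type": "country", "operator": "any", "targetValue": ["US"]}],
        "returnValue": {"title": "Hello, US!"},
    }],
    "console-xxxx",
)
# Send with any HTTP client, e.g. requests.patch(url, headers=headers, data=body)
```

Only the request construction is shown; sending it (and the exact verb, PATCH vs POST) should follow the Console API documentation for your config type.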
To measure the cumulative impact of different product teams' work, you can create separate holdouts for each team. Here's how you can do it:
1. Navigate to the Holdouts section on the Statsig console.
2. Click the "Create New" button and enter the name and description of the holdout that you want to create for the first team.
3. Select a Holdout size in terms of a percentage of all users. A small holdout percentage, typically between 1% and 5%, is recommended.
4. If there are any existing features that are already gated by the first team, you can select those gates at the bottom to make sure they respect the holdout moving forward.
5. Repeat the process to create a separate holdout for the second team.
Remember, you should not make these holdouts global. Instead, each team should add their gates/experiments/etc to their appropriate holdout as they create them. If there are gates/features that should be attributed to both teams, you can add that gate to both holdouts - and it will just keep out the union of their holdout audiences.
The percentage you set your holdout at will depend on how many users you expect to be impacted by the changes of each team. General guidance is in the neighborhood of 1-5%, but you can use the power analysis calculator to generate some intuition if you’re already logging those KPI metrics.
For further information, you can refer to the following resources:
- Getting in on Holdouts
- Statsig Holdouts Documentation
- Power Analysis Calculator
You can indeed send the environment information using the HTTP API in Statsig. The process involves logging an event with a custom environment. Here's an example of how to do this:
```bash
curl \
  --header "statsig-api-key: <YOUR-SDK-KEY>" \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{"events": [{"user": { "userID": "42", "statsigEnvironment": {"tier": "staging"} }, "time": 1616826986211, "eventName": "test_api_event"}]}' \
  "https://events.statsigapi.net/v1/log_event"
```
In this example, the `statsigEnvironment` field is included in the user object, and it contains a `tier` field set to "staging". You can replace "staging" with your desired environment. For more information, you can refer to the Statsig HTTP API documentation.
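The same request can be assembled from Python. This sketch only builds the headers and body, mirroring the curl example above; the function name is illustrative, and sending is left to whatever HTTP client you use:

```python
import json
import time

LOG_EVENT_URL = "https://events.statsigapi.net/v1/log_event"

def build_log_event(sdk_key, user_id, tier, event_name):
    """Build headers and JSON body for a log_event call with a custom environment."""
    headers = {
        "statsig-api-key": sdk_key,
        "Content-Type": "application/json",
    }
    payload = {
        "events": [{
            "user": {
                "userID": user_id,
                "statsigEnvironment": {"tier": tier},
            },
            "time": int(time.time() * 1000),  # epoch milliseconds
            "eventName": event_name,
        }]
    }
    return headers, json.dumps(payload)

headers, body = build_log_event("<YOUR-SDK-KEY>", "42", "staging", "test_api_event")
# Send with, e.g., requests.post(LOG_EVENT_URL, headers=headers, data=body)
```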
When conducting multiple experiments, the decision to run them in the same layer versus different layers has significant implications.
Placing experiments in the same layer ensures that there is no overlap between participants in different experiments. This is beneficial for eliminating interaction effects between experiments, as no user will be part of more than one experiment at a time. However, a critical consideration is that using layers divides the user base, which can substantially reduce the experimental power and sample size.
This division of the user base means that with just two experiments in a layer, each experiment receives at most half of the eligible users. Consequently, this reduction can limit the number of experiments that can be conducted simultaneously and may prolong the duration required to achieve statistically significant results.
When experiments are run in a layer and thus have a smaller sample size, they have less statistical power: small effects become harder to detect, and confidence intervals are wider for any given experiment duration.
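The power cost can be quantified: the minimum detectable effect (MDE) scales with 1/sqrt(n), so halving the sample size inflates the smallest effect you can reliably detect by sqrt(2), roughly 1.41x. A minimal illustration:

```python
from math import sqrt

def mde_scale(fraction_of_traffic):
    """Factor by which the MDE grows when an experiment
    receives only this fraction of the eligible users."""
    return 1 / sqrt(fraction_of_traffic)

print(mde_scale(0.5))   # two experiments sharing a layer -> ~1.41x larger MDE
print(mde_scale(0.25))  # four experiments sharing a layer -> 2x larger MDE
```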
For a more in-depth discussion on the topic, including the trade-offs between isolating experiments and embracing overlapping A/B tests, refer to the article Embracing Overlapping A/B Tests and the Danger of Isolating Experiments.
In Statsig, you can use Dynamic Configs to send a different set of values (strings, numbers, etc.) to your clients based on specific user attributes. This is similar to Feature Gates, but you get an entire JSON object you can configure on the server and fetch typed parameters from it. Here's an example from the documentation:
```
var config = Statsig.GetConfig("awesome_product_details");
```
You can also use Layers/Experiments to run A/B/n experiments. We offer two APIs, but we recommend the use of layers to enable quicker iterations with parameter reuse.
However, if you're looking to dynamically set a series of flags for a user, you might need to replicate your current system using Statsig's rules. You can create rules based on user attributes and set the value of the flags accordingly.
Remember to provide a StatsigUser object whenever possible when initializing the SDK, passing as much information as possible in order to take advantage of advanced gate and config conditions.
If you're running tests in a full-stack environment and using the Ruby SDK on the server, you can override the flag value locally. Here's the relevant documentation: Local Overrides. This allows you to control the flag values based on your setup.
Currently, there is no admin Command Line Interface (CLI) or Software Development Kit (SDK) specifically designed for creating and configuring gates and experiments in Statsig. However, you can use the Statsig Console API for these tasks.
The Console API documentation provides detailed instructions and examples on how to use it; start with the introduction to the Console API, then see the section on creating and configuring gates.
While there are no immediate plans to build a CLI for these tasks, the Console API documentation includes curl command examples that might be helpful for developers looking to automate these tasks.
Please note that the Statsig SDKs are primarily used for checking the status of gates and experiments, not for creating or configuring them.
The Statsig JavaScript SDK is designed to be as lightweight as possible while supporting as many browsers as possible. The primary feature that our SDK relies on, which may not be supported by all browsers, is a JavaScript Promise. You may wish to polyfill a Promise library to ensure maximum browser compatibility. We recommend taylorhakes/promise-polyfill for its small size and compatibility.
Please note that the SDK has not been tested on Internet Explorer. Microsoft is retiring IE11 in June 2022. For more detailed information, you can refer to the Statsig JavaScript Client SDK documentation.
For more specific compatibility details, you can refer to the Statsig Compatibility page. This page provides detailed information about the compatibility of Statsig with different versions of browsers like Chrome.
As for latency information, no details are available here; please refer to the official Statsig documentation or contact support.
In a B2B context, the recommended way to roll out a feature customer by customer is by using feature gates. You can create a feature gate and add a list of customer IDs to the conditions of the gate. This way, only the customers in the list will pass the gate and have access to the feature, while the rest will go to the default behavior.
Here's an example of how you can do this:
```js
const user = {
  userID: '12345',
  email: '12345@gmail.com',
  // ...
};

const showNewDesign = await Statsig.checkGate(user, 'new_homepage_design');
if (showNewDesign) {
  // New feature code here
} else {
  // Default behavior code here
}
```
In this example, 'new_homepage_design' is the feature gate, and '12345' is the customer ID. You can replace these with your own feature gate and customer IDs. On the other hand, Dynamic Configs are more suitable when you want to send a different set of values (strings, numbers, etc.) to your clients based on specific user attributes, such as country.
Remember to follow best practices for feature gates, such as managing for ease and maintainability, selecting the gating decision point, and focusing on one feature per gate.
Alternatively, you can target your feature by customer ID. You could either use a Custom Field, passing it to the SDK as `{custom: {customer: "xyz123"}}`, or create a new Unit Type of customerID and then target by Unit ID. For more information on creating a new Unit Type, refer to the Statsig documentation.
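The two user-object shapes described above look like the following. These are plain dicts mirroring the StatsigUser fields; the `customerID` key under `customIDs` is an assumption, so use whatever Unit Type name you actually create in the console:

```python
# Option 1: a custom field, matched by a Custom Field condition on the gate.
user_with_custom_field = {
    "userID": "12345",
    "custom": {"customer": "xyz123"},
}

# Option 2: a custom ID, matched by a Unit ID condition on a "customerID" Unit Type.
user_with_custom_id = {
    "userID": "12345",
    "customIDs": {"customerID": "xyz123"},
}

print(user_with_custom_field["custom"]["customer"])  # -> xyz123
```

Option 2 has the advantage that experiments and gates can randomize and target directly on the customer unit rather than the individual user.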