From bfe241485d7ca3ab3a4429753fed5c0ed5caad9a Mon Sep 17 00:00:00 2001
From: brock-statsig <146870406+brock-statsig@users.noreply.github.com>
Date: Fri, 3 Jan 2025 13:36:40 -0600
Subject: [PATCH] Update faq.mdx (#2448)

* Update faq.mdx
* Update faq.mdx
* Update faq.mdx
* Update faq.mdx
---
 docs/faq.mdx | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/docs/faq.mdx b/docs/faq.mdx
index 91ac1a681..36b1cce4a 100644
--- a/docs/faq.mdx
+++ b/docs/faq.mdx
@@ -8,12 +8,12 @@ sidebar_label: FAQs

### How does bucketing within the Statsig SDKs work?

Bucketing in Statsig is deterministic. Given the same user object and the same state of the experiment or feature gate, Statsig always returns the same result, even when evaluated on different platforms (client or server). Here's how it works:

-1. **Salt Creation**: Each experiment or feature gate generates a unique salt.
+1. **Salt Creation**: Each experiment or feature gate rule generates a unique salt.
2. **Hashing**: The user identifier (e.g., userId, organizationId) is passed through a SHA256 hashing function, combined with the salt, which produces a large integer.
3. **Bucket Assignment**: The large integer is then subjected to a modulus operation with 10000 (or 1000 for layers), assigning the user to a bucket.
4. **Bucket Determination**: The result defines the specific bucket out of 10000 (or 1000 for layers) where the user is placed.

-This process ensures a randomized but deterministic bucketing of users across different experiments or feature gates. The unique salt ensures that the same user can be assigned to different buckets in different experiments.
+This process ensures a randomized but deterministic bucketing of users across different experiments or feature gates. The unique salt per experiment or feature gate rule ensures that the same user can be assigned to different buckets in different experiments.
This also means that if you roll out a feature gate rule to 50% - then back to 0% - then back to 50%, the same 50% of users will be re-exposed, **so long as you reuse the same rule** - and don't create a new one. See [here](/faq/#when-i-change-the-rollout-percentage-of-a-rule-on-a-feature-gate-will-users-who-passed-continue-to-pass).

People often assume that we keep track of a list of all IDs and which group they were assigned to in experiments, or which IDs passed a certain feature gate. While our data pipelines do track which users were exposed to which experiment variant in order to generate experiment results, we do not cache previous evaluations or maintain distributed evaluation state across client and server SDKs. That model doesn't scale - we've even talked to customers who had used an implementation like that in the past and were paying more for a Redis instance to maintain that state than they ended up paying to use Statsig instead.

@@ -90,7 +90,11 @@ const assignments = statsig.getClientInitializeResponse(userObj, "client-key", {

### When I change the rollout percentage of a rule on a feature gate, will users who passed continue to pass?

-Yes. If you increase the rollout percentage (e.g., from 10% to 20%), the original 10% will continue to pass, while an additional 10% will start passing. Reducing the percentage will restore the original 10%. To reshuffle users, you'll need to "resalt" the gate.
+Yes. If you increase the rollout percentage (e.g., from 10% to 20%), the original 10% will continue to pass, while an additional 10% will start passing. Reducing the percentage will restore the original 10%. The same behavior applies if you reduce and then re-increase the pass percentage. To reshuffle users, you'll need to "resalt" the gate.
+
+This is only true for the same "rule" on a gate: if you create a new rule with the same pass percentage as another one, it will pass a different set of users.
+
+Note: today, increasing the allocation percentage of an experiment is not guaranteed to behave the same way as described above. If you'd like dependably deterministic allocations, we recommend using targeting gates.
---