
Commit

Update faq.mdx (#2448)
* Update faq.mdx

* Update faq.mdx

* Update faq.mdx

* Update faq.mdx
brock-statsig authored Jan 3, 2025
1 parent 7ea8772 commit bfe2414
Showing 1 changed file with 7 additions and 3 deletions.
10 changes: 7 additions & 3 deletions docs/faq.mdx
@@ -8,12 +8,12 @@ sidebar_label: FAQs

### How does bucketing within the Statsig SDKs work?
Bucketing in Statsig is deterministic. Given the same user object and the same state of the experiment or feature gate, Statsig always returns the same result, even when evaluated on different platforms (client or server). Here's how it works:
1. **Salt Creation**: Each experiment or feature gate generates a unique salt.
1. **Salt Creation**: Each experiment or feature gate rule generates a unique salt.
2. **Hashing**: The user identifier (e.g., userId, organizationId) is combined with the salt and passed through a SHA256 hashing function, which produces a large integer.
3. **Bucket Assignment**: The large integer is then subjected to a modulus operation with 10000 (or 1000 for layers), assigning the user to a bucket.
4. **Bucket Determination**: The result defines the specific bucket out of 10000 (or 1000 for layers) where the user is placed.

This process ensures a randomized but deterministic bucketing of users across different experiments or feature gates. The unique salt ensures that the same user can be assigned to different buckets in different experiments.
This process ensures a randomized but deterministic bucketing of users across different experiments or feature gates. The unique salt per experiment or feature gate rule ensures that the same user can be assigned to different buckets in different experiments. This also means that if you roll out a feature gate rule to 50%, then back to 0%, then back to 50%, the same 50% of users will be re-exposed, **so long as you reuse the same rule** rather than creating a new one. See [here](/faq/#when-i-change-the-rollout-percentage-of-a-rule-on-a-feature-gate-will-users-who-passed-continue-to-pass).
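
For intuition, here is a minimal sketch of that evaluation, assuming the rule's salt and the unit ID are simply concatenated and the leading bytes of the digest are read as the integer; the salt format and exact byte handling inside the SDKs may differ.

```typescript
import { createHash } from "crypto";

// Hypothetical sketch: hash the rule's salt together with the unit ID,
// interpret part of the digest as an integer, and take it mod 10000.
function bucketFor(ruleSalt: string, unitId: string, buckets = 10000): number {
  const digest = createHash("sha256").update(`${ruleSalt}.${unitId}`).digest();
  // Use the first 8 bytes of the digest as an unsigned big-endian integer.
  return Number(digest.readBigUInt64BE(0) % BigInt(buckets));
}

// The same salt and unit ID always produce the same bucket,
// whether this runs on a client or a server.
const bucket = bucketFor("my_gate:rule_1", "user_42");
console.log(bucket); // a value in 0..9999; a 50% rollout passes half the buckets
```

The point is only that the inputs fully determine the bucket; there is no stored assignment to look up.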

People often assume that we keep track of a list of all IDs and which group they were assigned to for experiments, or which IDs passed a certain feature gate. While our data pipelines keep track of which users were exposed to which experiment variant in order to generate experiment results, we do not cache previous evaluations or maintain distributed evaluation state across client and server SDKs. That model doesn't scale - we've even talked to customers who were using an implementation like that in the past and were paying more for a Redis instance to maintain that state than they ended up paying to use Statsig instead.

@@ -90,7 +90,11 @@ const assignments = statsig.getClientInitializeResponse(userObj, "client-key", {

### When I change the rollout percentage of a rule on a feature gate, will users who passed continue to pass?

Yes. If you increase the rollout percentage (e.g., from 10% to 20%), the original 10% will continue to pass, while an additional 10% will start passing. Reducing the percentage will restore the original 10%. To reshuffle users, you'll need to "resalt" the gate.
Yes. If you increase the rollout percentage (e.g., from 10% to 20%), the original 10% will continue to pass, while an additional 10% will start passing. Reducing the percentage will restore the original 10%. The same behavior exists if you reduce then re-increase the pass percentage. To reshuffle users, you'll need to "resalt" the gate.

This is only true for the same "rule" on a gate; if you create a new rule with the same pass percentage as another one, it will pass a different set of users.

Note - today, increasing the allocation percentage of an experiment is not guaranteed to behave the same way as the above. If you'd like dependably deterministic allocations, we recommend using targeting gates.
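
As a rough illustration (not the SDKs' actual implementation), you can think of a gate rule's pass check as comparing the user's fixed bucket against a threshold derived from the pass percentage. Raising the percentage only raises the threshold, so earlier passers keep passing; the assumption here is that a rule's passing buckets form a contiguous prefix of the 10000 buckets.

```typescript
// Illustrative only -- assumes the passing buckets for a rule are a fixed,
// contiguous prefix of the 10000 buckets, which is what makes rollouts additive.
function passesRule(bucket: number, passPercentage: number, buckets = 10000): boolean {
  return bucket < (passPercentage / 100) * buckets;
}

const usersBucket = 742; // deterministic for this user and this rule's salt

console.log(passesRule(usersBucket, 10)); // true at a 10% rollout
console.log(passesRule(usersBucket, 20)); // still true at 20% -- the original 10% is a subset
console.log(passesRule(usersBucket, 0));  // false while rolled back to 0%
console.log(passesRule(usersBucket, 10)); // true again once restored to 10%
```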

---

