diff --git a/docs/access-management/scim/concepts.mdx b/docs/access-management/scim/concepts.mdx index 5dfd72d6f..de22af662 100644 --- a/docs/access-management/scim/concepts.mdx +++ b/docs/access-management/scim/concepts.mdx @@ -8,13 +8,6 @@ import AlertTitle from "@mui/material/AlertTitle"; # SCIM Concepts for Statsig -:::info - -### Open Beta - -SCIM is now available in Open Beta. Contact us to get started. For support, reach out to us on Slack. -::: -
Our SCIM implementation represents both Statsig users at the Organization level and Project level with their associated roles. There are two major resources in our SCIM diff --git a/docs/access-management/scim/okta_scim.mdx b/docs/access-management/scim/okta_scim.mdx index b88123150..0bb634315 100644 --- a/docs/access-management/scim/okta_scim.mdx +++ b/docs/access-management/scim/okta_scim.mdx @@ -8,13 +8,6 @@ import AlertTitle from "@mui/material/AlertTitle"; # Okta SCIM Provisioning -:::info - -### Open Beta - -SCIM is now available in Open Beta. Contact us to get started. For support, reach out to us on Slack. -::: -
This guide outlines the process for setting up SCIM (System for Cross-domain Identity Management) integration between Statsig and Okta. This integration allows for automated diff --git a/docs/access-management/scim/overview.mdx b/docs/access-management/scim/overview.mdx index 26b015eed..4219bf769 100644 --- a/docs/access-management/scim/overview.mdx +++ b/docs/access-management/scim/overview.mdx @@ -8,13 +8,6 @@ import AlertTitle from "@mui/material/AlertTitle"; # SCIM User Provisioning -:::info - -### Open Beta - -SCIM is now available in Open Beta. Contact us to get started. For support, reach out to us on Slack. -::: - ## Introduction SCIM (System for Cross-domain Identity Management) is a standardized protocol that simplifies the automation of user provisioning and management across multiple platforms. diff --git a/docs/client/React/_reference.mdx b/docs/client/React/_reference.mdx index 140422afb..7b2a81d74 100644 --- a/docs/client/React/_reference.mdx +++ b/docs/client/React/_reference.mdx @@ -56,7 +56,7 @@ The StatsigProvider is a [react context provider](https://reactjs.org/docs/conte - `children: React.ReactNode | React.ReactNode[]` - One or more child components - `sdkKey: string` - A client SDK key from the Statsig Console -- `user: StatsigUser` - A [StatsigUser](/client/concepts/user) object. Changing this will update the experiment and gate values, causing a re-initialization and rerender +- `user: StatsigUser` - A [StatsigUser](/server/concepts/user) object. Changing this will update the experiment and gate values, causing a re-initialization and rerender - `options?: StatsigOptions` - See [StatsigOptions](/client/deprecated/reactSDK#statsig-options). An optional bag of initialization properties (mostly shared with the statsig-js sdk) for advanced configuration. - `waitForInitialization?: boolean` - - `initializingComponent?: React.ReactNode | React.ReactNode[]` - A loading component to render if and only if `waitForInitialization` is set to `true` and the SDK is initializing @@ -70,7 +70,7 @@ also be leveraged for apps that do not require loading states. - `children: React.ReactNode | React.ReactNode[]` - One or more child components - `sdkKey: string` - A client SDK key from the Statsig Console -- `user: StatsigUser` - A [StatsigUser](/client/concepts/user) object. Changing this will update the experiment and gate values, causing a re-initialization and rerender +- `user: StatsigUser` - A [StatsigUser](/server/concepts/user) object. Changing this will update the experiment and gate values, causing a re-initialization and rerender - `options?: StatsigOptions` - See [StatsigOptions](/client/deprecated/reactSDK#statsig-options). An optional bag of initialization properties (mostly shared with the statsig-js sdk) for advanced configuration. - `initializeValues: Record` - JSON object, generated by a Statsig Server SDK. See [Server Side Rendering](#ssr). diff --git a/docs/client/ReactNative/_reference.mdx b/docs/client/ReactNative/_reference.mdx index 0c66b55f0..6a143a254 100644 --- a/docs/client/ReactNative/_reference.mdx +++ b/docs/client/ReactNative/_reference.mdx @@ -14,7 +14,7 @@ StatsigProvider is a [react context provider](https://reactjs.org/docs/context.h - `children: React.ReactNode | React.ReactNode[];` - One or more child components - `sdkKey: string;` - A client SDK key from the Statsig Console -- `user: StatsigUser;` - A [Statsig User](/client/concepts/user) object. 
Changing this will update the user and Gate values, causing a re-initialization +- `user: StatsigUser;` - A [Statsig User](/server/concepts/user) object. Changing this will update the user and Gate values, causing a re-initialization - `options?: StatsigOptions;` - See [StatsigOptions](/client/deprecated/reactNativeSDK#statsig-options). An optional bag of initialization properties (shared with the statsig-js sdk) for advanced configuration. - `waitForInitialization?: boolean;` - Waits for the SDK to initialize with updated values before rendering child components - `initializingComponent?: React.ReactNode | React.ReactNode[];` - A loading component to render iff waitForInitialization is set to true and the SDK is initializing diff --git a/docs/client/Roku/_initialize.mdx b/docs/client/Roku/_initialize.mdx index c51fe8ed4..79a7fd80d 100644 --- a/docs/client/Roku/_initialize.mdx +++ b/docs/client/Roku/_initialize.mdx @@ -26,7 +26,7 @@ Next, you can initialize the library in your init() function, and add a listener m.statsig.initialize("", user) ``` -For more information on all of the user fields you can use, see the [StatsigUser docs](/client/concepts/user). +For more information on all of the user fields you can use, see the [StatsigUser docs](/server/concepts/user). Before the SDK has loaded the updated values, all APIs will return default values (false for gates, empty configs and experiments). To implement a callback handler for statsig being ready, and tell the SDK to load the updated values in the `onStatsigReady` function observed above: diff --git a/docs/client/concepts/initialize.mdx b/docs/client/concepts/initialize.mdx index 519cc51be..4111d81bf 100644 --- a/docs/client/concepts/initialize.mdx +++ b/docs/client/concepts/initialize.mdx @@ -1,28 +1,149 @@ --- -title: Initializing a Client SDK +title: Initializing SDKs sidebar_label: Initializing --- -One of the first steps in using a Statsig client SDK is to call the asynchronous `initialize()` method in the SDK language of your choice. -If you're looking for synchronous initialization or server side rendering, skip down to the bottom -### General Flow +import Tabs from "@theme/Tabs"; +import TabItem from "@theme/TabItem"; +import GitHubEmbed from "@site/src/components/GitHubEmbed"; + +One of the first steps in using a Statsig client SDK is to call the asynchronous `initialize()` method in the SDK language of your choice. This retrieves the values you need to evaluate flags or experiments and send events: in client SDKs, only the values for your defined user are provided, while Server SDKs fetch your entire ruleset. + +## General Initialization Flow + + + `initialize` will take an SDK key and `StatsigUser` object, as well as a set of options to parameterize the SDK. The SDK will then do a few things: 1. Check local storage for cached values. The SDK caches the previous evaluations locally so they are available on the next session without a successful network call -2. Create a cache a `STATSIG_STABLE_ID` for experimenting on `stableID` - e.g. for experimenting and stable evaluations across the logged out to logged in boundary, where there userID may go from undefined to known. -3. Set the SDK as initialized so checks won't throw - they will either return cached values or defaults. +2. Create a `STATSIG_STABLE_ID` - an ID that stays consistent per-device, which can often be helpful for logged-out experiments. +3. Set the SDK as initialized so checks won't throw errors - they will either return cached values or defaults. 4.
Issue a network request to Statsig to get the latest values for all gates/experiments/configs/layers/autotunes for the given user. If the project definition does not change from the most recent cached values, this request may succeed without returning new data. 5. Resolve the asynchronous `initialize` call. If the request to the server failed, the SDK will have the cached values or return defaults for this session. +Depending on when you check a gate or experiment after initializing, it's possible that you may not have retrieved fresh values yet. Awaiting the return of the initialize function is one easy way to ensure you have fresh values, but it has speed disadvantages, discussed below. + + + + + +Server SDKs require only a secret key to initialize. As most servers are expected to deal with many users, server SDKs download all rules and configurations you have in your project and evaluate them in realtime for each user that is encountered. The process for Server SDK initialization looks something like this: +1. Your server checks if you have locally cached values (which you can set up with a [DataAdapter](/server/concepts/data_store/)) +2. If your server found values on the last call, it'll be ready for checks with the reason "DataAdapter". Whether it found local data or not, it'll next go to the network to find updated values. +3. Your server retrieves updated rules from Statsig, and is now ready for checks even if it didn't find values in step 1. +4. Going forward, your server will retrieve new values from Statsig every 10 seconds, updating the locally cached values each time. + +DataAdapters provide a layer of resilience, and ensure your server is ready to serve requests as soon as it starts up rather than waiting for a network roundtrip, which can be especially valuable if you have short-lived or serverless instances. While only recommended for advanced setups, Statsig also offers a [Forward Proxy](/server/concepts/forward_proxy/) that can add an extra layer of resilience to your server setup. + + + + + +## Client Initialization Strategies + +We now offer several strategies for initializing `StatsigClient` — allowing customers to fine-tune and minimize latency. The various strategies carry some tradeoffs, and should be carefully considered based on your performance requirements and experimentation needs. + +Below are the various strategies summarized at a high level, ordered from most common to least common: + +- [**Asynchronous Initialization (Awaited)**](#1-asynchronous-initialization---awaited): Ensures user assignments are available after initialization but adds latency due to awaiting fresh assignments being fetched over the network from Statsig servers. +- [**Bootstrap Initialization**](#2-bootstrap-initialization): Best of both worlds for latency and availability of fresh assignments, but requires additional engineering effort. Pass up-to-date Statsig values down with other server responses, minimizing latency. +- [**Asynchronous Initialization (Not Awaited)**](#3-asynchronous-initialization---not-awaited): This ensures immediate rendering, but in a state that reflects stale assignments or no assignments available (resulting in default values being used). + - After initialization, the client will then fetch fresh assignments over the network from Statsig. Subsequent calls to check assignments may result in different assignments than the initial state and therefore render a different treatment (_this is referred to as "flicker"_). 
This mimics the old behavior of `StatsigProvider.waitForInitialization=false`. +- [**Synchronous Initialization**](#4-synchronous-initialization): Ensures immediate rendering but with stale or no assignments available. First-visit users will never be assigned to gates and experiments. These users will only see updated assignments after they do a hard-refresh of the website. Effectively, all assignment information is 1 page load stale. + +### 1. Asynchronous Initialization - Awaited +> Ensures latest assignments but requires a loading state + +When calling `StatsigClient.initializeAsync`, the client loads values from the cache and fetches the latest values from the network. This approach waits for the latest values before rendering, which means it is not immediate but ensures the values are up-to-date. + +Example: + + + + + + + + + + + + +### 2. Bootstrap Initialization +> Ensures the latest assignments with no rendering latency + +Bootstrapping allows you to initialize the client with a JSON string. This approach ensures that values are immediately available without the client making any network requests. Note that you will be responsible for keeping these values up to date. +With this approach, your server will be responsible for serving the configuration payload to your client app on page load (for web implementations) or during app launch (for mobile implementations). + +This architecture requires running a server SDK that supports the `getClientInitializeResponse` method. +Your server SDK will maintain a fresh configuration in memory and when a request hits your route handler, you should call `getClientInitializeResponse()`, passing in a StatsigUser object to generate the configuration object that gets passed to the client SDK for synchronous initialization. + +#### Implementation Notes: +* You should pass the same user object on both the server and client side - take care that these stay in sync. This can become particularly hairy in the case of keeping stableID in sync, as it'll re-generate when it doesn't exist. See [here](https://docs.statsig.com/client/javascript-sdk#keeping-stableid-consistent-across-client--server). +* The `initializeValues` option should be an Object - except in our js SDK, where it's expected to be a string. Calling `JSON.stringify()` on the object should work. + +Example: + + + + + + + + + + + + + +### 3. Asynchronous Initialization - Not Awaited + +If you want to fetch the latest values without awaiting the asynchronous call, you can call `initializeAsync` and catch the promise. +This approach provides immediate rendering with cached values initially, which will update to the latest values mid-session. + +:::caution +Be aware that the values may switch when checked a second time after the latest values have been loaded. +::: + +Example: + + + +### 4. Synchronous Initialization +> Ensures immediate rendering but uses cached assignments (when available) + +When calling `StatsigClient.initializeSync`, the client uses cached values if they are available. The client fetches new values in the background and updates the cache. This approach provides immediate rendering, but the values might be stale or absent during the first session. + +Example: + + + +These strategies help you balance the need for the latest gate / experiment assignment information with the need for immediate application rendering based on your specific requirements. 
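+For reference, a minimal sketch of how the awaited (Strategy 1) and synchronous (Strategy 4) call patterns differ, assuming the `@statsig/js-client` package; the key and gate name below are placeholders:
+
+```typescript
+import { StatsigClient } from '@statsig/js-client';
+
+// placeholder key and user for illustration
+const client = new StatsigClient('client-YOUR_CLIENT_KEY', { userID: 'a-user' });
+
+// Strategy 1 - Awaited: the network fetch completes before any checks, so values are fresh
+await client.initializeAsync();
+
+// Strategy 4 - Synchronous (alternative): returns immediately using cached values,
+// then updates in the background
+// client.initializeSync();
+
+if (client.checkGate('my_gate')) {
+  // show the new treatment
+}
+```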
+ -### Synchronous Initialization -If asynchronous initialization is not an option for your performance requirements, you can initialize the SDK synchronously - throughout the docs, we refer to how you can accomplish this as "Server Side Rendering." -The client SDK still needs to know the gate and experiment values for the given user at initialization time, but you can use a server SDK to generate those values, so when your first network roundtrip completes, you can initialize the statsig SDK synchronously rather than waiting for a network round trip to Statsig servers. -The server SDK will take in a user object and use its local evaluation to generate the same response that the `/initialize` network call would generate. +### /Initialize Response Schema -This integration requires a client sdk and a server SDK, so it is a bit more setup, but it will give you the most performant way to integrate Statsig in your website. We currently only support this on the web (js, react) and mobile (react native, expo, android, ios) SDKs. +Provided for reference if you're implementing Bootstrapping - the job of your server is to provide the values that Statsig's servers would otherwise provide when the client calls /initialize. Statsig's getClientInitializeResponse function provides this payload. -### Schema of /initialize response ```typescript /** Specs for Dynamic Configs */ dynamic_configs: { diff --git a/docs/client/concepts/parameter-stores.mdx b/docs/client/concepts/parameter-stores.mdx index 1b6d9cc64..432f61b96 100644 --- a/docs/client/concepts/parameter-stores.mdx +++ b/docs/client/concepts/parameter-stores.mdx @@ -1,22 +1,34 @@ --- title: Parameter Stores -sidebar_label: Parameter Stores +sidebar_label: Using Parameter Stores slug: /client/concepts/parameter-stores --- -:::info -Parameter Stores are only available for Statsig Client SDKs. -::: - -Parameter Stores provide a new way to organize and manage parameters in your web or mobile app via the Statsig console. They are now available for the Statsig JS, React, React Native, Android, iOS, and Dart SDKs. +Parameter Stores provide a new way to organize and manage parameters in your web or mobile app via the Statsig console. Now available for the JS, React, React Native, Android, iOS, and Dart SDKs, with Server SDK support coming soon - let us know in [Slack](https://statsig.com/slack) if you'd like support in a particular language. ## **What is a Parameter Store?** Rather than thinking in terms of Statsig entities like Feature Gates, Experiments, Layers, or Dynamic Configs, Parameter Stores let you focus on **parameters**—the values in your app that need to be configurable remotely. -Parameter Stores decouple your code from the configuration, much like Statsig Layers decouple your code from specific experiments. This level of abstraction allows you to run experiments, adjust gating, or change values on the fly—**without hardcoding experiment names** in your app. +Parameter Stores **decouple your code from configuration, indefinitely**. This abstraction allows you to run experiments, adjust gating, or change values on the fly—**without hardcoding any experiment/gate names**. Instead, you define *parameters* that can be remapped remotely to any value or any Statsig entity. 
+ +## **An Example: Parameterizing the Statsig Website** + +While release cycles are usually more painful on platforms like mobile, take the example of the Statsig website: perhaps our marketing team asks for frequent updates, so we'd prefer to parameterize the text, images, buttons, colors, and more: + + + + +When the time comes to run an experiment, we can point these variables directly at experiments - starting an A/B test without writing a line of code: + + + +Now, you've begun an experiment on your tagline, without ever making a code change. You continue to access the parameter in-code like this: -Each parameter can be remapped between Statsig entities (as long as the parameter type remains the same). These parameters will receive dynamic values depending on the `StatsigUser` the SDK is initialized with, and they can be updated anytime without requiring a mobile code change or waiting for your mobile release cycle. +```javascript +const homepageStore = StatsigClient.getParameterStore("www_homepage"); +const tagline = homepageStore.get("tagline", "Our default tagline"); +``` ## **How to Use Parameter Stores** diff --git a/docs/client/html/_faqs.mdx b/docs/client/html/_faqs.mdx index 94f069116..d5ffaaf54 100644 --- a/docs/client/html/_faqs.mdx +++ b/docs/client/html/_faqs.mdx @@ -7,5 +7,17 @@ Yes, you can remove the client API key from the url and see the [Javascript SDK You will need to create your own instance differently than if you were installing the sdk via npm: ```js -const client = new window.__STATSIG__.StatsigClient("client-test", { userID: "123"}, {}); -``` \ No newline at end of file +const { StatsigClient, runStatsigAutoCapture, runStatsigSessionReplay } = window.Statsig; + +const client = new StatsigClient( + '', + { userID: 'a-user' } +); + +runStatsigSessionReplay(client); +runStatsigAutoCapture(client); + +await client.initializeAsync(); + +// check gates, configs, experiments, or log events +``` diff --git a/docs/client/html/_initialize.mdx b/docs/client/html/_initialize.mdx index 9b724d977..0618baf76 100644 --- a/docs/client/html/_initialize.mdx +++ b/docs/client/html/_initialize.mdx @@ -14,12 +14,16 @@ To manually initialize an instance of the sdk, remove the key parameter from the ```js -const client = new window.__STATSIG__.StatsigClient( - "client-test", - { userID: "123"}, // StatsigUser - {}, // StatsigOptions +const { StatsigClient, runStatsigAutoCapture, runStatsigSessionReplay } = window.Statsig; + +const client = new StatsigClient( + '', + { userID: 'a-user' } ); +runStatsigSessionReplay(client); +runStatsigAutoCapture(client); + await client.initializeAsync(); // check gates, configs, experiments, or log events diff --git a/docs/client/introduction.mdx b/docs/client/introduction.mdx index 4cdca7481..a48791831 100644 --- a/docs/client/introduction.mdx +++ b/docs/client/introduction.mdx @@ -15,12 +15,12 @@ As shown in the diagram below, implementing an experiment using a Statsig client ### 1. Initialization -- The client SDK's `initialize` call takes the **client SDK key** and a [**StatsigUser**](/client/concepts/user) object. First, it checks for cached values from a previous initialize in local storage, and then it makes a network request to Statsig servers; this network call fetches precomputed configuration parameters for the specified user from Statsig servers and stores these parameters in local storage on the client device. +- The client SDK's `initialize` call takes the **client SDK key** and a [**StatsigUser**](/server/concepts/user) object. 
First, it checks for cached values from a previous initialize in local storage, and then it makes a network request to Statsig servers; this network call fetches precomputed configuration parameters for the specified user from Statsig servers and stores these parameters in local storage on the client device. If the request fails, the previous cached values are used. - Statsig's server latency to service `initialize` calls is generally 10ms (p50); the latency for given client may vary depending on how far the device is from Statsig's servers; the client SDK has a built-in timeout of 3 seconds that you can configure using **StatsigOptions** when you initialize the SDK -- The [**StatsigUser**](/client/concepts/user) object that you provide in the `initialize` call should include the user identifier, _userID_, that you use to identify the end-user of your application; the client SDK also generates a device identifier called _stableID_ to enable experiments where users aren’t signed in and a _userID_ is not available; you can choose to override this _stableID_ through **StatsigOptions** using the _overrideStableID_ parameter when you initialize the SDK +- The [**StatsigUser**](/server/concepts/user) object that you provide in the `initialize` call should include the user identifier, _userID_, that you use to identify the end-user of your application; the client SDK also generates a device identifier called _stableID_ to enable experiments where users aren’t signed in and a _userID_ is not available; you can choose to override this _stableID_ through **StatsigOptions** using the _overrideStableID_ parameter when you initialize the SDK ### 2. Checking an Experiment @@ -49,7 +49,7 @@ If the request fails, the previous cached values are used. :::info Best Practices -**Using [**StatsigUser**](/client/concepts/user)** +**Using [**StatsigUser**](/server/concepts/user)** Learn how to use [StatsigUser](/server/concepts/user) while using a client SDK. diff --git a/docs/client/javascript-mono/ReactNativeUsage.mdx b/docs/client/javascript-mono/ReactNativeUsage.mdx index 1242676ed..62ee0f6dc 100644 --- a/docs/client/javascript-mono/ReactNativeUsage.mdx +++ b/docs/client/javascript-mono/ReactNativeUsage.mdx @@ -46,6 +46,9 @@ export const ReactNativeAdvanced = _ReactNativeAdvanced; import * as _reactGatesAndConfigs from "./react/_reactGatesAndConfigs.mdx"; export const ReactGatesAndConfigs = _reactGatesAndConfigs; +import * as _reactNativeLoadingState from "./_reactNativeLoadingState.mdx"; +export const ReactNativeLoadingState = _reactNativeLoadingState; + export const Builder = SDKDocsBuilder({ sections: [ [ @@ -57,6 +60,7 @@ export const Builder = SDKDocsBuilder({ [ReactNativeInstall, {}], [ReactNativeSetup, {}], [ReactGatesAndConfigs, { packageName: "react-native" }], + [ReactNativeLoadingState, {}], [ReactNativeAdvanced, {}], ] }) diff --git a/docs/client/javascript-mono/_InitStrategies.mdx b/docs/client/javascript-mono/_InitStrategies.mdx index 942b9458f..2ffa0b9b0 100644 --- a/docs/client/javascript-mono/_InitStrategies.mdx +++ b/docs/client/javascript-mono/_InitStrategies.mdx @@ -85,10 +85,4 @@ Example: -These strategies help you balance the need for the latest gate / experiment assignment information with the need for immediate application rendering based on your specific requirements. - - - - - - +These strategies help you balance the need for the latest gate / experiment assignment information with the need for immediate application rendering based on your specific requirements. 
\ No newline at end of file diff --git a/docs/client/javascript-mono/_reactNativeAdvanced.mdx b/docs/client/javascript-mono/_reactNativeAdvanced.mdx index 816c795e8..50419c282 100644 --- a/docs/client/javascript-mono/_reactNativeAdvanced.mdx +++ b/docs/client/javascript-mono/_reactNativeAdvanced.mdx @@ -3,7 +3,7 @@ ### StatsigClient Outside the Component Tree In some scenarios, you may need to use the `StatsigClient` when you are not in the React component tree. Things like background tasks or handling notifications. -For these, you can use the Expo specific `StatsigClientRN`. +For these, you can use the RN-specific `StatsigClientRN`. ```typescript import { StatsigClientRN } from '@statsig/react-native-bindings'; diff --git a/docs/client/javascript-mono/_reactNativeLoadingState.mdx b/docs/client/javascript-mono/_reactNativeLoadingState.mdx new file mode 100644 index 000000000..85f2baf76 --- /dev/null +++ b/docs/client/javascript-mono/_reactNativeLoadingState.mdx @@ -0,0 +1,58 @@ +import Tabs from "@theme/Tabs"; +import TabItem from "@theme/TabItem"; + + +## Loading State + +Depending on your setup, you may want to wait for the latest values before checking a gate or experiment. +If you are using the `StatsigProviderRN`, you can pass in a `loadingComponent` prop to display a loading state while the SDK is initializing. +If you are using the `useClientAsyncInitRN` hook, you can check the `isLoading` prop to determine if the SDK is still loading. + + + + + + +```tsx +export function App() { + const loadingComponent = <Text>Loading...</Text>; + + return ( + <StatsigProviderRN sdkKey="client-sdk-key" user={{ userID: 'a-user' }} loadingComponent={loadingComponent}> + <YourAppComponent /> + </StatsigProviderRN> + ); +} +``` + 
+ +```tsx +export function App() { + const { client, isLoading } = useClientAsyncInitRN(...); + + if (isLoading) { + return <Text>Loading...</Text>; + } + + return ( + <StatsigProviderRN client={client}> + <YourAppComponent /> + </StatsigProviderRN> + ); +} +``` + 
+
+ diff --git a/docs/client/javascript-mono/nextjs/NextJsUsage.mdx b/docs/client/javascript-mono/nextjs/NextJsUsage.mdx index 0fb396e20..d9ba79fe9 100644 --- a/docs/client/javascript-mono/nextjs/NextJsUsage.mdx +++ b/docs/client/javascript-mono/nextjs/NextJsUsage.mdx @@ -34,6 +34,7 @@ import * as GetParamStore from "./_nextJsParamStore.mdx"; import * as Proxy from "./_nextJsProxy.mdx"; import * as LogEvent from "./_nextJsLogEvent.mdx"; import * as SrAndAc from "../_sessionReplayAutoCapture.mdx"; +import * as Advanced from "./_nextJsAdvanced.mdx"; export const Builder = SDKDocsBuilder({ sections: [ @@ -51,9 +52,10 @@ export const Builder = SDKDocsBuilder({ [GetExperiment, {}], [GetParamStore, {}], [LogEvent, {}], + [SrAndAc, {}], + [Advanced, {}], [Bootstrap, {}], [Proxy, {}], - [SrAndAc, {}] ] }) diff --git a/docs/client/javascript-mono/nextjs/_nextJSPageVsAppRouter.mdx b/docs/client/javascript-mono/nextjs/_nextJSPageVsAppRouter.mdx index a3d4d3c91..8197c3c60 100644 --- a/docs/client/javascript-mono/nextjs/_nextJSPageVsAppRouter.mdx +++ b/docs/client/javascript-mono/nextjs/_nextJSPageVsAppRouter.mdx @@ -1,7 +1,7 @@ import Tabs from "@theme/Tabs"; import TabItem from "@theme/TabItem"; -## Integrating with Next.js +## Basic Usage with Next.js Statsig supports both the [Page Router](https://nextjs.org/docs/pages) and [App Router](https://nextjs.org/docs/app) in Next.js. There are some differences in how you integrate Statsig into each. @@ -35,22 +35,21 @@ import React, { useEffect } from "react"; import { LogLevel, StatsigProvider, - useClientAsyncInit, } from "@statsig/react-bindings"; -import { runStatsigAutoCapture } from "@statsig/web-analytics"; +import { StatsigAutoCapturePlugin } from '@statsig/web-analytics'; export default function MyStatsig({ children }: { children: React.ReactNode }) { - const { client } = useClientAsyncInit( - process.env.NEXT_PUBLIC_STATSIG_CLIENT_KEY!, - { userID: "a-user" }, - { logLevel: LogLevel.Debug } // Optional - Prints debug logs to the console + return ( + + {children} + ); - - useEffect(() => { - runStatsigAutoCapture(client); - }, [client]); - - return {children}; } ``` @@ -86,9 +85,6 @@ export default function RootLayout({ - - - {/* Page Router */} @@ -105,25 +101,20 @@ import type { AppProps } from "next/app"; import { LogLevel, StatsigProvider, - useClientAsyncInit, } from "@statsig/react-bindings"; -import { runStatsigAutoCapture } from "@statsig/web-analytics"; +import { StatsigAutoCapturePlugin } from '@statsig/web-analytics'; export default function App({ Component, pageProps }: AppProps) { - const { client } = useClientAsyncInit( - process.env.NEXT_PUBLIC_STATSIG_CLIENT_KEY!, - { userID: "a-user" }, - { logLevel: LogLevel.Debug } // Optional - Prints debug logs to the console - ); - - useEffect(() => { - runStatsigAutoCapture(client); - }, [client]); - return ( - - - + + {children} + ; ); } ``` diff --git a/docs/client/javascript-mono/nextjs/_nextJsAdvanced.mdx b/docs/client/javascript-mono/nextjs/_nextJsAdvanced.mdx new file mode 100644 index 000000000..3a65ab9ff --- /dev/null +++ b/docs/client/javascript-mono/nextjs/_nextJsAdvanced.mdx @@ -0,0 +1,3 @@ +## Advanced Setup + +We offer deeper integrations with Next.js that improve the performance and stability of your Statsig integration. If you were just trying to log an event or check your first gate, your project is configured and ready to go with the above basic setup instructions. 
diff --git a/docs/client/javascript-mono/nextjs/_nextJsBootstrap.mdx b/docs/client/javascript-mono/nextjs/_nextJsBootstrap.mdx index 4da377317..1b3c891f5 100644 --- a/docs/client/javascript-mono/nextjs/_nextJsBootstrap.mdx +++ b/docs/client/javascript-mono/nextjs/_nextJsBootstrap.mdx @@ -1,14 +1,13 @@ import Tabs from "@theme/Tabs"; import TabItem from "@theme/TabItem"; - -## Client Bootstrapping (Optional but Recommended) +### Client Bootstrapping (Recommended) Bootstrapping is a method of keeping updated values on your server (in the case of Next, your node webserver), and sending them down with the frontend code when a request is made. This has the advantage of preventing an additional network request before your content is displayed, improving your site's responsiveness. This also enables Statsig usage on server-rendered components. -While the performance gains are appealing, bootstrapping requires some additional setup effort, and is strictly optional. +While the performance gains are appealing, bootstrapping requires some additional setup effort, and you must be mindful of which code you are running server side and client side. -### Install `statsig-node` +#### Install `statsig-node` To generate the required values, we can use the Statsig server SDK (`statsig-node`) on our backend. @@ -65,7 +64,7 @@ STATSIG_SERVER_KEY=secret- # <- Added this line -### Integrate the Backend Logic +#### Integrate the Backend Logic { -### Apply the Bootstrap Values +#### Apply the Bootstrap Values @@ -231,10 +230,9 @@ import { LogLevel, StatsigProvider, StatsigUser, - // useClientAsyncInit, // <- Remove this useClientBootstrapInit, // <- Add this } from "@statsig/react-bindings"; -import { runStatsigAutoCapture } from "@statsig/web-analytics"; +import { StatsigAutoCapturePlugin } from '@statsig/web-analytics'; import React, { useEffect } from "react"; export default function MyStatsig({ @@ -244,25 +242,24 @@ export default function MyStatsig({ bootstrapValues: { data: string; user: StatsigUser; key: string }; children: React.ReactNode; }) { - // Update to using useClientBootstrapInit instead of useClientAsyncInit + // Update to using useClientBootstrapInit instead of auto initializing in the provider const client = useClientBootstrapInit( bootstrapValues.key, bootstrapValues.user, bootstrapValues.data, - { logLevel: LogLevel.Debug } // Optional - Prints debug logs to the console + { + logLevel: LogLevel.Debug, + plugins: [ new StatsigAutoCapturePlugin() ] + } ); - useEffect(() => { - runStatsigAutoCapture(client); - }, [client]); - return <StatsigProvider client={client}>{children}</StatsigProvider>; } ``` If you load the app now, you should see the same values as your previous implementation, this time without any additional network requests. -### Managing StableIDs +#### Managing StableIDs Statsig generates [StableIDs](/client/javascript-sdk/stable-id/) as a pseudo-ID for logged-out experiments and user management. StableIDs are generated client-side, but when bootstrapping, values are generated on the server, creating undesirable side-effects like stableIDs regenerating more than logical for any one device/user. A simple cookie can solve this problem, with an implementation pattern suggested [here](/client/javascript-sdk#keeping-stableid-consistent-across-client--server). 
@@ -306,6 +303,14 @@ We do this conditionally, so that Statsig only runs on pages that call `getStats ```tsx // pages/_app.tsx +import { + LogLevel, + StatsigProvider, + StatsigUser, +} from "@statsig/react-bindings"; +import { runStatsigAutoCapture } from '@statsig/web-analytics'; +import React, { useEffect } from "react"; + export default function App({ Component, pageProps }: AppProps) { const clientRef = useRef(); diff --git a/docs/client/javascript-mono/nextjs/_nextJsProxy.mdx b/docs/client/javascript-mono/nextjs/_nextJsProxy.mdx index 9c41b593a..8fad73e7f 100644 --- a/docs/client/javascript-mono/nextjs/_nextJsProxy.mdx +++ b/docs/client/javascript-mono/nextjs/_nextJsProxy.mdx @@ -2,7 +2,9 @@ import Tabs from "@theme/Tabs"; import TabItem from "@theme/TabItem"; -## Proxying Network Traffic +### Proxying Network Traffic (Optional) + +If you want to harden your integration and protect your feature gating/experimentation values/event logging from being blocked by ad blockers, you can set up a network proxy. It is possible to route all Statsig network traffic through your Next.js server. There are a few reasons why you might want to set this up. @@ -56,7 +58,7 @@ We will need to setup Next.js [`API Routes`](https://nextjs.org/docs/pages/build -### Add Route `/initialize` +#### Add Route `/initialize` @@ -136,7 +138,7 @@ export default async function handler( -### Add Route `/log_event` +#### Add Route `/log_event` ( - - - arrow_forward - - -); - - - -## Walkthrough Guides - -
- - - - - - -
- -## SDKs - -export const SDKCard = ({ language, image, link }) => ( - -
- {language} -
- - - - -
-); - - - -
- - - - - - - -
-
- -
- - - - - - - -
-
-
- -We also provide an HTTP API. Our API is a great choice if an SDK isn't -available for your environment yet, as you can use it in any type of -application: - -- [HTTP API](/http-api) - -## Tools - -
- - - - - - - - - - -
- -## Filing bugs - -You can file bug reports or feature requests via github issues in our Statsig Feedback repository - - # Need more help? - -Statsig strives to provide the best support possible. You can - -- Join our slack support channel for live supports: Join our slack support -- Schedule a live demo: Schedule a demo diff --git a/docs/developer-guides/abtest-in-javascript.md b/docs/developer-guides/abtest-in-javascript.md index ea4a57b4c..fb1ad1035 100644 --- a/docs/developer-guides/abtest-in-javascript.md +++ b/docs/developer-guides/abtest-in-javascript.md @@ -42,7 +42,7 @@ In your `app.js`, write the following JavaScript code: ```javascript document.addEventListener('DOMContentLoaded', function () { - const client = new window.__STATSIG__.StatsigClient( + const client = new window.Statsig.StatsigClient( "your-client-sdk-key", { userID: 'user_unique_id', diff --git a/docs/experiments-plus/differential-impact-detection.md b/docs/experiments-plus/differential-impact-detection.md index 0fadac771..1475f3839 100644 --- a/docs/experiments-plus/differential-impact-detection.md +++ b/docs/experiments-plus/differential-impact-detection.md @@ -11,7 +11,7 @@ Statsig will automatically flag experiments when extreme differential impacts ar ![image](https://github.com/user-attachments/assets/9783ba7a-812b-4fea-97af-4e3344f8345f) ## Enabling this -On Statsig Warehouse Native, configure the "Segments of Interest" you want automatically evaluated for Differential Impact Detection. These will either have to be configured as [Entity Properties](/statsig-warehouse-native/features/entity-properties) or passed in by a Statsig SDK as user properties in the [User Object](/client/concepts/user). +On Statsig Warehouse Native, configure the "Segments of Interest" you want automatically evaluated for Differential Impact Detection. These will either have to be configured as [Entity Properties](/statsig-warehouse-native/features/entity-properties) or passed in by a Statsig SDK as user properties in the [User Object](/server/concepts/user). ![image](https://github.com/user-attachments/assets/c1bc4f51-2c8c-4db7-87f5-7a883f7e0fcf) diff --git a/docs/experiments-plus/experimentation/best-practices.md b/docs/experiments-plus/experimentation/best-practices.md deleted file mode 100644 index 5ddd571c9..000000000 --- a/docs/experiments-plus/experimentation/best-practices.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -title: Best Practices -sidebar_label: Best Practices -slug: /experiments-plus/experimentation/best-practices ---- - -For a good overview on Experiment Design, Monitoring and Readout, see [this article](https://statsig.com/blog/product-experimentation-best-practices). diff --git a/docs/experiments-plus/experimentation/choosing-randomization-unit.md b/docs/experiments-plus/experimentation/choosing-randomization-unit.md deleted file mode 100644 index b253077e7..000000000 --- a/docs/experiments-plus/experimentation/choosing-randomization-unit.md +++ /dev/null @@ -1,88 +0,0 @@ ---- -title: Choosing the Randomization Unit -sidebar_label: Choosing the Randomization Unit -slug: /experiments-plus/experimentation/choosing-randomization-unit ---- - -# Choosing the Unit of Randomization -When designing an experiment, you will pick a **unit of randomization** to decide *who* or *what* is randomly allocated to your control and treatment groups. -The choice of the randomization unit is critical in experiment design as it impacts your user experience as well as the accuracy of your experiment's results. 
-Choosing the right randomization unit will ensure that you deliver a consistent experience to your users and get reliable results from your experiments. - -## Some units are better than others -A key assumption in most A/B tests and experiments is the **stable unit treatment value assumption (SUTVA)**, where the response of a unit of randomization under treatment is -independent of the response of another unit under treatment. The most common unit of randomization is the user identifier your application uses to uniquely identify an individual user. -You may choose to use other types of identifiers based on the kind of experiment you want to run and the constraints around it as outlined below. - -### User Identifiers -**Registered User IDs** are the most commonly used units of randomization. Your application would generally create a registered user ID after the user has registered with your application and created an account. -Available as long as the user stays signed-in, the user ID is the most popular unit of randomization as it ensures a consistent user experience, -across sessions and devices. It doesn't depend on client-side cookies, which may be cleared by the user at any time. - ---- -**Learn More** - -You can supply a user ID as part of the ``StatsigUser`` object when you implement an feature gate or experiment on Statsig. See [Statsig Docs](/client/concepts/user) to learn more. - ---- - -### Other Stable Identifiers -**Device IDs** or **Anonymous User IDs** are used as units of randomization for experiments that involve users who haven't registered or signed into your application. -For example, you may choose to use a device ID or an anonymous user ID when you want to test the impact of different landing page options on user registration. -As the device is a stable vehicle of usage for the user, it offers a stable identifier to observe the user's behavior over their journey with your application. -As a variant of this approach, some applications may choose identify anonymous users by saving first party cookies on the user's device. - -#### Drawbacks - While these identifiers offer a stable tracking mechanism, they do have certain drawbacks. - - The main drawback is that you won't have access to this identifier if the same user engages with your application on a different device. - - A less common drawback arises when multiple users may use the same device. In this case, you will end up including their combined engagement in the metrics you use to evaluate experiments. - - As these identifiers are device-specific, they are available only with client SDKs to help you instrument the client-side of your application. These identifiers are not available with server SDKs. - - ---- -**Learn More** - -- Read more about [User-level vs. Device-level experiments](https://blog.statsig.com/user-level-vs-device-level-experiments-with-statsig-338d48a81778) and how these identifiers are used to report the right experiment results. -- Statsig client SDKs automatically generate **Stable IDs** for your users when you choose to run a device-level experiment. See the [Statsig Guide for Device Experiments](../../guides/first-device-level-experiment) to learn more about how to use stable IDs for experiments involving anonymous users. - ---- - -### Other Identifiers -**Session IDs** are used in select use cases where the metric you're trying to improve is meaningful within a session *and* when you can safely assume that each session is independent. 
-An example where you may choose to use session IDs when running experiments to optimize conversion rate for guest checkouts that are tracked on a per session basis. - Another use case for sessions IDs is when you need an identifier for use with server SDKs but want to run experiments for users who haven't yet unregistered or have signed-out. - -#### Drawbacks -As users frequently remember their experience from an earlier session, assuming user sessions to be independent can be a significant assumption to make for most experiments. -For example, if a user sees a product in one session as part of the control group and returns to complete the purchase in a different session, there's no guarantee they'll be placed in the control group again. -If this time they're placed in the treatment group, you may overestimate the positive impact of the treatment. - - -## Using multiple identifiers -When you're running multiple experiments, you may choose to use a different identifier for each experiment depending on the context. -Consider a scenario where you're running two experiments as shown below. One experiment (A) tracks the impact of a new mobile registration flow on the number of user registrations. -Another experiment (B) tracks the impact of a new upgrade flow for converting your registered users to subscribed users. -Ideally, you also want to track the how your new mobile registration flow impacts downstream conversion to subscribed users. - -In this scenario, experiment A will require an identifier that you can use over the entire user journey, say a stable device ID. -For experiment B, you may prefer to use the user ID that forms the basis for most of your existing business metrics such as the rate of conversion to your subscription products. - - -![Device Level Experiments](https://user-images.githubusercontent.com/74588208/141707011-95c0c859-c60f-45f8-a6da-d31664f05e06.png) - - - - - - - - - - - - - - - - - diff --git a/docs/experiments-plus/experimentation/common-terms.md b/docs/experiments-plus/experimentation/common-terms.md deleted file mode 100644 index 6315c5a69..000000000 --- a/docs/experiments-plus/experimentation/common-terms.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Common Terms -sidebar_label: Common Terms -slug: /experiments-plus/experimentation/common-terms ---- - -- A **Control Variable** is an experimental variable that is thought to influence the key metrics of interest. In simple A/B experiments, a single control variable is assigned two values. It is more common to assign multiple values such as A,B,C, or D to a single control variable. Multivariate experiments evaluate multiple control variables that allow experimenters to discover a global optimum when multiple variables interact. -- A **Variant** is a product or feature experience being tested, often by assigning values to control variables. In a simple A/B experiments, A and B are two variants, usually called Control and Treatment. -- A **Randomization Unit** is the most granular unit that can participate in an experiment. Each eligible unit is randomly assigned to a variant, allowing causality to be determined with high probability. It is very common to use users as a randomization unit and Statsig highly recommends using users for running controlled experiments. -- **Statistical Significance** can be assessed using multiple approaches. 
Two of these approaches are using the p-value and the confidence interval: - - The **p-value** measures the probability of the metric lift you observe (or a more extreme lift) assuming that the variant you’re testing has no effect. The standard is to use a p-value less than 0.05 to identify variants that have a statistically significant effect. A p-value less than 0.05 implies that there’s less than 5% chance of seeing the observed metric lift (or a more extreme metric lift) if the variant had no effect. In practice, a p-value that's lower than your pre-defined threshold is treated as evidence for there being a true effect. - - A **confidence interval** examines whether the metric difference between the variant and control overlaps with zero. A 95% confidence interval is the range that covers the true difference 95% of the time. It is usually centered around the observed delta between the variant and control with an extension of 1.96 standard errors on each side. - diff --git a/docs/experiments-plus/experimentation/scenarios.md b/docs/experiments-plus/experimentation/scenarios.md deleted file mode 100644 index 59e3ffa22..000000000 --- a/docs/experiments-plus/experimentation/scenarios.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -title: Scenarios -sidebar_label: Scenarios -slug: /experiments-plus/experimentation/scenarios ---- - -Statsig sees two broad scenarios for experimentation. - -## 1. Experiment to grow faster - -Experiments can help climb a hill to a local optimum based on your current business strategy and product portfolio. For example, -- Experiments can optimize for the ideal user experience for a given functionality -- Experiments can help iterate on the functionality, algorithms, and infrastructure that matter the most to your users and your business -- Experiments can identify proposals with the highest return for effort required - -Identifying the metrics that both reflect your strategic direction and are sensitive to the changes you make ensures that you don’t waste resources. Identifying guardrail metrics that you want to hold regardless of the changes you make compels explicit tradeoffs and prevents you from regressing on the fundamental needs of your business. - -## 2. Experiment to discover faster - -Experiments can help develop a portfolio of ideas that may point to a larger hill or opportunity. Navigating these bigger jumps may require: -- Experiments that run for a longer duration to mitigate any novelty effects and to ensure that you have given the new product version enough time to build adoption -- Experiments that ramp slowly and progressively to more users to limit risk and to build more statistical power before launch -- Many different experiments that test several related hypotheses that form a new business strategy - diff --git a/docs/experiments-plus/experimentation/why-experiment.md b/docs/experiments-plus/experimentation/why-experiment.md deleted file mode 100644 index ebe953357..000000000 --- a/docs/experiments-plus/experimentation/why-experiment.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Why Experiment -sidebar_label: Why Experiment -slug: /experiments-plus/experimentation/why-experiment ---- - -Controlled experiments are the best scientific way to establish causality between your product features and customer impact. Establishing such causality allows you to only ship features that improve customer experience. This can make experiments the driving force behind your **pace of innovation**. 
- -As you grow your pace of innovation, experiments also enable you to also measure the success of the features you ship and uncover unexpected side effects with every code change. This allows you to iterate faster in the short term, establish key business drivers, and **make better, evidence-driven business decisions every day**. - -In comparison, relationships observed in historical metrics cannot be considered structural or causal because multiple uncaptured external and internal factors influence customer behavior. Historical metrics establish correlation, not causation. diff --git a/docs/experiments-plus/introduction.md b/docs/experiments-plus/introduction.md index 820e69e4c..020caf5aa 100644 --- a/docs/experiments-plus/introduction.md +++ b/docs/experiments-plus/introduction.md @@ -80,7 +80,7 @@ Advantages: - Persistent across sessions and devices. - Independent of client-side cookies, which can be cleared by users. -For more details on using User IDs with Statsig, see [Statsig Docs on User Identifiers](/client/concepts/user). +For more details on using User IDs with Statsig, see [Statsig Docs on User Identifiers](/server/concepts/user). ### 2. Device Identifiers diff --git a/docs/faq.mdx b/docs/faq.mdx index c94a3fced..f267e26a1 100644 --- a/docs/faq.mdx +++ b/docs/faq.mdx @@ -8,12 +8,12 @@ sidebar_label: FAQs ### How does bucketing within the Statsig SDKs work? Bucketing in Statsig is deterministic. Given the same user object and the same state of the experiment or feature gate, Statsig always returns the same result, even when evaluated on different platforms (client or server). Here's how it works: -1. **Salt Creation**: Each experiment or feature gate generates a unique salt. +1. **Salt Creation**: Each experiment or feature gate rule generates a unique salt. 2. **Hashing**: The user identifier (e.g., userId, organizationId) is passed through a SHA256 hashing function, combined with the salt, which produces a large integer. 3. **Bucket Assignment**: The large integer is then subjected to a modulus operation with 10000 (or 1000 for layers), assigning the user to a bucket. 4. **Bucket Determination**: The result defines the specific bucket out of 10000 (or 1000 for layers) where the user is placed. -This process ensures a randomized but deterministic bucketing of users across different experiments or feature gates. The unique salt ensures that the same user can be assigned to different buckets in different experiments. +This process ensures a randomized but deterministic bucketing of users across different experiments or feature gates. The unique salt per-experiment or feature gate rule ensures that the same user can be assigned to different buckets in different experiments. This also means that if you rollout a feature gate rule to 50% - then back to 0% - then back to 50%, the same 50% of users will be re-exposed, **so long as you reuse the same rule** - and not create a new one. See [here](/faq/#when-i-change-the-rollout-percentage-of-a-rule-on-a-feature-gate-will-users-who-passed-continue-to-pass). A lot of times people assume that we keep track of a list of all ids and what group they were assigned to for experiments, or which IDs passed a certain feature gate. While our data pipelines keep track of which users were exposed to which experiment variant in order to generate experiment results, we do not cache previous evaluations and maintain distributed evaluation state across client and server sdks. 
That model doesn't scale - we've even talked to customers who were using an implementation like that in the past, and were paying more for a Redis instance to maintain that state than they ended up paying to use Statsig instead. @@ -26,6 +26,11 @@ No. Once an experiment is started, you cannot change the layer. This restriction --- +### Can you change an experiment or gate name after creating it? +No. We've intentionally decided to not allow any Statsig config (Feature Gate, Experiment, Layer, etc.) to be renamed - as renaming a config that is already integrated in your code can have serious undesirable consequences. The exception to this is Metrics, which have display names not used in code. + +--- + ### Why should I define parameters for my experiments instead of just getting the group? Defining parameters for experiments provides flexibility and speed in iteration. Many companies, such as Facebook, Uber, and Airbnb, follow this approach in their experimentation platforms because it allows: @@ -60,7 +65,30 @@ For details on flushing, check the [Node.js Server SDK documentation](/server/no --- ### I don't see my client or server language listed. Can I still use Statsig? -If none of our current SDKs meet your needs, please let us know via our [Slack community](https://statsig.com/slack)! + +If none of our current SDKs meet your needs, please let us know via our [Slack community](https://statsig.com/slack)! + +--- + +### How do I get all exposures for a user? + +If you're interested in historical exposures, the console's [users tab](https://console.statsig.com/users) may serve your needs. + +If you need all hypothetical assignments, you can consider using the `getClientInitializeResponse` server sdk method. Statsig's SDKs should ideally be invoked at the time you're serving an experiment treatment, so that an exposure can be logged. If that's not possible in your case (perhaps you need to pass assignment information to other applications, or to use assignment information as cache-keys for the CDN + edge), this approach could work. + +#### Example of capturing all assignments in Node + +Note, this method is designed to [bootstrap](/client/concepts/initialize#bootstrapping-overview) client SDKs, and as such, will hash the experiment and feature keys returned in the payload, obfuscating their names for security. You can provide an optional `hash` parameter, allowing you to disable hashing and capture all values in plain text: [Node](https://github.com/statsig-io/node-js-server-sdk/blob/ea116142221c1aa83b46eff8b5f2292c8f8e2d54/src/StatsigServer.ts#L597), [Python](https://github.com/statsig-io/node-js-server-sdk/blob/ea116142221c1aa83b46eff8b5f2292c8f8e2d54/src/StatsigServer.ts#L597), [Java](https://github.com/statsig-io/java-server-sdk/blob/7443c357c78616142de9257af9e4c55c877ca700/src/main/kotlin/com/statsig/sdk/StatsigServer.kt#L83), [Go](https://github.com/statsig-io/go-sdk/blob/3d7edcbe468efb0fc7a04b0d10202243403dce5f/client.go#L282). + +```node +const assignments = statsig.getClientInitializeResponse(userObj, "client-key", {hash: "none"}); +``` + +--- + +### What happens if I check a config with a non-existent name? + +You'll receive default values - false for feature flags, and the in-code defaults for experiments or layer parameters. You should expect to see "Unrecognized" evaluation reasons - see our [Debugging Section](/sdk/debugging#evaluation-reason). This behavior will be the same on a non-existent config vs. 
one that is deleted, one that is archived, or one that your current SDK instance can't see because of [target apps](/sdk-keys/target-apps/). --- @@ -68,7 +96,11 @@ ### When I change the rollout percentage of a rule on a feature gate, will users who passed continue to pass? -Yes. If you increase the rollout percentage (e.g., from 10% to 20%), the original 10% will continue to pass, while an additional 10% will start passing. Reducing the percentage will restore the original 10%. To reshuffle users, you'll need to "resalt" the gate. +Yes. If you increase the rollout percentage (e.g., from 10% to 20%), the original 10% will continue to pass, while an additional 10% will start passing. Reducing the percentage will restore the original 10%. The same behavior exists if you reduce then re-increase the pass percentage. To reshuffle users, you'll need to "resalt" the gate. + +This is only true within the same rule on a gate: if you create a new rule with the same pass percentage as another one, it will pass a different set of users. + +Note - today, increasing the allocation percentage of an experiment is not guaranteed to behave the same as the above - if you'd like to have dependably deterministic allocations, we recommend using targeting gates. --- @@ -132,8 +164,12 @@ Enterprise plans can support multiple projects. If you might be interest in this ## Platform Usability ### When should I create a new project? -Projects have distinct boundaries. Create a new project when you're managing a separate product with unique user IDs and metrics. +Projects have distinct boundaries. If you're using the same userIDs and metrics across surfaces, apps or environments, put them in the same project. Create a new project when you're managing a separate product with unique user IDs and metrics. + +For example, if you have a marketing website (anonymous users) and a product (signed-in users), you may want to separate them. However, if you want to track success across both, you should manage them in the same project. (e.g. from user signup on the marketing website to user engagement within the product) -For example, if you have a marketing website (anonymous users) and a product (signed-in users), you may want to separate them. However, if you want to track success across both, you should manage them in the same project. +Some reasons NOT to create a new project: +- to segregate by environment. Statsig has rich support for environments - you can even customize these. You can turn features or experiments on and off by environment. +- to segregate by platform. If you have an iOS app and Web app - it's helpful to have both collect data in the same project and capture metadata on platform. This lets you look at data by platform, but also understand if you've increased the overall metric - or just cannibalized users (pushed the same users from one platform to the other). --- diff --git a/docs/feature-flags/permanent-and-stale-gates.md b/docs/feature-flags/permanent-and-stale-gates.md index f08fa66c1..227f9b210 100644 --- a/docs/feature-flags/permanent-and-stale-gates.md +++ b/docs/feature-flags/permanent-and-stale-gates.md @@ -46,6 +46,7 @@ In your feature gates catalog, you'll see different **Types** displayed in the S - **STALE_PROBABLY_DEAD_CHECK** There have been no checks in the last 30 days. - **STALE_PROBABLY_LAUNCHED** The Gate is marked as launched or has an everyone rule passing 100% (rollout rate of 100%). 
- **STALE_PROBABLY_UNLAUNCHED** The Gate is marked as disabled or has an everyone rule passing 0% (rollout rate of 0%). + - **STALE_PROBABLY_FORGOTTEN** This gate appears to have been only partially launched for some time. You might want to launch/disable it, or make it permanent if you need to keep it around. - **STALE_NO_RULES** The Gate has no set rules. ## Nudges to clean up Stale gates diff --git a/docs/guides/first-dynamic-config.mdx b/docs/guides/first-dynamic-config.mdx index 18a53fb54..45f16fbbb 100644 --- a/docs/guides/first-dynamic-config.mdx +++ b/docs/guides/first-dynamic-config.mdx @@ -91,7 +91,7 @@ Now let's use this Dynamic Config to create a different landing page experience After adding the SDK to the webpage via the [jsdelivr cdn](https://www.jsdelivr.com/package/npm/@statsig/js-client), we initialize the SDK: ```js -const client = new window.__STATSIG__.StatsigClient(", {}); +const client = new window.Statsig.StatsigClient("", {}); ``` Now, let's fetch our config and construct the banner: diff --git a/docs/guides/first-feature.mdx b/docs/guides/first-feature.mdx index 76f5b4ef9..eeb078e37 100644 --- a/docs/guides/first-feature.mdx +++ b/docs/guides/first-feature.mdx @@ -103,7 +103,7 @@ document.getElementsByTagName("head")[0].appendChild(scrpt); Copy and paste the following code in your console, being sure to replace `YOUR_SDK_KEY` with the Client API Key you copied in Step 4: ```js -const client = new window.__STATSIG__.StatsigClient( +const client = new window.Statsig.StatsigClient( "YOUR_SDK_KEY", {}, ); diff --git a/docs/guides/logging-events.mdx b/docs/guides/logging-events.mdx index 595f158bf..34ba5663a 100644 --- a/docs/guides/logging-events.mdx +++ b/docs/guides/logging-events.mdx @@ -18,7 +18,7 @@ For general guidance on event logging and core concepts, read on, or jump to the ## Identifying Users and the "StatsigUser" object {#identifying-users} Many analytics platforms have a concept of "identifying" a user. In Statsig, this is the StatsigUser object that is set a initialization time in client SDKs, or with each event in Server SDKs. -The [`StatsigUser`](/client/concepts/user) is a set of properties that describe the user. It roughly has the same json definition across all SDKs and integrations: +The [`StatsigUser`](/server/concepts/user) is a set of properties that describe the user. It roughly has the same json definition across all SDKs and integrations: ```json { diff --git a/docs/guides/private-attributes.mdx b/docs/guides/private-attributes.mdx index 0d3f1938c..79dbe92c8 100644 --- a/docs/guides/private-attributes.mdx +++ b/docs/guides/private-attributes.mdx @@ -43,7 +43,7 @@ Evaluation happening locally to the server on `privateAttributes` in the `statsi Don't just take our word for it - all of our SDKs are open source and [available on github](https://github.com/statsig-io). Feel free to dive in to the implementation of `privateAttributes` in the SDK you are using, or reach out to us on [slack](https://www.statsig.com/slack) and we can point you in the right direction. :::info -To ensure that user PII is never transmitted over the network back to Statsig during Client SDK initialization, you should use [Client Boostrapping](/client/concepts/bootstrapping) and provide the `privateAttributes` as part of the user object on the server to the `getClientInitializeResponse()` call. 
This will generate all of the assignments locally on your server, and these assignments can then be passed as `initializeValues` to the client SDK, negating the need to send any user attributes from the client device to Statsig. +To ensure that user PII is never transmitted over the network back to Statsig during Client SDK initialization, you should use [Client Boostrapping](/client/concepts/initialize#bootstrapping-overview) and provide the `privateAttributes` as part of the user object on the server to the `getClientInitializeResponse()` call. This will generate all of the assignments locally on your server, and these assignments can then be passed as `initializeValues` to the client SDK, negating the need to send any user attributes from the client device to Statsig. ::: ## Event Logging diff --git a/docs/guides/shopify-ab-test.mdx b/docs/guides/shopify-ab-test.mdx index 6e0ecbdc0..faafba3be 100644 --- a/docs/guides/shopify-ab-test.mdx +++ b/docs/guides/shopify-ab-test.mdx @@ -49,10 +49,16 @@ Below is boilerplate custom pixel code that provides a function to send events b ```js /** * Util function for tracking events back to statsig +* Find the user's stableID checking js-sdk >= v2 then fallback to v1 storage key */ +const stableID = (function () { + for (var key in localStorage) { + if (key.includes('statsig.stable_id.')) return localStorage.getItem(key).replace(/"/gi, ''); + } +})() || localStorage.getItem('STATSIG_LOCAL_STORAGE_STABLE_ID'); const statsigEvent = async (eventKey, eventValue = null, metadata = {}, userObject = {}) => { Object.assign(userObject, { - customIDs: {stableID: localStorage.getItem('STATSIG_LOCAL_STORAGE_STABLE_ID')} // attach stableID automatically + customIDs: {stableID: stableID} // attach stableID automatically }); await fetch('https://events.statsigapi.net/v1/log_event', { method: 'POST', @@ -94,4 +100,4 @@ Whether you're using Shopify's [Hydrogen app](https://shopify.dev/docs/storefron #### Integrating data sources for experiment metrics Along with the measuring simple click stream and point-of-sale behavior as [outlined here](http://localhost:3004/guides/first-shopify-abtest#configure-event-tracking-and-metrics), commerce businesses performing deeper experimentation often want to integrate offline data systems and measure experiments using existing metrics that the broader business uses. -Commonly, the Data Warehouse is the source of truth for user purchase data and other categories of offline data. This affords customers the ability to define more [bespoke metrics](/statsig-warehouse-native/configuration/metrics#metric-types) using filtering, aggregations and incorporating other datasets in the warehouse for segmenting experiment results. \ No newline at end of file +Commonly, the Data Warehouse is the source of truth for user purchase data and other categories of offline data. This affords customers the ability to define more [bespoke metrics](/statsig-warehouse-native/configuration/metrics#metric-types) using filtering, aggregations and incorporating other datasets in the warehouse for segmenting experiment results. diff --git a/docs/guides/sidecar-experiments/creating-experiments.mdx b/docs/guides/sidecar-experiments/creating-experiments.mdx index 5fdd6fd51..729c49e43 100644 --- a/docs/guides/sidecar-experiments/creating-experiments.mdx +++ b/docs/guides/sidecar-experiments/creating-experiments.mdx @@ -13,7 +13,7 @@ This guide assumes you have followed the previous steps of installing side-car, Navigate to the web page you want to experiment on. 
-![image](https://github.com/statsig-io/.github/assets/74588208/3a783170-1c69-4720-8d4d-8f4005999c7c) +![image](/img/sidecarfull.png) ### Step 2: New experiment @@ -31,7 +31,7 @@ _You can configure URL targeting using the following methods:_ * Exact Match - The page URL must match the exact value specified here. * Regex - Regular expressions, for example `(http|https):\/\/www.statsig.com\/pricing` matches pages `http://www.statsig.com/pricing` or `https://www.statsig.com/pricing`, and will activate this experiment on those pages. -![image](/img/sidecaruls.png) +![image](/img/sidecarurls.png) ### Step 4: Add actions @@ -56,11 +56,11 @@ Click on the yellow *Target element path* text-box. This will activate an eleme Now as you move your mouse over your web page you'll see a red selection rectangle. Choose the element you want by clicking on it. In this example, we're choosing the main Headline. -![image](https://github.com/statsig-io/.github/assets/74588208/bffdf35e-bff5-4a62-93ae-15cad3dd8d05) +![image](/img/sidecarselect.png) Sidecar will now reflect the path of the element that was selected. -![image](https://github.com/statsig-io/.github/assets/74588208/be0525cb-0da8-45bf-b4ab-e7d2bf0d23ed) +![image](/img/sidecarpath.png) ### Step 6: Update content diff --git a/docs/guides/sidecar-experiments/measuring-experiments.mdx b/docs/guides/sidecar-experiments/measuring-experiments.mdx index 8f00a978d..3d40c7621 100644 --- a/docs/guides/sidecar-experiments/measuring-experiments.mdx +++ b/docs/guides/sidecar-experiments/measuring-experiments.mdx @@ -66,8 +66,8 @@ Sidecar comes loaded with an event collection tool that will autocapture various |metadata.page_url|Current URL with path and parameters|https://www.FULL-URL.com/?utm=FALL_2024| |metadata.transfer_bytes|Total number of bytes transferred in document body as implemented by [browser performanceTiming API](https://developer.mozilla.org/en-US/docs/Web/Performance/Navigation_and_resource_timings)|48360| -#### Disabling Autocapture -To disable autocapture, simply append the following query string parameter to the Sidecar script URL: `&autostart=0`. +#### Disabling All Logging +To disable all logging to statsig (both autocapture events and logging who has seen your experiments) append the following query string parameter to the Sidecar script URL: `&autostart=0`. This may be useful if you're dealing with GDPR compliance, and you can later re-enable events with `client.updateRuntimeOptions({disableLogging: false})` ## Auto Capturing Data Attributes diff --git a/docs/guides/sidecar-experiments/publishing-experiments.mdx b/docs/guides/sidecar-experiments/publishing-experiments.mdx index 73eed9397..f4a9d3af8 100644 --- a/docs/guides/sidecar-experiments/publishing-experiments.mdx +++ b/docs/guides/sidecar-experiments/publishing-experiments.mdx @@ -1,5 +1,5 @@ --- -sidebar_label: Publishing Experiments +sidebar_label: Launching Experiments title: Taking your experiments to production --- @@ -32,7 +32,12 @@ The code itself would look like this with your API Key switched. ### Step 2: Publish the experiments -Once you have installed sidecar into your code, you are able to start the experimentation configuration. Once you are satisfied with the experiment configuration, go ahead and hit the green *Publish* button. This will push all the experiment changes to Statsig. 
If you want to make sure these changes are published, you can on the `...` menu and choose *Go to Experiment console* +Once you are satisfied with the experiment configuration, go ahead and hit the blue *Publish* button. This is essentially a way to store all of your configurations in Statsig. If you want to make sure these changes have been stored successfully, you can on the `...` menu and choose *Go to Experiment Console*. + +Publishing changes will not start any experiments, it will do the following: +* Sync any unsaved changes to Statsig (making them accessible in Console where you can configure Metrics and other targeting conditions if applicable). +* Include any configured tests in the Sidecar script installed on your website. +* Allow you to QA experiments on your site while they're in an Unstarted state. ![statsig banner](/img/sidecarconsole.png) diff --git a/docs/guides/sidecar-experiments/setup.mdx b/docs/guides/sidecar-experiments/setup.mdx index 3fc3c8eca..1dc90641d 100644 --- a/docs/guides/sidecar-experiments/setup.mdx +++ b/docs/guides/sidecar-experiments/setup.mdx @@ -22,13 +22,13 @@ Click on the Extensions toolbar button and select "Statsig Sidecar" to activate You will now see an Experiment Config UI like this: -![image](https://github.com/statsig-io/.github/assets/74588208/0e5ef15f-b601-415c-a41b-95f0dade434b) +![statsig banner](/img/sidecarempty.png) ### Step 3: Update settings You will need to update API keys in the Settings Dialog for the extension to work. You can invoke the Settings dialog from the "Settings" link on the top header. -![image](https://github.com/statsig-io/.github/assets/74588208/43e939bf-e34f-404a-8ca3-cd5c045221f0) +![statsig banner](/img/sidecarsettings.png) You can retrieve these keys from your Statsig project. In order to get this, login to Statsig Console here: https://console.statsig.com and navigate to the Settings page (https://console.statsig.com/settings) diff --git a/docs/guides/uptime.mdx b/docs/guides/uptime.mdx index dbda5bdb3..d76645a3a 100644 --- a/docs/guides/uptime.mdx +++ b/docs/guides/uptime.mdx @@ -19,7 +19,7 @@ Collected here are a set of best practices that help maximize your uptime - acro 6. Use **change management** on Statsig in production. Changes should be approved by a reviewer. For critical areas, you can enforce an Allowed Reviewer group that has enough context to decide. Statsig Feature Gates allow you to easily audit and roll back changes. -7. **Caching on client SDKs**: Initializing Statsig client SDKs requires them to connect to Statsig and download config. Client SDKs can cache and reuse config (for the same user) if they are offline. You can also choose to bootstrap your client from your own server (and remove the round trip to Statsig) by using [client SDK bootstrapping](/client/concepts/bootstrapping). +7. **Caching on client SDKs**: Initializing Statsig client SDKs requires them to connect to Statsig and download config. Client SDKs can cache and reuse config (for the same user) if they are offline. You can also choose to bootstrap your client from your own server (and remove the round trip to Statsig) by using [client SDK bootstrapping](/client/concepts/initialize#bootstrapping-overview). 8. **Caching on server SDKs**: Initializing Statsig server SDKs requires them to connect to Statsig and download config. If connectivity to Statsig fails, initialization fails (falling back to default values). Remove this dependency when connectivity fails by providing this config locally. 
Read more about [dataAdapter](/server/concepts/data_store#dataadapter-or-datastore) diff --git a/docs/server/concepts/monitoring.mdx b/docs/infrastructure/monitoring.mdx similarity index 65% rename from docs/server/concepts/monitoring.mdx rename to docs/infrastructure/monitoring.mdx index a14fe018e..e96c8f744 100644 --- a/docs/server/concepts/monitoring.mdx +++ b/docs/infrastructure/monitoring.mdx @@ -1,29 +1,12 @@ --- title: Monitoring the SDK -sidebar_label: SDK Monitoring -slug: /server/concepts/sdk_monitoring +sidebar_label: SDK Monitoring Integrations +slug: /sdk_monitoring --- -Statsig SDKs provide two main ways of monitoring the SDK's behavior and performance: -1. **Logs**: The SDK logs important events and errors to help you understand event-by-event how Statsig is behaving in your application. -2. **Metrics**: The SDK emits metrics to help you understand the aggregate performance of the SDK its impact on your application. - -**Supported SDKs**: Most SDKs have some level of logging, but our latest release of structured logging and metrics, is currently only [available by the Python SDK](/server/pythonSDK/#sdk-monitoring-). - -## Logging Levels and Expected Information -The Statsig SDK uses multiple logging levels to communicate various information types. Here’s what each logging level represents and what kind of details you can expect: - -- Debug: Detailed logs useful for new users onboarding with the SDK and for diagnosing potential issues, such as: - - Messages when a feature gate does not exist - - Tracking process flows within the SDK -- Info: General information about the SDK’s operation, typically relevant to regular usage, such as: - - Messages regarding SDK initialization, including source and version information - - Notifications when the configuration store is populated -- Warning: Logs about unusual events that may impact functionality but are automatically managed and recovered, such as: - - Messages on non critical errors caught by the SDK - - Notifications about reconnection attempts to gRPC services -- Error: Critical logs about issues that severely impact the SDK’s functionality, such as: - - Messages about initialization failures or timeouts - - Notifications indicating gRPC fallback, suggesting gRPC is unavailable or incorrect configuration + +:::note +This latest release of structured logging and metrics, is currently only [available by the Python SDK](/server/pythonSDK/#sdk-monitoring-). Want it in another? Reach out in our [Support Slack](https://statsig.com/slack). +::: ## SDK Metrics Some Statsig SDKs provide built-in metrics to help you monitor its performance and impact on your application. The specific implementation may vary by programming language, refer to the documentation for the language-specific SDK interface. 
diff --git a/docs/infrastructure/statsig_domains.md b/docs/infrastructure/statsig_domains.md index 1a566840d..053d181ec 100644 --- a/docs/infrastructure/statsig_domains.md +++ b/docs/infrastructure/statsig_domains.md @@ -23,6 +23,7 @@ These domains are used by our SDKs to communicate with our backend for feature g - `featureassets.org` - `assetsconfigcdn.org` - `prodregistryv2.org` +- `cloudflare-dns.com` ## Statsig User Segment Storage API diff --git a/docs/integrations/data-connectors/google-analytics.mdx b/docs/integrations/data-connectors/google-analytics.mdx index 7997365e2..93c528b89 100644 --- a/docs/integrations/data-connectors/google-analytics.mdx +++ b/docs/integrations/data-connectors/google-analytics.mdx @@ -24,6 +24,19 @@ To send events collected by Statsig's SDKs to GA4, you must configure a Data Str Provide your API Secret and measurement ID from the previous step and click *confirm*: ![](https://user-images.githubusercontent.com/125311112/263366565-c9f97636-8bd3-428f-b2ee-e542776b50ab.png) +4. Verify that you are receiving events now by checking the Realtime overview report for the event with name `statsig`. Account for a couple days of delay for events to be available in other reports. + +5. You can also add the following custom event dimensions. Other custom IDs and custom user attributes are available as user dimensions +
+  - `config` - Name of the experiment/gate/dynamic config
+  - `group` - Name of the exposed group (e.g. Control)
+  - `value` - Value for custom events
+  - `statsig_session_id` - Session ID
+  - `category` - Type of exposure or name of the custom event (e.g. `statsig_gate_exposure`)
+  - `unit_id` - Value of the unit ID (e.g. '123')
+  - `unit_id_type` - Type of the unit ID (e.g. 'stableID')
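For orientation, here is an illustrative sketch of how a forwarded `statsig` event could look once it lands in GA4 (shown in Measurement Protocol shape). Every value below is made up for illustration; only the parameter keys mirror the dimension list above:

```json
{
  "client_id": "555.1234567890",
  "events": [
    {
      "name": "statsig",
      "params": {
        "category": "statsig_gate_exposure",
        "config": "new_checkout_flow",
        "group": "Control",
        "unit_id": "123",
        "unit_id_type": "stableID",
        "statsig_session_id": "8d2f1c3a-0b7e-4f21-9c55-1a2b3c4d5e6f"
      }
    }
  ]
}
```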
+ ## Filtering Events Once the outgoing integration has been enabled, you can optionally configure event filtering to control whch events are populating the GA4 Data Stream: diff --git a/docs/integrations/data-connectors/segment.mdx b/docs/integrations/data-connectors/segment.mdx index 8151efc49..fae275c26 100644 --- a/docs/integrations/data-connectors/segment.mdx +++ b/docs/integrations/data-connectors/segment.mdx @@ -93,7 +93,7 @@ If you are unable to connect to Segment via OAuth, you can still manually connec ![](https://user-images.githubusercontent.com/1315028/150830169-17564060-816b-4c5c-ade9-10bf6274265a.png) ## Working with Users -Statsig will join incoming user identifiers to whichever [unit of randomization](/experiments-plus/experimentation/choosing-randomization-unit) you choose. This allows you to be flexible with your experimentation and enables testing on known (userID) and unknown (anonymousID) traffic as well as any custom identifiers your team may have (deviceID, companyID, vehicleID, etc). +Statsig will join incoming user identifiers to whichever [unit of randomization](/experiments-plus#choosing-the-right-randomization-unit) you choose. This allows you to be flexible with your experimentation and enables testing on known (userID) and unknown (anonymousID) traffic as well as any custom identifiers your team may have (deviceID, companyID, vehicleID, etc). ### User IDs and Custom IDs @@ -157,7 +157,7 @@ Refer to the following diagram to help orient you to mapping `anonymousIds` in S ![](https://user-images.githubusercontent.com/125311112/283278011-3c22e6e8-ab36-4844-aee2-b6630ecda4de.png) -1. Initialize the Statsig SDK with your [Statsig User](/client/concepts/user) which will contain an optional `userID` value and a `customID` that you've created in the Statsig UI - `segmentAnonymousId` in this example. +1. Initialize the Statsig SDK with your [Statsig User](/server/concepts/user) which will contain an optional `userID` value and a `customID` that you've created in the Statsig UI - `segmentAnonymousId` in this example. 2. As you orchestrate features/experiments, Statsig will associate this user to a variant using the unit of randomization chosen. For anonymous users, we'll use `segmentAnonymousId`. 3. Your existing Segment implementation tracks user traffic and associates anonymous users to the top-level field `anonymousId`. 4. This `anonymousId` is mapped in Statsig (to `segmentAnonymousId`), properly associating the identifier used in experiment exposures to the same identifier used to track user actions. @@ -172,7 +172,7 @@ By using [Segment Engage Audiences](https://segment.com/docs/engage/audiences/) Once these steps have been completed, your Segment Audience will be synced, and you will be able to target those users for features you develop or experiments you run. ### Custom Properties -Passing [custom properties to a Statsig User](/client/concepts/user#user-attributes) (see `custom` field) enables targeting on specific cohorts of your users in feature gates and experimentation. +Passing [custom properties to a Statsig User](/server/concepts/user#user-attributes) (see `custom` field) enables targeting on specific cohorts of your users in feature gates and experimentation. Providing custom user properties also allows you to drill down your results to specific populations (ex: android/iOS, isVIP, etc) when [reading pulse results](/pulse/custom-queries#running-a-custom-query). 
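As a quick sketch of what this looks like on the user object itself (the `isVIP` and `platform` properties below are illustrative examples, not required fields):

```js
// Example StatsigUser with custom properties used for targeting and for
// drilling down Pulse results. Property names are up to you.
const user = {
  userID: "user-123",
  customIDs: { segmentAnonymousId: "anon-9f2c" },
  custom: {
    isVIP: true,
    platform: "android",
  },
};
```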
If you're using custom fields to [target users](/feature-flags/conditions#custom) in your feature gates, you can provide these properties through Segment using the key `statsigCustom` as part of the Segment `properties` diff --git a/docs/integrations/data-imports/azure_upload-deprecated.mdx b/docs/integrations/data-imports/azure_upload-deprecated.mdx index 4022d9eb2..e4c4d6c7b 100644 --- a/docs/integrations/data-imports/azure_upload-deprecated.mdx +++ b/docs/integrations/data-imports/azure_upload-deprecated.mdx @@ -40,7 +40,7 @@ Please make sure your data conforms to the following schemas. | timeuuid | A unique UUID or timeUUID used for deduping. If omitted, will be generated but will not be effective for deduping | UUID format | ``` -Please refer to docs for the [Statsig User Object](/client/concepts/user#user-attributes) for available fields. An example would look like: +Please refer to docs for the [Statsig User Object](/server/concepts/user#user-attributes) for available fields. An example would look like: ``` { diff --git a/docs/integrations/data-imports/redshift-deprecated.mdx b/docs/integrations/data-imports/redshift-deprecated.mdx index f6add15e8..be86e8ce9 100644 --- a/docs/integrations/data-imports/redshift-deprecated.mdx +++ b/docs/integrations/data-imports/redshift-deprecated.mdx @@ -57,7 +57,7 @@ Please make sure your data conforms to the following schemas. | timeuuid | A unique UUID or timeUUID used for deduping. If omitted, will be generated but will not be effective for deduping | UUID format | ``` -Please refer to docs for the [Statsig User Object](/client/concepts/user#user-attributes) for available fields. An example would look like: +Please refer to docs for the [Statsig User Object](/server/concepts/user#user-attributes) for available fields. An example would look like: ``` { diff --git a/docs/integrations/openai.md b/docs/integrations/openai.md index b73481490..2683357de 100644 --- a/docs/integrations/openai.md +++ b/docs/integrations/openai.md @@ -31,7 +31,7 @@ import time openai.api_key = "your_openai_key" # Replace with your own key statsig.initialize("your_statsig_secret") # Replace with your Statsig secret -user = StatsigUser("user-id") #This is a placeholder ID - in a normal experiment Statsig recommends using a user's actual unique ID for consistency in targeting. See https://docs.statsig.com/client/concepts/user +user = StatsigUser("user-id") #This is a placeholder ID - in a normal experiment Statsig recommends using a user's actual unique ID for consistency in targeting. See https://docs.statsig.com/server/concepts/user ``` ### The ask_question Function @@ -121,7 +121,7 @@ import time openai.api_key = "your_openai_key" statsig.initialize("your_statsig_secret") -user = StatsigUser("user-id") #This is a placeholder ID - in a normal experiment Statsig recommends using a user's actual unique ID for consistency in targeting. See https://docs.statsig.com/client/concepts/user +user = StatsigUser("user-id") #This is a placeholder ID - in a normal experiment Statsig recommends using a user's actual unique ID for consistency in targeting. See https://docs.statsig.com/server/concepts/user def ask_question(): diff --git a/docs/integrations/snippets/integration_event_formats.mdx b/docs/integrations/snippets/integration_event_formats.mdx index 822654436..550f1f718 100644 --- a/docs/integrations/snippets/integration_event_formats.mdx +++ b/docs/integrations/snippets/integration_event_formats.mdx @@ -5,7 +5,7 @@ Events will be sent in batches in a JSON format. 
The structure of a Statsig Even | Field | Type | Description | | --------------- | ------ | -------------------------------------------------------------------- | | eventName | String | Name of the event provided | -| user | JSON | [Statsig User Object](https://docs.statsig.com/client/concepts/user) | +| user | JSON | [Statsig User Object](https://docs.statsig.com/server/concepts/user) | | userID | String | User ID provided | | timestamp | Number | Timestamp in MS of the event | | value | String | Value of the event provided | diff --git a/docs/integrations/snippets/stitch_event_formats.mdx b/docs/integrations/snippets/stitch_event_formats.mdx index fb3587b4d..f351a16c1 100644 --- a/docs/integrations/snippets/stitch_event_formats.mdx +++ b/docs/integrations/snippets/stitch_event_formats.mdx @@ -5,7 +5,7 @@ Events will be sent in batches in a JSON format. The structure of a Statsig Even | Field | Type | Description | | --------------- | ------ | ------------------------------------------------------------------------------------------------ | | event | String | Name of the event provided | -| user | JSON | [Statsig User Object](/client/concepts/user) | +| user | JSON | [Statsig User Object](/server/concepts/user) | | userId | String | User ID provided | | stableId | String | Stable ID | | timestamp | Number | Timestamp in MS of the event | diff --git a/docs/layers/introduction.md b/docs/layers/introduction.md index 3cbf5b6d9..f3dd9409a 100644 --- a/docs/layers/introduction.md +++ b/docs/layers/introduction.md @@ -1,5 +1,5 @@ --- -title: Introduction to Layers +title: Layers sidebar_label: Layers slug: /layers --- diff --git a/docs/metrics/console.md b/docs/metrics/console.md index 83a2ddd75..4a798628c 100644 --- a/docs/metrics/console.md +++ b/docs/metrics/console.md @@ -30,10 +30,3 @@ The **Metrics Catalog** tab allows you to search and tag your metrics, as well a ![Screen Shot 2022-06-07 at 12 09 40 PM](https://user-images.githubusercontent.com/101903926/172462947-877bbcc7-46b3-45cd-ac57-d0dc2c949d7d.png) - -## Charts -The **Charts** tab shows a set of user-level metric charts that are automatically created based on the events that you log, such as daily/ weekly/ monthly active users, user stickiness, and retention. You can also create custom charts that enable you to visualize customer journeys through your application. - - -![Screen Shot 2022-06-07 at 12 55 08 PM](https://user-images.githubusercontent.com/101903926/172470741-af6294d0-a84a-4630-80f8-827de7e0c03b.png) - diff --git a/docs/metrics/different-id.md b/docs/metrics/different-id.md index 230aaff3d..e3469894c 100644 --- a/docs/metrics/different-id.md +++ b/docs/metrics/different-id.md @@ -10,7 +10,7 @@ There are two common scenarios where the experiment assignment unit differs from 1. Measuring session-level metrics for a user-level experiment. Ratio metrics are commonly used to solve this (this doc). 2. Measuring logged-in metrics (eg. revenue) on a logged-out experiment. There are two solutions: - a. Running the experiment at the [device-level](/experiments-plus/experimentation/choosing-randomization-unit#other-stable-identifiers), with device-level metrics collected even after the user is logged-in. + a. Running the experiment at the [device-level](/guides/first-device-level-experiment), with device-level metrics collected even after the user is logged-in. b. Using [ID resolution](/statsig-warehouse-native/features/id-resolution). We will explain how to set up the first scenario with Warehouse Native in this doc. 
diff --git a/docs/product-analytics/dashboards.md b/docs/product-analytics/dashboards.md index 82f4d42ab..2dce985cb 100644 --- a/docs/product-analytics/dashboards.md +++ b/docs/product-analytics/dashboards.md @@ -84,13 +84,15 @@ If you want to duplicate or clone any of your dashboards, open the desired dashb You can click on the filters button below the dashboard name to add a global filter to your dashboard. The filter will be applied across all eligible widgets and you can quickly view updated results across all widgets, rather than having to filter each widget individually. You can also use free-form text to apply filters for more generic values, such as filtering emails that contain '@gmail.com'. -![image](https://github.com/user-attachments/assets/9539d980-c647-4c6e-892d-0d1bb5f7f390) +![image](https://github.com/user-attachments/assets/397d0197-632d-4f25-a8be-a5413575173f) + ### Refreshing your Dashboard Widgets To ensure your dashboard data is up to date, simply click the refresh button shown in the image below to refresh all dashboard widgets at once. -![image](https://github.com/user-attachments/assets/901cb3c9-6ad1-47e3-9d44-627cb2ac11d4) +![image](https://github.com/user-attachments/assets/2799df0e-2a71-454b-8fee-df0420cdf68b) + ### Organize your Dashboard @@ -104,11 +106,12 @@ Resize the the widget by clicking and holding the the bottom right edge of the w All of the charts we support in Metrics Explorer can be added to a dashboard. In addition, dashboard charts are not static. -To dive into a chart on the dashboard, click the [ ] icon. Once expanded, you can switch to the “Edit Query and Chart” tab to get the full power of Metrics Explorer, allowing you to modify the overall query and the date range. These modifications enable further exploration without permanently altering the chart on the dashboard. You can then navigate to the "Widget Settings" tab to change the chart title and display additional metadata for more context. +To dive into a chart on the dashboard, click the [ ] icon. Once expanded, you get the full power of Metrics Explorer, allowing you to modify the overall query, the date range and the chart title. These modifications enable further exploration without permanently altering the chart on the dashboard. If you want to save changes to a chart on the dashboard, configure the chart as desired and click "Save" to update the existing chart, or "Save As" to create a new chart on your dashboard. -![image](https://github.com/user-attachments/assets/e8d21373-33f2-43f9-a984-f25365b73080) +![image](https://github.com/user-attachments/assets/aa563da7-eab1-4578-a081-1ad1f343cc5c) + ## Tips diff --git a/docs/product-analytics/drilldown.md b/docs/product-analytics/drilldown.md index ae15fd307..10e9b9cfc 100644 --- a/docs/product-analytics/drilldown.md +++ b/docs/product-analytics/drilldown.md @@ -21,7 +21,7 @@ The Metric Drilldown chart in Metrics Explorer is a versatile tool for understan - **Filtering**: Focus on specific segments or cohorts that are of particular interest. This filtering capability allows for a more targeted analysis, helping you to understand the behaviors and needs of specific user groups. - **Statistical Understanding:** Understand how the average, median, or other percentile value (e.g. p99, p95) of a metric changes over time. - **Dynamic Metric Creation with Formulas**: Craft new metrics on the fly using custom formulas. This flexibility is useful in deriving ad-hoc insights with minimal effort. 
-- **Flexible Visualization Options**: Choose from a range of visualization formats, like line charts, bar charts, or stacked bar charts, to best represent your data. The right visualization can make complex data more understandable and actionable. +- **Flexible Visualization Options**: Choose from a range of visualization formats, like line charts, bar charts, horizontal bar charts, and stacked bar charts, to best represent your data. The right visualization can make complex data more understandable and actionable. - **Event Samples for Debugging**: Quickly access and analyze a metric’s underlying sample events, and the granular user-level information attached to the event. This feature is particularly useful for troubleshooting and understanding the root causes of trends or anomalies in your data. - **Detailed Data Control**: Adjust the granularity of your data analysis, from high-level overviews to detailed breakdowns. Use features like rolling averages to smooth data for more accurate trend analysis and decision-making. diff --git a/docs/pulse/export.md b/docs/pulse/export.md index d0e5f7fcd..08d5a37ff 100644 --- a/docs/pulse/export.md +++ b/docs/pulse/export.md @@ -6,11 +6,11 @@ slug: /pulse/export ## How to Export Pulse Data -![Finding Export Report](https://user-images.githubusercontent.com/77478319/163510492-e6bff7cf-9d7c-46b2-a276-ec2e550aa9a1.png) +![Finding Export Report](https://graphite-user-uploaded-assets-prod.s3.amazonaws.com/CbjKvuo40oMU45psWLvG/a2d68701-6828-47d2-8fde-b44a5cea4abb.png) You can export your Pulse Results for Feature Gates and Experiments. Simply navigate to the relevant "Pulse Results" page, and click "Export Report". Exporting results can take up to 10 minutes. A notification and an email will be sent when the report is ready, and a link will be available under under Project Settings -> Reports. You can export results only if your Pulse screen has results. -![Export Pulse Report Menu](https://user-images.githubusercontent.com/77478319/163458999-bcf599ec-4564-460a-87ba-c08975589b3b.png) +![Export Pulse Report Menu](https://graphite-user-uploaded-assets-prod.s3.amazonaws.com/CbjKvuo40oMU45psWLvG/5af19e59-f2b7-492b-9dc2-9439e447dbcc.png) ## Report Types @@ -26,11 +26,10 @@ There are three types of export: 1. `_first_exposures.csv` - contains a list of users and their first exposure to the experiment. If this is the only file you are interested in, you can get this by exporting an "Exposures" report which will be much smaller in size. 2. `_user_metrics.csv` - contains a list of experimental users, and their calculated metrics for each day they were enrolled in the experiment. -The availability of these exports are subject to our retention policy. We hold exposures data for up-to 90 days after an experiment is concluded. We hold raw user-level metrics data for 90 days. +The availability of these exports are subject to our retention policy. We hold exposures data for up-to 90 days after an experiment is concluded. We hold raw user-level metrics data for 90 days. ### Pulse Summary File Description - For Feature Gates - | Column Name | Description | | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | name | Name of the Experiment or Feature Gate | @@ -52,10 +51,8 @@ The availability of these exports are subject to our retention policy. 
We hold | rel_stderr | The estimated standard error of rel_delta (abs_delta/ctrl_mean) | | z_score | The calculated Z-score | - ### Pulse Summary File Description - For Experiments - | Column Name | Description | | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | name | Name of the Experiment or Feature Gate | @@ -72,29 +69,26 @@ The availability of these exports are subject to our retention policy. We hold ### First Exposures File Description -| Column Name | Description | -| --------------------------------- | --------------------------------------------------------------------------------------------- | -| unit_id | Refers to the unit identifier used in the experiment (eg. user_id, stable_id, org_id) | -| name | The name of the gate/experiment | -| rule | For gates, this refers to the rule name | -| experiment_group | The group the user was assigned to | -| first_exposure_utc | The UTC timestamp when the user was first assigned to the experiment | -| first_exposure_pst_date | The date in PST when the user was first assigned to the experiment | -| as_of_pst_date | The date this data was generated | -| user_dimensions | JSON-formatted key-value pairs describing the user's attributes at the time of first exposure | - +| Column Name | Description | +| ----------------------- | --------------------------------------------------------------------------------------------- | +| unit_id | Refers to the unit identifier used in the experiment (eg. user_id, stable_id, org_id) | +| name | The name of the gate/experiment | +| rule | For gates, this refers to the rule name | +| experiment_group | The group the user was assigned to | +| first_exposure_utc | The UTC timestamp when the user was first assigned to the experiment | +| first_exposure_pst_date | The date in PST when the user was first assigned to the experiment | +| as_of_pst_date | The date this data was generated | +| user_dimensions | JSON-formatted key-value pairs describing the user's attributes at the time of first exposure | ### Unit Metrics File Description - -| Column Name | Description | -| ------------------- | ------------------------------------------------------------------------------------------- | -| pst_ds | The 24hr window the the data refers to. All dates are anchored from 12:00a -> 11:59p PST. | -| unit_id | Refers to the unit identifier used in the experiment (eg. user_id, stable_id, org_id) | -| metric_type | The category of the metric | -| metric_name | The name of the metric | -| metric_dimension | The name of the metric dimension. '!statsig_topline' is the overall metric with no slicing. | -| metric_value | The numeric value of the metric | -| numerator | For some metrics, we track the numerator | -| denominator | For some metrics, we track the denominator | - +| Column Name | Description | +| ---------------- | ------------------------------------------------------------------------------------------- | +| pst_ds | The 24hr window the the data refers to. All dates are anchored from 12:00a -> 11:59p PST. | +| unit_id | Refers to the unit identifier used in the experiment (eg. user_id, stable_id, org_id) | +| metric_type | The category of the metric | +| metric_name | The name of the metric | +| metric_dimension | The name of the metric dimension. '!statsig_topline' is the overall metric with no slicing. 
| +| metric_value | The numeric value of the metric | +| numerator | For some metrics, we track the numerator | +| denominator | For some metrics, we track the denominator | diff --git a/docs/sdks/client-vs-server.mdx b/docs/sdks/client-vs-server.mdx index 153af64a2..6ebf5ec50 100644 --- a/docs/sdks/client-vs-server.mdx +++ b/docs/sdks/client-vs-server.mdx @@ -38,7 +38,7 @@ At a high level, here are some of the key differences between the two types of S **Client SDKs:** - Use a **client SDK key** -- Take a [StatsigUser](/client/concepts/user) object +- Take a [StatsigUser](/server/concepts/user) object - Check for cached values in local storage - Fetch precomputed configuration parameters for the specified user diff --git a/docs/sdks/debugging.mdx b/docs/sdks/debugging.mdx index 0ebb9abeb..8ded8e781 100644 --- a/docs/sdks/debugging.mdx +++ b/docs/sdks/debugging.mdx @@ -23,6 +23,23 @@ When debugging why a certain user got a certain value, there are a number of too ![Screen Shot 2023-04-27 at 11 20 14 AM](https://user-images.githubusercontent.com/74584483/234956317-e65f7fd3-d87d-4616-b905-ee4df097863e.png) +### Logging Levels and Expected Information +Log-line feedback is one of the simplest tools you have to understand how your SDK is behaving. Our SDKs have multiple log levels to decide what information you'd like to receive: + +- Debug: Detailed logs useful for new users onboarding with the SDK and for diagnosing potential issues, such as: + - Messages when a feature gate does not exist + - Tracking process flows within the SDK +- Info: General information about the SDK’s operation, typically relevant to regular usage, such as: + - Messages regarding SDK initialization, including source and version information + - Notifications when the configuration store is populated +- Warning: Logs about unusual events that may impact functionality but are automatically managed and recovered, such as: + - Messages on non critical errors caught by the SDK + - Notifications about reconnection attempts to gRPC services +- Error: Critical logs about issues that severely impact the SDK’s functionality, such as: + - Messages about initialization failures or timeouts + - Notifications indicating gRPC fallback, suggesting gRPC is unavailable or incorrect configuration + + ## Evaluation Details @@ -34,7 +51,7 @@ When debugging why a certain user got a certain value, there are a number of too ### Evaluation Reason **Evaluation reasons** are a way to understand why a certain value was returned for a given check. All SDKs provide the [Data Source](/client/javascript-sdk/init-strategies/) - which is where your Statsig Client/Server instance is getting its data. Newer SDKs also provide a Reason, which lets you know if an individual check was valid or overridden versus how you've initialized. These reasons are intended to be used for debugging and internal logging purposes only, and are sometimes updated in new SDK versions. - + @@ -54,6 +71,9 @@ When debugging why a certain user got a certain value, there are a number of too | `Error` | An unknown error has occurred, and was logged to Statsig servers.| Error| Reach out to us in [Slack](https://statsig.com/slack) for support. | | `Error:NoClient` (js-client-only) | No client was found in your StatsigContext. | Error | You've likely made a call to a Statsig hook outside of a ``, verify your setup and try again. 
| | `Unrecognized` (old SDKs) | The SDK was initialized, but this gate/experiment/config did not exist in the set of values.| Error| Confirm the experiment or gate is configured in the Statsig console and you're using the correct API key.| + | `NoValues` | You've attempted to initialize, but it didn't successfully retrieve values. | Error | You're either calling initializeSync before users have cached values, or your call to initializeAsync has failed (check that your client key is correct!) | + | `Loading` | You've tried to initialize, but it hasn't finished yet. | Error | If you're using initializeAsync, you may need to await it, or otherwise prevent config checks before values are loaded. | + | `Uninitialized` | You haven't attempted to initialize yet. | Error | Ensure you're explicitly calling initializeAsync() or initializeSync(), or check if you've passed any StatsigOptions that could prevent network requests from happening. | ### #2. Reason (new SDKs only) @@ -67,7 +87,7 @@ When debugging why a certain user got a certain value, there are a number of too For example: `Network:Recognized` means the sdk had up to date values from a successful initialization network request, and the gate/config/experiment you were checking was defined in the payload. - If you are not sure why a config was not included (resulting in an "Unrecognized" source), it could be excluded due to [Target Apps](/sdk-keys/target-apps), or [Client Bootstrapping](/client/concepts/bootstrapping). + If you are not sure why a config was not included (resulting in an "Unrecognized" source), it could be excluded due to [Target Apps](/sdk-keys/target-apps), or [Client Bootstrapping](/client/concepts/initialize#bootstrapping-overview). diff --git a/docs/sdks/getting-started.md b/docs/sdks/getting-started.mdx similarity index 70% rename from docs/sdks/getting-started.md rename to docs/sdks/getting-started.mdx index 6b46900e7..0ba956e2d 100644 --- a/docs/sdks/getting-started.md +++ b/docs/sdks/getting-started.mdx @@ -1,9 +1,16 @@ --- -title: Getting Started with Statsig's SDKs -sidebar_label: Getting Started +title: SDK Overview +sidebar_label: SDK Overview slug: /sdks/getting-started --- +import SDKAndFrameworks from '../../src/components/getting-started/SDKAndFrameworks'; +import Styles from '../../src/components/getting-started/Styles'; + + + + + Statsig provides a comprehensive set of SDKs to integrate experimentation, feature flagging, and logging into your applications. With support for over **30 platforms**, Statsig’s SDKs enable you to control feature rollouts and experiments seamlessly, whether you're building for **web**, **mobile**, or **server-side** environments. --- @@ -38,37 +45,7 @@ Additionally, for frameworks like **Next.js** that bridge client and server-side --- -## Client-Side SDKs - -Our client-side SDKs are designed for front-end applications, enabling instant event logging, feature gating, and user assignments. 
Whether you're building for mobile, web, or other client platforms, Statsig offers SDKs tailored for your needs: - -- [JavaScript](/client/javascript-sdk) -- [React](/client/javascript-sdk/react) -- [React Native](/client/javascript-sdk/react-native) -- [Expo](/client/javascript-sdk/expo) -- [iOS](/client/iosClientSDK) -- [Android](/client/androidClientSDK) -- [.NET](/client/dotnetSDK) -- [Unity](/client/unitySDK) -- [Roku](/client/rokuSDK) -- [C++](/client/cpp-client-sdk) -- [Dart/Flutter](/client/dartSDK) - ---- - -## Server-Side SDKs - -Server-side SDKs allow you to manage experiments and feature flags from your backend, providing more control and reliability. They’re especially useful for server-driven features, background processes, and system events: - -- [Node.js](/server/nodejsServerSDK) -- [Java](/server/javaSdk) -- [Python](/server/pythonSDK) -- [Go](/server/golangSDK) -- [Ruby](/server/rubySDK) -- [.NET](/server/dotnetSDK) -- [PHP](/server/phpSDK) -- [C++](/server/cppSDK) -- [Rust](/server/rustSDK) + --- diff --git a/docs/server/Templates/_SdkMonitoring.mdx b/docs/server/Templates/_SdkMonitoring.mdx index 03328ea8e..720136582 100644 --- a/docs/server/Templates/_SdkMonitoring.mdx +++ b/docs/server/Templates/_SdkMonitoring.mdx @@ -2,7 +2,7 @@ import { VersionBadge } from "../../sdks/_VersionBadge.mdx"; ## SDK Monitoring -The SDK provide an option to integrate with your preferred observability tool to monitor the SDK's behavior and performance. For detailed information and metrics emitted, please see **[sdk monitoring](https://docs.statsig.com/server/concepts/sdk_monitoring)** +The SDK provide an option to integrate with your preferred observability tool to monitor the SDK's behavior and performance. For detailed information and metrics emitted, please see **[sdk monitoring](/sdk_monitoring/)** #### ObservabilityClient interface The SDK provides the following interface methods to track various metrics: diff --git a/docs/server/_server_core.mdx b/docs/server/_server_core.mdx new file mode 100644 index 000000000..d8707d5a4 --- /dev/null +++ b/docs/server/_server_core.mdx @@ -0,0 +1,5 @@ +## Motivation & Background + +PHP-Core is a new server SDK leveraging [Statsig's Server Core](https://github.com/statsig-io/statsig-server-core), a performance-focused evaluation & logging library written in Rust. Early benchmarking suggests Server Core can evaluate Gates and Experiments in a small fraction of the time native SDKs are capable of. + +We plan to offer Server Core across multiple languages in the near future. Want another language sooner? Reach out in our [Slack Channel](https://statsig.com/slack). diff --git a/docs/server/concepts/all_assignments.mdx b/docs/server/concepts/all_assignments.mdx index e83b1f84a..93ac075d9 100644 --- a/docs/server/concepts/all_assignments.mdx +++ b/docs/server/concepts/all_assignments.mdx @@ -15,7 +15,7 @@ For this use-case, we recommend using the `getClientInitializeResponse` server s #### Example of capturing all assignments in Node -Note that this method is designed to [bootstrap](/client/concepts/bootstrapping) the client SDKs, and as such, will hash the experiment and feature keys returned in the payload, obfuscating their names to mitigate the possibility of end-users spoofing into features & gates. 
You can now provide an optional `hash` parameter, allowing you to disable hashing and capture all group names and values in plain text — [Node](https://github.com/statsig-io/node-js-server-sdk/blob/ea116142221c1aa83b46eff8b5f2292c8f8e2d54/src/StatsigServer.ts#L597), [Python](https://github.com/statsig-io/node-js-server-sdk/blob/ea116142221c1aa83b46eff8b5f2292c8f8e2d54/src/StatsigServer.ts#L597), [Java](https://github.com/statsig-io/java-server-sdk/blob/7443c357c78616142de9257af9e4c55c877ca700/src/main/kotlin/com/statsig/sdk/StatsigServer.kt#L83), [Go](https://github.com/statsig-io/go-sdk/blob/3d7edcbe468efb0fc7a04b0d10202243403dce5f/client.go#L282). +Note that this method is designed to [bootstrap](/client/concepts/initialize#bootstrapping-overview) the client SDKs, and as such, will hash the experiment and feature keys returned in the payload, obfuscating their names to mitigate the possibility of end-users spoofing into features & gates. You can now provide an optional `hash` parameter, allowing you to disable hashing and capture all group names and values in plain text — [Node](https://github.com/statsig-io/node-js-server-sdk/blob/ea116142221c1aa83b46eff8b5f2292c8f8e2d54/src/StatsigServer.ts#L597), [Python](https://github.com/statsig-io/node-js-server-sdk/blob/ea116142221c1aa83b46eff8b5f2292c8f8e2d54/src/StatsigServer.ts#L597), [Java](https://github.com/statsig-io/java-server-sdk/blob/7443c357c78616142de9257af9e4c55c877ca700/src/main/kotlin/com/statsig/sdk/StatsigServer.kt#L83), [Go](https://github.com/statsig-io/go-sdk/blob/3d7edcbe468efb0fc7a04b0d10202243403dce5f/client.go#L282). ```node const assignments = statsig.getClientInitializeResponse(userObj, "client-key", {hash: "none"}); diff --git a/docs/server/concepts/user.mdx b/docs/server/concepts/user.mdx index 2b9a97220..158543f04 100644 --- a/docs/server/concepts/user.mdx +++ b/docs/server/concepts/user.mdx @@ -1,41 +1,83 @@ --- -title: Server StatsigUser Object -sidebar_label: Server StatsigUser Object +title: The StatsigUser Object +sidebar_label: Passing a User to SDKs --- -## Introduction to the StatsigUser object for server and client "on device evaluation" SDKs +import Tabs from "@theme/Tabs"; +import TabItem from "@theme/TabItem"; -When calling APIs that require a StatsigUser object, you should pass as much information as possible in order to take advantage of advanced gate and config conditions (like country or OS/browser level checks), and correctly measure impact of your experiments on your metrics/events. The userID field is required because it's needed to provide a consistent experience for a given user (click [here](/messages/serverRequiredUserID) to understand further why it's important to always provide a userID). -#why-is -## User Attributes +## Introduction to the StatsigUser object + +The user object ("StatsigUser" in our SDKs) is the sole input you provide Statsig to target gates and assign users to experiment variants. Every additional field you add to your StatsigUser is one you can target on, or filter your metrics by - so **we recommend providing as much info as possible**. Statsig can also infer some information about each UserObject based on other traits (for example, we resolve IP Addresses into countries), read on for more details. 
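As a rough sketch in JavaScript (the gate name `new_checkout_flow` and all field values are illustrative; the exact check call varies slightly by SDK and version):

```js
const statsig = require("statsig-node");

async function example() {
  await statsig.initialize("server-secret-key");

  // A richly-populated StatsigUser: every extra field is another thing you can
  // target on in gates/experiments or slice metrics by. All values are examples.
  const user = {
    userID: "user-123",
    email: "marcos@statsig.com",
    ip: "192.168.1.101",
    userAgent: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    country: "US",
    locale: "en_US",
    appVersion: "1.0.1",
    custom: { subscriber: "yes" },
    customIDs: { company_id: "company_xyz" },
  };

  const passes = await statsig.checkGate(user, "new_checkout_flow");
  console.log("new_checkout_flow:", passes);
}

example();
```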
-| Attributes | Description | Example | -| ----------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- | -| User ID | ID representing a unique user. This ID will be used to guarantee consistency of targeting for Feature Gates and Experiments and will be used to evaluate experiment results. | `your_user_id` | -| Email | Email of the user | `marcos@statsig.com` | -| User Agent | User agent of the browser. This will be decoded to determine the Browser and Operating System of the user's context | `Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.40 Safari/537.36` | -| IP | IP address of the user | `192.168.1.101` | -| Country | 2 letter country code of the user. Inferred from the IP if not set when evaluating a country condition | `US` | -| Locale | Locale of the user | `en_US` | -| App Version | Version of the app the user is using | `1.0.1` | -| Custom | Dictionary that can contain key/value pairs that can be used for Feature Gate targeting. The content of this dictionary will be stored and available after targeting | `{skill_level: "5", is_subscriber:"false" ...}` | -| Private Attributes| Dictionary that can contain key/value pairs that can be used to evaluate feature gate conditions and segment conditions. The content of this dictionary will **not** be stored after used for targeting and will be removed from any log_event calls | `{sensitive_field: "sensitive_information", ...}` | -| Custom IDs | Dictionary that can contain key/value pairs used as the randomization unit ID for experiments that are set up using these IDs instead of the `User ID` | `{account_id: "23456555", company_id: "company_xyz"}` | +### I passed that attribute before - why do I need to pass it again? -### What fields can I override to set "Operating System" and "Browser" explicitly? +It's important to understand that Statsig evaluates gates and experiment buckets experiments based **only on the information you provide when you check something.** Statsig's promise of evaluating every gate or experiment in milliseconds is dependent on having all information available at request time, rather than searching through previous data. -If you set the userAgent field, server SDKs will parse out the OS/Browser information for evaluating those conditions. But what if you want to explicitly set this yourself? You can set it in two places: either top level in the user object (which typing may not allow for some languages), or in the "custom" object. +## User Attributes -You must provide this information under the following keys: + + + +All user attributes can be explicitly supplied, and some can be inferred from a user's device or connection. Supplying one will always override an inferred value. 
+ +| Key | Description | Example | Client SDK Support | Auto-infer | +|---------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|--------------------|------------| +| `userID` | ID representing a unique user. This ID will be used to guarantee consistency of targeting for Feature Gates and Experiments and will be used to evaluate experiment results. If `User ID` doesn't exist yet, leave this empty; a `Stable ID` persisted locally will be used for evaluations. | `your_user_id` | All | | +| `email` | Email of the user. | `marcos@statsig.com` | All | | +| `userAgent` | User agent of the browser. This will be decoded to determine the Browser and Operating System of the user's context. Will be inferred if not provided. | `Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.40 Safari/537.36` | Web | ✔ | +| `ip` | IP address of the user. Inferred from the request to /initialize if not provided | `192.168.1.101` | All | ✔ | +| `country` | 2-letter country code of the user. This can be supplied or inferred, and we can target based on the country code in both cases. | `US` | All | ✔ | +| `locale` | Locale of the user. When using the Android or iOS SDK, this will be inferred if not provided. | `en_US` | Mobile | ✔ | +| `appVersion` | Version of the app the user is using. When using the Android or iOS SDK, this will be inferred if not provided. | `1.0.1` | Mobile | ✔ | +| `systemName` | When using our Android/iOS SDKs, this will be automatically assigned, but you can also provide an explicit operating system to override. | `Android` | All | ✔ | +| `systemVersion` | When using our Android/iOS SDKs, this will be automatically assigned, but you can also provide an explicit OS version to override. | `15.4` | All | ✔ | +| `browserName` | When using our Web SDK, this will be automatically assigned, but you can also provide an explicit Browser Name to override. | `Chrome` | Web | ✔ | +| `browserVersion` | When using our Web SDK, this will be automatically assigned, but you can also provide an explicit Browser Version to override. | `45.0` | Web | ✔ | +| `custom` | Dictionary that can contain key/value pairs that can be used for Feature Gate targeting. The content of this dictionary will be stored and available after targeting. | `{subscriber: "yes", ...}` | All | | +| `privateAttributes` | Dictionary that can contain key/value pairs that can be used for Feature Gate targeting. The content of this dictionary will **not** be stored after being used for targeting and will be removed from any `log_event` calls. | `{sensitive_field: "sensitive_information", ...}` | All | | +| `customIDs` | Dictionary that can contain key/value pairs used as the randomization unit ID for experiments that are set up using these IDs instead of the `User ID`. | `{account_id: "23456555", company_id: "company_xyz"}` | All | | + + + + + +All user attributes can be explicitly supplied, and some can be inferred from a provided IP Address or User Agent. Supplying one will always override an inferred value. 
+
+| Attributes | Description | Example | Auto Infer |
+| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- |------------|
+| `userID` | ID representing a unique user. This ID will be used to guarantee consistency of targeting for Feature Gates and Experiments and will be used to evaluate experiment results. | `your_user_id` | |
+| `email` | Email of the user | `marcos@statsig.com` | |
+| `userAgent` | User agent of the browser. This will be decoded to determine the Browser and Operating System of the user's context | `Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.40 Safari/537.36` | |
+| `ip` | IP address of the user | `192.168.1.101` | |
+| `country` | 2-letter country code of the user | `US` | ✔, from IP |
+| `locale` | Locale of the user | `en_US` | ✔, from IP |
+| `appVersion` | Version of the app the user is using | `1.0.1` | |
+| `custom` | Dictionary that can contain key/value pairs that can be used for Feature Gate targeting. The content of this dictionary will be stored and available after targeting | `{skill_level: "5", is_subscriber:"false" ...}` | |
+| `privateAttributes` | Dictionary that can contain key/value pairs that can be used to evaluate feature gate conditions and segment conditions. The content of this dictionary will **not** be stored after being used for targeting and will be removed from any log_event calls | `{sensitive_field: "sensitive_information", ...}` | |
+| `customIDs` | Dictionary that can contain key/value pairs used as the randomization unit ID for experiments that are set up using these IDs instead of the `User ID` | `{account_id: "23456555", company_id: "company_xyz"}` | |
+
+
+### How to override "Operating System" and "Browser" explicitly
+
+Operating system and Browser are two default targeting options on Statsig, and if you set the userAgent field, server SDKs will parse out the OS/Browser information to evaluate them. If you prefer to set these explicitly, you can do so in two places: either top-level in the user object (which typing may not allow in some languages), or in the "custom" object. You need to provide this information under the following keys:
- Operating System: os_name
- OS Version: os_version
- Browser Name: browser_name
- Browser Version: browser_version
-So, for example, you could set this one of two ways in the user object:
+As an example, you could set this in either of two ways in the user object:
-```
+```json
 {
   userID: "uuid",
   os_name: "Android", // top level
@@ -44,8 +86,11 @@ So, for example, you could set this one of two ways in the user object:
   }
 }
 ```
-
 If either of these fields is explicitly set, it will take precedence over inferring the value from the `userAgent` field.
+
+
+
+
 ### Have sensitive user PII data that should not be logged?
@@ -53,27 +98,32 @@ On the StatsigUser object, there is a field called privateAttributes, which is a
 For example, if you have feature gates that should only pass for users with emails ending in "@statsig.com", but do not want to log your users' email addresses to Statsig, you can simply add the key-value pair `{ email: "my_user@statsig.com" }` to privateAttributes on the user and that's it!
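+
+A minimal sketch of what that looks like in practice (TypeScript; the gate name and the `checkGate` call shown in the comment are illustrative assumptions, not a specific SDK signature):
+
+```typescript
+// The email lives only in privateAttributes, so it can be used to evaluate
+// the gate condition but is stripped before anything is stored or logged.
+const user = {
+  userID: "a-user-id",
+  privateAttributes: {
+    email: "my_user@statsig.com", // used for targeting, never persisted or attached to log_event
+  },
+};
+
+// Pass `user` to your SDK's gate check as usual, for example:
+// const passedEmailGate = await statsig.checkGate(user, "statsig_email_only");
+```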
-### Why is StatsigUser with a UserID (or any customID) required for server SDKs?
+### Time-based conditions
-In Server SDKs, a StatsigUser with a userID (or any customID) is required for checkGate, getConfig, and getExperiment. We always recommend using the actual user ID if it's available: users will get a stable experience, and subsequent events will be attributed to the correct users so you can accurately measure downstream metrics.
+All SDKs (both server and client-side) support unix timestamps in milliseconds to evaluate time-based conditions (After time, before time). Without knowing all possible variations of DateTime formats, we have to normalize on something, so it's best to convert your DateTime field into a standard format for evaluation.
-Still aren't sure whether you need to provide an ID? Here are our suggestions for different use cases:
+We have added support for ISO timestamps to some server SDKs.
+- The `java-server` sdk, as of `v1.6.0` supports [DateTime fields in the format of `ISO_INSTANT`](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html#ISO_INSTANT)
+- The `go` sdk, as of `v1.12.1` supports [DateTime fields in the format of `RFC3339`](https://pkg.go.dev/time#pkg-constants)
+- The `python` sdk supports the usage of epoch time in seconds. Using `time.time()` may include sub-second components, so you should round the value to an integer.
-If you only plan to use feature gates to turn on/off a feature for all of your users, or for all users passing certain conditions (e.g. "country is US"), you can pass any non-empty identifier, hard coded string, or a random ID as the userID if you do not have the actual user ID or any kind of custom IDs. Note that you cannot target the empty string in the statsig console.
+### Why is an ID always required for server SDKs?
-If you want to rollout a feature partially first, make sure it does not cause significant regressions, then roll out to all users, you should pass the persistent user IDs in your checkGate/getConfig/getExperiment calls, as well as any logEvent calls you make. This way, we are able to attribute the events you log to the correct users who saw or didn't see your new feature, and calculate metrics correctly to help you understand whether there was any regression.
+In Server SDKs, a StatsigUser with a userID (or customID) is required for checkGate, getConfig, and getExperiment. In short, it's *always* better to pass the user ID if it's available: users will get a stable experience, and any events will be attributed to the correct users so you can accurately measure downstream metrics.
-If you want to run an A/B experiment to decide whether to ship a new feature, you should also pass the persistent user IDs (or custom IDs for experiments and feature gates based on other ID types), for the same reason mentioned in 2) above.
+Still don't want to pass an ID? Here are our suggestions for different use cases:
-If you want to pass a userID for the above reasons, but don't have a logged in user (e.g. you are optimizing the login flow), set a stable identifier as a cookie or in local storage and use that with each call to Statsig.
+1. 
If you plan to only use on/off feature gates, or non-percent-based rules (like countries):
+While you're still losing functionality, you can pass any non-empty identifier, a hard-coded string, or **a random ID from a small, fixed set (for example, a random number under 100) if you do not have the actual user ID.** Don't pass a purely random, unbounded ID: we won't be able to dedupe your events, and you'll explode your event usage and your Statsig bill.
-We hope this is helpful. If you have a use case that is not covered in these scenarios, or have any question at all, feel free to join our Slack community and drop a question/comment there!
+2. If you want to roll out a feature partially, check for regressions, and then roll out to everyone, you must pass an ID in your checkGate/getConfig/getExperiment calls, as well as any logEvent calls you make. Otherwise, we're not able to attribute the events you log to the correct users who saw or didn't see your new feature, or calculate metrics correctly to help you see any regressions.
-### Time-based conditions
+3. If you want to run an A/B experiment to decide whether to ship a new feature, you should also **pass the persistent user IDs**, for the same reason mentioned in 2 above.
-All SDKs support unix timestamps in milliseconds to evaluate time based conditions (After time, before time). Without knowing all possible variations of DateTime formats, we have to normalize on something, so its best to convert your DateTime field into a standard format for evaluation.
+4. If you want to pass a userID for the above reasons, but don't have a logged-in user (e.g. you are optimizing the login flow), set a stable identifier (we provide one in client SDKs!) as a cookie or in local storage and use that with each call to Statsig.
-We have added support for ISO timestamps to some server SDKs.
-- The `java-server` sdk, as of `v1.6.0` supports [DateTime fields in the format of `ISO_INSTANT`](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html#ISO_INSTANT)
-- The `go` sdk, as of `v1.12.1` supports [DateTime fields in the format of `RFC3339`](https://pkg.go.dev/time#pkg-constants)
-- The `python` sdk supports the usage of epoch time in seconds. using `time.time()` may include sub-second components so you should use round the value to an integer
+We hope this is helpful. If you have a use case that is not covered in these scenarios, or have any questions at all, feel free to join our [Slack community](https://statsig.com/slack) and drop a question/comment there!
+
+### Pass all IDs when you have them
+
+A common mistake is to accidentally expose users without a userID to a userID-based experiment. All of them will be bucketed into one group (whichever group the "null" userID lands in), polluting your results. If you run experiments on multiple ID types (or you might one day), it's best to pass every identifier available.
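+
+As a sketch of that advice (TypeScript; the storage key, helper names, and the `account_id` custom ID type are hypothetical, and client SDKs already generate a Stable ID for you automatically):
+
+```typescript
+// Build a StatsigUser that carries every identifier you have, falling back to
+// a locally persisted stable ID when nobody is logged in, so pre-login checks
+// and events still roll up to one consistent unit.
+function getStableID(): string {
+  const KEY = "my_app_stable_id"; // hypothetical storage key
+  let id = localStorage.getItem(KEY);
+  if (!id) {
+    id = crypto.randomUUID();
+    localStorage.setItem(KEY, id);
+  }
+  return id;
+}
+
+function buildStatsigUser(loggedInUserID?: string, accountID?: string) {
+  return {
+    userID: loggedInUserID ?? getStableID(),
+    ...(accountID ? { customIDs: { account_id: accountID } } : {}),
+  };
+}
+```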
diff --git a/docs/server/java/_manualExposures.mdx b/docs/server/java/_manualExposures.mdx
new file mode 100644
index 000000000..c75197e85
--- /dev/null
+++ b/docs/server/java/_manualExposures.mdx
@@ -0,0 +1,53 @@
+import CodeBlock from "@theme/CodeBlock";
+
+import ManualExposuresTemplate from "../../sdks/_manual-exposures-template.mdx";
+
+export const Snippets = {
+  // Gates
+  gateSnippet: (
+    
+      {`val passed_or_failed = Statsig.checkGateWithExposureLoggingDisabled(user, "a_gate");`}
+    
+  ),
+  gateExposureSnippet: (
+    
+      {`Statsig.manuallyLogGateExposure(user, "a_gate");`}
+    
+  ),
+  // Configs
+  configSnippet: (
+    
+      {`val config = Statsig.getConfigWithExposureLoggingDisabled(user, "awesome_product_details")`}
+    
+  ),
+  configExposureSnippet: (
+    
+      {`Statsig.manuallyLogConfigExposure(user, "awesome_product_details");`}
+    
+  ),
+  // Experiment
+  experimentSnippet: (
+    
+      {`val titleExperiment = Statsig.getExperimentWithExposureLoggingDisabled(user, "new_user_promo_title")`}
+    
+  ),
+  experimentExposureSnippet: (
+    
+      {`Statsig.manuallyLogConfigExposure(user, "new_user_promo_title");`}
+    
+  ),
+  // Layer
+  layerSnippet: (
+    
+      {`val layer = Statsig.getLayerWithExposureLoggingDisabled(user, "user_promo_experiments")
+val promoTitle = layer.getString("title", "Welcome to Statsig!")`}
+    
+  ),
+  layerExposureSnippet: (
+    
+      {`Statsig.manuallyLogLayerParameterExposure(user, "user_promo_experiments");`}
+    
+  ),
+};
+
+;
diff --git a/docs/server/javaSdk.mdx b/docs/server/javaSdk.mdx
index c7990e8a4..a9e685638 100644
--- a/docs/server/javaSdk.mdx
+++ b/docs/server/javaSdk.mdx
@@ -40,6 +40,7 @@ import ShutdownSnippet from "./java/_shutdown.mdx";
 import OverridesSnippet from "./java/_localOverrides.mdx";
 import * as ReferenceSnippets from "./java/_reference.mdx";
 import ClientInitResponseSnippet from "./java/_clientInitResponse.mdx";
+import { Snippets as ManualExposureSnippets } from "./java/_manualExposures.mdx";
 import ForwardProxyExample from "./java/_forwardProxyExample.mdx";
 import MultiInstanceExample from "./java/_multiInstanceExample.mdx";
 import PersistentStorageInterface from "./java/_persistentStorageInterface.mdx";
@@ -91,6 +92,7 @@ export const Builder = SDKDocsBuilder({
       snippet: ,
     },
   ],
+  [ManualExposures, { addedInVersion: "0.11.0", ...ManualExposureSnippets }],
   [
     UserPersistentStorage,
     { interface: , example:  },
diff --git a/docs/server/php-core/_checkGate.mdx b/docs/server/php-core/_checkGate.mdx
new file mode 100644
index 000000000..3fcbc2dec
--- /dev/null
+++ b/docs/server/php-core/_checkGate.mdx
@@ -0,0 +1,7 @@
+```php
+use Statsig\Statsig;
+use Statsig\StatsigUserBuilder;
+
+$user = StatsigUserBuilder::withUserID('my_user')->build();
+$passed = $statsig->checkGate($user, 'my_gate');
+```
\ No newline at end of file
diff --git a/docs/server/php-core/_getConfig.mdx b/docs/server/php-core/_getConfig.mdx
new file mode 100644
index 000000000..a01d78d8c
--- /dev/null
+++ b/docs/server/php-core/_getConfig.mdx
@@ -0,0 +1,4 @@
+```php
+$user = StatsigUserBuilder::withUserID('my_user')->build();
+$config = $statsig->getDynamicConfig($user, 'my_config');
+```
\ No newline at end of file
diff --git a/docs/server/php-core/_getExperiment.mdx b/docs/server/php-core/_getExperiment.mdx
new file mode 100644
index 000000000..3789f67a6
--- /dev/null
+++ b/docs/server/php-core/_getExperiment.mdx
@@ -0,0 +1,4 @@
+```php
+$user = StatsigUserBuilder::withUserID('my_user')->build();
+$xp = $statsig->getExperiment($user, 'an_experiment');
+```
\ No newline at end of file
diff --git 
a/docs/server/php-core/_initialize.mdx b/docs/server/php-core/_initialize.mdx
new file mode 100644
index 000000000..cf63cc6a6
--- /dev/null
+++ b/docs/server/php-core/_initialize.mdx
@@ -0,0 +1,32 @@
+You'll want to add your server secret key to the environment, by adding it to a .env file or exporting it directly on the command line:
+
+```shell
+export STATSIG_SECRET_KEY=secret-123456789
+```
+
+In the case of the Slim framework, you'll also need to add Statsig as a dependency in `app/dependencies.php`. In other frameworks you'll need to initialize elsewhere; [reach out in Slack](https://statsig.com/slack) if you need help.
+
+```php
+// At the top of your file
+use Statsig\Statsig;
+use Statsig\StatsigOptions;
+use Statsig\StatsigLocalFileEventLoggingAdapter;
+use Statsig\StatsigLocalFileSpecsAdapter;
+
+// In the case of the Slim framework, in the container builder definitions:
+
+Statsig::class => function (ContainerInterface $c) {
+    $sdk_key = getenv("STATSIG_SECRET_KEY");
+
+    $options = new StatsigOptions(
+        null,
+        null,
+        new StatsigLocalFileSpecsAdapter($sdk_key, "/tmp"),
+        new StatsigLocalFileEventLoggingAdapter($sdk_key, "/tmp")
+    );
+
+    $statsig = new Statsig($sdk_key, $options);
+    $statsig->initialize();
+    return $statsig;
+},
+```
\ No newline at end of file
diff --git a/docs/server/php-core/_install.mdx b/docs/server/php-core/_install.mdx
new file mode 100644
index 000000000..8de805676
--- /dev/null
+++ b/docs/server/php-core/_install.mdx
@@ -0,0 +1,34 @@
+:::note
+This SDK is in Beta, and some instructions here may change over time. **This guide follows setup in the [Slim framework](https://www.slimframework.com/) as an example**; you may need to adjust it for your setup. See a full, working example [here](https://github.com/daniel-statsig/statsig-php-core-slim-example). If you need help or would like guidance specific to another framework, reach out in the Statsig [Slack](https://statsig.com/slack).
+:::
+
+### 1. Install and Add as a Dependency
+You can install the new PHP Core SDK using composer:
+
+```shell
+composer require statsig/statsig-core-php
+```
+
+### 2. Add Scripts & Cron Job
+
+Add post-install and post-update scripts in composer.json:
+
+```json
+"post-install-cmd": [
+    "cd vendor/statsig/statsig-core-php && php post-install.php"
+],
+"post-update-cmd": [
+    "cd vendor/statsig/statsig-core-php && php post-install.php"
+]
+```
+
+Next, you'll want to add a script to sync your Statsig configs and flush your events; see example files on Statsig's GitHub [here](https://github.com/daniel-statsig/statsig-php-core-slim-example/tree/main/bin).
+
+You'll also want to set up cron jobs to run these scripts periodically:
+
+```shell
+*/10 * * * * /usr/bin/php /var/www/example.com/bin/StatsigSyncConfig.php 1>> /dev/null 2>&1
+*/1 * * * * /usr/bin/php /var/www/example.com/bin/StatsigFlushEvents.php 1>> /dev/null 2>&1
+```
+
+Also, be sure to run the StatsigSyncConfig.php cron job at least once before proceeding.
\ No newline at end of file
diff --git a/docs/server/php-core/_logEvent.mdx b/docs/server/php-core/_logEvent.mdx
new file mode 100644
index 000000000..a6950aebe
--- /dev/null
+++ b/docs/server/php-core/_logEvent.mdx
@@ -0,0 +1,4 @@
+```php
+$user = StatsigUserBuilder::withUserID('my_user')->build();
+$statsig->logEvent($user, 'my_event');
+```
\ No newline at end of file
diff --git a/docs/server/php-core/_notes.mdx b/docs/server/php-core/_notes.mdx
new file mode 100644
index 000000000..f12fa5b62
--- /dev/null
+++ b/docs/server/php-core/_notes.mdx
@@ -0,0 +1,3 @@
+## Notes on Beta Version
+
+The PHP SDK expects an adapter to be provided for both logging and saving config specs, given the stateless nature of PHP. In [our example](https://github.com/daniel-statsig/statsig-php-core-slim-example), we've provided simple file-based adapters. More mature implementations may choose a different, more performant caching approach. If you need help setting this up, reach out to us in [Slack](https://statsig.com/slack).
\ No newline at end of file
diff --git a/docs/server/phpCoreSDK.mdx b/docs/server/phpCoreSDK.mdx
new file mode 100644
index 000000000..5c13eb2f9
--- /dev/null
+++ b/docs/server/phpCoreSDK.mdx
@@ -0,0 +1,72 @@
+---
+sidebar_label: PHP Core (Beta)
+title: PHP Core Server SDK (Beta)
+slug: /server/phpCoreSDK
+displayed_sidebar: cloud
+---
+
+import {
+  SDKDocsBuilder,
+  HOOK__SDKDocUpdate,
+} from "../sdks/_SDKDocsBuilder.mdx";
+
+import * as _ServerCore from "./_server_core.mdx";
+export const ServerCore = _ServerCore;
+
+import * as _Notes from "./php-core/_notes.mdx";
+export const Notes = _Notes;
+
+import Install from "./php-core/_install.mdx"
+import Initialize from "./php-core/_initialize.mdx"
+import CheckGate from "./php-core/_checkGate.mdx";
+import GetConfig from "./php-core/_getConfig.mdx";
+import GetExperiment from "./php-core/_getExperiment.mdx";
+import LogEvent from "./php-core/_logEvent.mdx";
+
+
+import {
+  Repository,
+  GettingStarted,
+  WorkingWith
+} from "./Templates/index.mdx";
+
+
+
+export const Builder = SDKDocsBuilder({
+  sections: [
+    [
+
+      Repository,
+      {
+        repo: "https://github.com/statsig-io/statsig-server-core",
+      },
+    ],
+    [ServerCore, {}],
+    [
+      GettingStarted,
+      {
+        sdkType: "PHP",
+        install: ,
+        skipStatsigOptionsDescription: true,
+        initialize: ,
+        skipInitializeDescription: true,
+      },
+    ],
+    [
+      WorkingWith,
+      {
+        checkGate: ,
+        getConfig: ,
+        getExperiment: ,
+        logEvent: ,
+        hideAsyncDisclaimer: true,
+      },
+    ],
+    [Notes, {}],
+  ]
+});
+
+export const toc = Builder.toc
+
+<>{Builder.result}
+
diff --git a/docs/server/python/_obClientInterface.mdx b/docs/server/python/_obClientInterface.mdx
index ce289f7da..c3ffda034 100644
--- a/docs/server/python/_obClientInterface.mdx
+++ b/docs/server/python/_obClientInterface.mdx
@@ -45,7 +45,7 @@ class ObservabilityClient:
 
     def should_enable_high_cardinality_for_this_tag(self, tag: str) -> bool:
         """
-        Determine if a high cardinality tag should be logged. See the list of high cardinality tags https://docs.statsig.com/server/concepts/sdk_monitoring#metric-tags
+        Determine if a high cardinality tag should be logged. See the list of high cardinality tags https://docs.statsig.com/sdk_monitoring/
 
         :param tag: The tag to check for high cardinality enabled.
         """
diff --git a/docs/session-replay/configure.md b/docs/session-replay/configure.md
index b01542491..ff1dd33e2 100644
--- a/docs/session-replay/configure.md
+++ b/docs/session-replay/configure.md
@@ -16,68 +16,17 @@ Click on the settings icon in the top right of the Statsig console to navigate t
 
 ![image](https://github.com/statsig-io/docs/assets/3464964/3d4fc8e2-7490-4060-87f5-3aeb5f6dff90)
 
-## Forcing a Recording on Demand
+## Advanced: Forcing a Recording on Demand
 
 You may have a use case where you need to make sure a session is recorded (based on a trigger, or a particular user that has interesting characteristics or behavior). To do this, we offer the forceStartRecording API which will begin recording as soon as you call it.
 
-
-
-
-```jsx
-import { StatsigClient } from '@statsig/js-client';
-import { runStatsigSessionReplay, SessionReplay } from '@statsig/session-replay';
-import { runStatsigAutoCapture } from '@statsig/web-analytics';
-
-const client = new StatsigClient(sdkKey,
-  { userID: "some_user_id" },
-  { environment: { tier: "production" } } // optional, pass options here if needed. Session replays are only recorded and stored if the environment is production.
-);
-runStatsigSessionReplay(client);
-runStatsigAutoCapture(client);
-await client.initializeAsync();
-
-if (someCondition) {
-  new SessionReplay(client).forceStartRecording();
-}
+If you are just getting set up, follow the installation guide on the previous page. But if there is a specific trigger in your app where you want to force recording, you can do the following:
 ```
-
-
-
-```jsx
-import { runStatsigSessionReplay, SessionReplay } from '@statsig/session-replay';
-import { runStatsigAutoCapture } from '@statsig/web-analytics';
-import { StatsigClient, StatsigProvider } from '@statsig/react-bindings';
-
-const client = new StatsigClient(sdkKey,
-  { userID: "some_user_id" },
-  { environment: { tier: "production" } } // optional, pass options here if needed. Session replays are only recorded and stored if the environment is production.
-);
-runStatsigSessionReplay(client);
-runStatsigAutoCapture(client);
-await client.initializeAsync();
-
 if (someCondition) {
   new SessionReplay(client).forceStartRecording();
 }
-
-function App() {
-  return (
-    
-      
-    
-  );
-}
 ```
-
-
-
 ## Configure Recording Privacy/PII Options
diff --git a/docs/session-replay/install.mdx b/docs/session-replay/install.mdx
index c1cb56de2..d2b7afb2d 100644
--- a/docs/session-replay/install.mdx
+++ b/docs/session-replay/install.mdx
@@ -71,7 +71,7 @@ yarn add @statsig/session-replay @statsig/web-analytics @statsig/react-bindings
 
 We recommend using autocapture as a great way to get started, but if you don’t want to automatically log and send events, you can remove the runStatsigAutoCapture option from the Javascript snippet or skip the `@statsig/web-analytics` package installation.
-Next, following the [instructions for the Statsig Javascript SDK](/client/javascript-sdk), initialize Statsig with your SDK key, [user](/client/concepts/user) and options: +Next, following the [instructions for the Statsig Javascript SDK](/client/javascript-sdk), initialize Statsig with your SDK key, [user](/server/concepts/user) and options: ```jsx -import { runStatsigSessionReplay } from '@statsig/session-replay'; -import { runStatsigAutoCapture } from '@statsig/web-analytics'; -import { StatsigClient, StatsigProvider } from '@statsig/react-bindings'; - -const client = new StatsigClient(sdkKey, - { userID: "some_user_id" }, - { environment: { tier: "production" } } // optional, pass options here if needed. Session replays are only recorded and stored if the environment is production. -); -runStatsigSessionReplay(client); -runStatsigAutoCapture(client); -await client.initializeAsync(); +import { StatsigProvider, useClientAsyncInit } from '@statsig/react-bindings'; +import { StatsigSessionReplayPlugin } from '@statsig/session-replay'; +import { StatsigAutoCapturePlugin } from '@statsig/web-analytics'; function App() { return ( - + Loading...} + options={{ + plugins: [ new StatsigSessionReplayPlugin(), new StatsigAutoCapturePlugin() ] + }}> ); @@ -126,6 +124,6 @@ function App() { -As a side effect of creating the SessionReplay, Statsig will begin recording if the session is sampled (see below). +As a side effect of creating the SessionReplay, Statsig will begin recording if the session is sampled (see Configure to learn more). That’s it! In the future, we will be adding robust ways to control other scenarios in which you may or may not want to record the session, but at this time, this is all you need to do. diff --git a/docs/stats-engine/methodologies/one-sided-test.md b/docs/stats-engine/methodologies/one-sided-test.md index 1fb0c7dc6..688efe991 100644 --- a/docs/stats-engine/methodologies/one-sided-test.md +++ b/docs/stats-engine/methodologies/one-sided-test.md @@ -19,7 +19,8 @@ One-sided tests completely disregard the possibility of detecting the metric mov When setting up an experiment and identifying metrics to measure, the default setting is to run a two-sided test. If you want to modify this, simply click on the metric name on the experiment setup screen. This will open a popup where you can modify the test type and indicate a desired direction you seek to measure. Note that our V1 doesn't support Bayesian testing yet. -![image](https://github.com/statsig-io/docs/assets/31516123/8df18328-5248-41a1-8e83-6ee0fb55031d) +![image](https://github.com/user-attachments/assets/23044f21-6249-4fc1-9895-22111bb16010) + ## How to read this diff --git a/docs/statsig-warehouse-native/connecting-your-warehouse/snowflake.md b/docs/statsig-warehouse-native/connecting-your-warehouse/snowflake.md index 5edd07daf..acd09d20f 100644 --- a/docs/statsig-warehouse-native/connecting-your-warehouse/snowflake.md +++ b/docs/statsig-warehouse-native/connecting-your-warehouse/snowflake.md @@ -104,7 +104,7 @@ BEGIN; GRANT CREATE SCHEMA, MONITOR, USAGE ON DATABASE STATSIG_STAGING TO ROLE identifier($role_name); -- ONLY GIVE THIS LEVEL OF ACCESS in the staging schema. 
- GRANT CREATE TABLE ON SCHEMA STATSIG_STAGING.STATSIG_TABLES TO ROLE identifier($role_name); + GRANT CREATE TABLE, CREATE FUNCTION ON SCHEMA STATSIG_STAGING.STATSIG_TABLES TO ROLE identifier($role_name); GRANT SELECT, UPDATE, INSERT, DELETE ON ALL TABLES IN SCHEMA STATSIG_STAGING.STATSIG_TABLES TO ROLE identifier($role_name); GRANT SELECT, UPDATE, INSERT, DELETE ON FUTURE TABLES IN SCHEMA STATSIG_STAGING.STATSIG_TABLES TO ROLE identifier($role_name); GRANT OWNERSHIP ON FUTURE TABLES IN SCHEMA STATSIG_STAGING.STATSIG_TABLES TO ROLE identifier($role_name); diff --git a/docs/statsig-warehouse-native/guides/metrics.md b/docs/statsig-warehouse-native/guides/metrics.md index 5a1035d56..9ed7004ca 100644 --- a/docs/statsig-warehouse-native/guides/metrics.md +++ b/docs/statsig-warehouse-native/guides/metrics.md @@ -2,6 +2,7 @@ title: Metrics slug: /statsig-warehouse-native/guides/metrics sidebar_label: Metrics +displayed_sidebar: cloud --- # Deprecation Notice diff --git a/docs/statsig-warehouse-native/guides/running_a_poc.mdx b/docs/statsig-warehouse-native/guides/running_a_poc.mdx index 444bba58e..5f6eefea9 100644 --- a/docs/statsig-warehouse-native/guides/running_a_poc.mdx +++ b/docs/statsig-warehouse-native/guides/running_a_poc.mdx @@ -27,7 +27,7 @@ Keep these high level steps in mind as you begin your planning your Warehouse Na - This approach can yield results for analysis in as little as **30 minutes,** assuming data is readily available for ingestion - If your team plans on utilizing the **Assign and Analyze** experimentation option, you’ll want to identify **where** the experiment will run. Typically **web based** experiments are easier to evaluate, however Statsig has SDK support for server and mobile SDKs as well. - **Note**: It’s important the implementing team understands how the SDKs operate prior to executing a proof of concept. Our [client](/client/introduction) and [server](/server/introduction) docs can help orient your team! - - A typical evaluation takes **2-4 weeks** to account for experiment design, implementation, time to bake, and analysis. To ensure a successful POC, [have a well scoped plan](/guides/running-a-poc#phase-0-scope--prepare-your-poc) and ensure the right teams are included to assist along the way. + - A typical evaluation takes **2-4 weeks** to account for experiment design, implementation, time to bake, and analysis. To ensure a successful POC, [have a well scoped plan](/guides/running-a-poc#2-phase-0-scope--prepare-your-poc) and ensure the right teams are included to assist along the way. - Read [experimentation best practices](https://statsig.com/blog/product-experimentation-best-practices) to get an idea of how to best succeed. 1. **Connect the Warehouse** - In order to query data and operate within your warehouse, you’ll need to allocate resources and connect to Statsig. You may choose to utilize an existing prod database or create a separate cluster specifically for experimentation (if you don’t already have one). diff --git a/docs/statsig-warehouse-native/metrics/normalized-metrics.md b/docs/statsig-warehouse-native/metrics/normalized-metrics.md index e3145fa2e..97e68c5a6 100644 --- a/docs/statsig-warehouse-native/metrics/normalized-metrics.md +++ b/docs/statsig-warehouse-native/metrics/normalized-metrics.md @@ -12,7 +12,7 @@ With normal A/B tests the unit of randomization (e.g. UserID) matches the unit o For example - you've added image support to a collaborative commenting feature in your product and want to A/B test it before rollout. 
You randomize it using businessID. You cannot randomize by userID, since you need everyone within a single business to either have this new feature or not. If you simply compared # of comments per businessID, this data would be skewed by large companies. A business with 1000 employees, but 10 comments would "contribute more" than a business with 5 employees who made 5 comments. Normalizing a metric in this case - is normalizing by users exposed to the experiment. In this instance if 1000 and 5 users were exposed from each business, the first business would have a comments/user rate of 0.01, while the second company would have a comments/user rate of 1. This is reasonable now to compare across companies of many different sizes. ## What it does -Under the covers, normalizing a metric simple creates a ratio metric. The numerator is metric you're normalizing. The denominator is a COUNT DISTINCT of the UnitID you're normalizing to. +Under the covers, normalizing a metric simply creates a ratio metric. The numerator is metric you're normalizing. The denominator is a COUNT DISTINCT of the UnitID you're normalizing to. If you wanted to, you could also create this ratio metric yourself and use it in experiments - this is documented [here](https://docs.statsig.com/metrics/different-id). ## How to do it diff --git a/docs/test_getting-started.mdx b/docs/test_getting-started.mdx deleted file mode 100644 index 472d08ff1..000000000 --- a/docs/test_getting-started.mdx +++ /dev/null @@ -1,356 +0,0 @@ -import Button from "@mui/material/Button"; -import OutlinedCard from "@site/src/components/OutlinedCard"; -import Card from "@mui/material/Card"; -import CardActions from "@mui/material/CardActions"; -import CardContent from "@mui/material/CardContent"; -import CardHeader from "@mui/material/CardHeader"; -import Icon from "@mui/material/Icon"; -import IconButton from "@mui/material/IconButton"; -import Link from "@mui/material/Link"; - -import LogOnClick from "@site/src/components/LogOnClick"; - -import Tabs from "@theme/Tabs"; -import TabItem from "@theme/TabItem"; - -Statsig is the world’s most advanced experimentation platform. It enables your entire company to run experiments collaboratively and quickly, with minimal engineering effort. - -There are two ways to seamlessly integrate Statsig into your product development: - -- **Statsig Cloud:** Set up ***Statsig SDK***, configure **events logging**. Everything else is handled by us. - - You get feature gates and 1 million metered events for free, as well as many analytics tools such as Dashboard, Metrics Explorer, and Insights. Here is [a link](https://www.statsig.com/pricing) to our pricing details. -- **Warehouse Native:** If the events or metrics you want to experiment on are already in your warehouse, you may want to consider [Warehouse Native](/statsig-warehouse-native/introduction) (WHN). - - With WHN, you can host Statsig’s Stats Engine within your own Data Warehouse, calculating metric lifts on your own datasets. This is particularly useful for teams with strong privacy constraints. - - You can use non-Statsig SDKs for feature assignment and provide us exposures in a table (you randomize), or use our SDKs (we randomize and write into your warehouse). The former helps you scale analysis; the latter helps you [10x experimentation velocity](https://www.statsig.com/blog/features-to-10x-experiment-velocity). - - Today’s this option is only available with Enterprise contracts. 
Check [this link](/statsig-warehouse-native/introduction) for more details or [Schedule a demo](https://www.statsig.com/contact/demo) with our sales team. - -This page gives you an organized overview of how to set up the SDK. SDK can help you log events, and assignments, which will light up **feature gate**, **experiments**, and **analytics** within half an hour. - - -# Overview: What is Statsig, and what is needed to set it up? - -Statsig provides three tightly-integrated core capabilities: - -1. **Experimentation**: Run **AB tests** with minimal engineering efforts (two lines of code), and make decisions as a team with a powerful and intuitive **Console.** -2. **Feature Gating**: **Decouple code deployment** and **feature deployment,** giving you **full control** of your users’ experiences, including the ability to rollout or rollback features in a single click. -3. **User Analytics**: Dashboards, charts, funnels, retention; from logged events to all sorts of business metrics. - -Statsig helps you generate two core outputs — **metrics** and **exposures** — metrics summarizes user behaviors that are meaningful to your business; exposure tells us, and allows us to control, what features each user is exposed to. - -**Logging Events** is the foundation for **metrics**, and **Feature assignment** is the foundation for **exposures**. Statsig has best-in-class **SDKs** built with experimentation as a first-class citizen, that once turned on, can create logging events and feature assignments to power all Statsig features. The SDK is strongly recommended for as it’s reliable, resilient, and have many experimentation best practices built in. - -*You can use Statsig without its SDK. For example, **metrics** can come from raw events table, metric definitions, or a precomputed metrics table; **assignments** can come from 3rd party tool, or in-house assignment tool. Statsig is modulated to work with other tools you are currently using.* - -Now, let’s walk through how to use Statsig SDK to turn on events logging. - -# Three Steps to Turn on Statsig Cloud - -## Step 1. Integrate with our SDK -export const ArrowButton = ({ link }) => ( - - - arrow_forward - - -); - -export const SDKCard = ({ language, image, link }) => ( - -
- {language} -
- - - - -
-); - - - -
- - - - - - - -
-
- -
- - - - - - - -
-
-
- -We also provide an HTTP API. Our API is a great choice if an SDK isn't -available for your environment yet, as you can use it in any type of -application: - -- [HTTP API](/http-api) -## Step 2. Set Up Your First Feature Gate - -Check [this page](/guides/first-feature) for a full walkthrough - -![fg_setup](https://github.com/statsig-io/docs/assets/139815787/4a320c20-c060-4dc2-a493-178f9e7855e9) - -## Step 3. Start Logging Events - -Check [this page](/guides/logging-events) for a full walkthrough - -![logging_setup](https://github.com/statsig-io/docs/assets/139815787/219f2980-6bae-418d-896a-1305d5bb52c2) - -## Hooray! - -Now, **events** are **logged** and passed to Statsig Cloud via the SDK. They will start being computed as metrics in your ***Metrics Catalog***: - -![metrics](https://github.com/statsig-io/docs/assets/139815787/f810a6e4-eca8-4ed4-be96-76f134f14397) - -Which you can add to a [***Dashboard***](/metrics): - -![dashboard](https://github.com/statsig-io/docs/assets/139815787/cd6b2f71-8ca0-4d77-8d7c-a2b081bbbc8e) - -Or even start trying to optimize via ***Experiments***: - -![experiment](https://github.com/statsig-io/docs/assets/139815787/cf61372a-9429-4594-936b-dfea825eacd9) - -# Need more help? - -Statsig strives to provide the best support possible. You can - -- Join our slack support channel for live supports: Join our slack support -- Schedule a live demo: Schedule a demo - -## Walkthrough Guides - -
- - - - - - -
- -## Tools - -
- - - - - - - - - - -
diff --git a/docs/understanding-platform.mdx b/docs/understanding-platform.mdx index 6fd571198..17fcf7f6a 100644 --- a/docs/understanding-platform.mdx +++ b/docs/understanding-platform.mdx @@ -3,6 +3,59 @@ title: Statsig Platform Overview slug: /understanding-platform --- -import Test from "./understanding-the-platform.md"; - \ No newline at end of file +Statsig offers two flexible ways to leverage its core products based on your needs: Statsig Cloud (where we host your data) and Statsig Warehouse Native (where you host your data in your own warehouse). + +--- + +## Statsig Cloud + +With Statsig Cloud, setting up is simple. Install the Statsig SDK and configure event logging—we handle everything else. + +- You get feature flags and 1 million metered events for free. +- Enjoy powerful analytics tools such as the Dashboard, Metrics Explorer, and Insights. +- For more details on the pricing, check [our pricing page](https://www.statsig.com/pricing). + +Statsig Cloud is a great choice for those who want to get started quickly without needing to manage infrastructure or data warehousing. + +--- + +## Statsig Warehouse Native (WHN) + +If your events and metrics already reside in your own data warehouse and you have a dedicated data team, Statsig Warehouse Native (WHN) may be a better option. + +- WHN allows you to host Statsig’s Stats Engine within your warehouse, enabling you to calculate metric lifts on your pre-existing datasets. +- You can choose between two methods: + 1. **Using 3rd party or your own SDKs**: You handle feature assignment and provide us exposure data (you randomize the users). + 2. **Using Statsig SDKs**: We handle randomization and write data into your warehouse for you. + +The first method helps you scale analysis, while the second can 10x your experimentation velocity. + +> Note: WHN is available only with Enterprise contracts. If you’re interested in this option, check [this link](/statsig-warehouse-native/introduction) or [schedule a demo](https://www.statsig.com/contact/demo) with our Sales team. +> + +--- + +## Which Model is Right for You? + +Below is a summary of key criteria to consider when making your decision between the two modes of deployment: + +| Criteria | Cloud-hosted | Warehouse native (WHN) | +| --- | --- | --- | +| Data Source | Primary source of metrics come from Statsig SDKs or CDPs like Segment. Some metrics can still come from a warehouse. | Warehouse is the primary source of metrics, making WHN ideal when wanting to reuse existing data pipelines and computation. | +| Analysis needs | Automated experimentation for every experiment and product launch, especially with metrics derived from event logging. | Flexible analysis on top of your existing source of truth metric data. | +| Data team involvement | Involvement is optional but recommended for experiment design and readouts. | Necessary for setting up the warehouse connection and configuring core metrics, but not mandatory for every experiment. | +| Costs | TCO is slightly lower. No warehouse costs involved. | TCO includes Statsig license + costs incurred for computation and storage in your warehouse. | +| Modularity | An integrated end-to-end platform that spans SDKs for feature rollout, experiment execution, analysis, and experiment readouts. | Modular: You can opt for the integrated end-to-end platform or choose to use only a subset of capabilities, such as assignment or experiment analysis. | + +Still unsure! 
Read this blog post for further information: [Statsig Cloud vs Warehouse Native](https://www.statsig.com/blog/deciding-cloud-hosted-versus-warehouse-native-experimentation-platforms). + +## Next steps +Once you've decided whether Statsig Cloud or Statsig Warehouse Native fits your organization’s needs, choose the appropriate *getting started* guide for your first use case: + +- [Getting Started with Statsig Cloud](/sdks/getting-started) +- [Getting Started with Statsig Warehouse Native](/statsig-warehouse-native/guides/quick-start) + +:::info +Have a question or need help getting set up? Our Engineering, Data, and Product teams are ready to answer questions in our [Slack community](https://www.statsig.com/slack). +::: diff --git a/docs/understanding-the-platform.md b/docs/understanding-the-platform.md deleted file mode 100644 index aea705b8d..000000000 --- a/docs/understanding-the-platform.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -sidebar_label: Getting Started -title: Getting Started ---- - -Statsig offers two flexible ways to leverage its core products based on your needs: Statsig Cloud (where we host your data) and Statsig Warehouse Native (where you host your data in your own warehouse). - ---- - -## Statsig Cloud - -With Statsig Cloud, setting up is simple. Install the Statsig SDK and configure event logging—we handle everything else. - -- You get feature flags and 1 million metered events for free. -- Enjoy powerful analytics tools such as the Dashboard, Metrics Explorer, and Insights. -- For more details on the pricing, check [our pricing page](https://www.statsig.com/pricing). - -Statsig Cloud is a great choice for those who want to get started quickly without needing to manage infrastructure or data warehousing. - ---- - -## Statsig Warehouse Native (WHN) - -If your events and metrics already reside in your own data warehouse and you have a dedicated data team, Statsig Warehouse Native (WHN) may be a better option. - -- WHN allows you to host Statsig’s Stats Engine within your warehouse, enabling you to calculate metric lifts on your pre-existing datasets. -- You can choose between two methods: - 1. **Using 3rd party or your own SDKs**: You handle feature assignment and provide us exposure data (you randomize the users). - 2. **Using Statsig SDKs**: We handle randomization and write data into your warehouse for you. - -The first method helps you scale analysis, while the second can 10x your experimentation velocity. - -> Note: WHN is available only with Enterprise contracts. If you’re interested in this option, check [this link](/statsig-warehouse-native/introduction) or [schedule a demo](https://www.statsig.com/contact/demo) with our Sales team. -> - ---- - -## Which Model is Right for You? - -Below is a summary of key criteria to consider when making your decision between the two modes of deployment: - -| Criteria | Cloud-hosted | Warehouse native (WHN) | -| --- | --- | --- | -| Data Source | Primary source of metrics come from Statsig SDKs or CDPs like Segment. Some metrics can still come from a warehouse. | Warehouse is the primary source of metrics, making WHN ideal when wanting to reuse existing data pipelines and computation. | -| Analysis needs | Automated experimentation for every experiment and product launch, especially with metrics derived from event logging. | Flexible analysis on top of your existing source of truth metric data. | -| Data team involvement | Involvement is optional but recommended for experiment design and readouts. 
| Necessary for setting up the warehouse connection and configuring core metrics, but not mandatory for every experiment. | -| Costs | TCO is slightly lower. No warehouse costs involved. | TCO includes Statsig license + costs incurred for computation and storage in your warehouse. | -| Modularity | An integrated end-to-end platform that spans SDKs for feature rollout, experiment execution, analysis, and experiment readouts. | Modular: You can opt for the integrated end-to-end platform or choose to use only a subset of capabilities, such as assignment or experiment analysis. | - -Still unsure! Read this blog post for further information: [Statsig Cloud vs Warehouse Native](https://www.statsig.com/blog/deciding-cloud-hosted-versus-warehouse-native-experimentation-platforms). - -## Next steps -Once you've decided whether Statsig Cloud or Statsig Warehouse Native fits your organization’s needs, choose the appropriate *getting started* guide for your first use case: - -- [Getting Started with Statsig Cloud](/sdks/getting-started.md) -- [Getting Started with Statsig Warehouse Native](/statsig-warehouse-native/guides/quick-start) - -:::info -Have a question or need help getting set up? Our Engineering, Data, and Product teams are ready to answer questions in our [Slack community](https://www.statsig.com/slack). -::: diff --git a/docusaurus.config.ts b/docusaurus.config.ts index 0b0f8ed14..5c1e568e6 100644 --- a/docusaurus.config.ts +++ b/docusaurus.config.ts @@ -144,6 +144,30 @@ const config: Config = { "@docusaurus/plugin-client-redirects", { redirects: [ + { + from: "/client/concepts/bootstrapping", + to: "/sdk_monitoring/", + }, + { + from: "/client/concepts/bootstrapping", + to: "/client/concepts/initialize/#2-bootstrap-initialization", + }, + { + from: "/experiments-plus/experimentation/why-experiment", + to: "/experiments-plus#why-experiment", + }, + { + from: "/experiments-plus/experimentation/scenarios", + to: "/experiments-plus#scenarios-for-experimentation", + }, + { + from: "/experiments-plus/experimentation/common-terms", + to: "/experiments-plus#key-concepts-in-experimentation", + }, + { + from: "/experiments-plus/experimentation/choosing-randomization-unit", + to: "/experiments-plus#choosing-the-right-randomization-unit", + }, { from: "/js-migration", to: "/client/javascript-sdk/migrating-from-statsig-js", @@ -319,7 +343,7 @@ const config: Config = { { to: "/experiments-plus/stop-assignments", from: "/experiments-plus/pause-assignment", - }, + } ], }, ], @@ -363,7 +387,7 @@ const config: Config = { // searchPagePath: 'search', // // Optional: whether the insights feature is enabled or not on Docsearch (`false` by default) - // insights: false, + insights: true, }, navbar: { title: "", diff --git a/sidebars.ts b/sidebars.ts index ce4951011..f72e8ba0e 100644 --- a/sidebars.ts +++ b/sidebars.ts @@ -35,6 +35,7 @@ const sidebars: SidebarsConfig = { className: "lightbulb-icon sidebar-icon", items: [ "understanding-platform", + "client/concepts/parameter-stores", "guides/first-device-level-experiment", "guides/experiment-on-custom-id-types", "guides/using-environments", @@ -55,26 +56,10 @@ const sidebars: SidebarsConfig = { className: "doc-icon sidebar-icon", items: [ "sdks/getting-started", - { - Concepts: [ - "sdks/client-vs-server", - "client/concepts/user", - "server/concepts/monitoring", - "sdks/debugging", - "client/concepts/initialize", - "client/concepts/bootstrapping", - "client/concepts/persistent_assignment", - "client/concepts/parameter-stores", - 
"messages/serverRequiredUserID", - "server/concepts/user", - "server/concepts/data_store", - "server/concepts/persistent_assignment", - "server/concepts/all_assignments", - "sdk-keys/api-keys", - "sdk-keys/target-apps", - "server/deprecation-notices", - ], - }, + "sdks/client-vs-server", + "server/concepts/user", + "client/concepts/initialize", + "sdks/debugging", { className: "html-icon sidebar-icon sdk-sidebar-icon", type: "doc", @@ -242,16 +227,7 @@ const sidebars: SidebarsConfig = { ] }, - { - type: "category", - label: "Other Frameworks", - items: [ - "guides/node-express-feature-flags", - "guides/node-express-abtests", - "guides/python-flask-feature-flags", - "guides/python-flask-abtests", - ], - }, + { type: "category", label: "Azure AI", @@ -268,6 +244,28 @@ const sidebars: SidebarsConfig = { "azureai/running-experiments", ], }, + { + type: "category", + label: "Advanced SDK Methods", + items: [ + { + type: "category", + label: "Other Frameworks", + items: [ + "guides/node-express-feature-flags", + "guides/node-express-abtests", + "guides/python-flask-feature-flags", + "guides/python-flask-abtests", + ], + }, + "client/concepts/persistent_assignment", + "server/concepts/persistent_assignment", + "server/concepts/data_store", + "sdk-keys/target-apps", + + ] + }, + "server/deprecation-notices", ], }, { @@ -939,6 +937,7 @@ const sidebars: SidebarsConfig = { label: "Workspace Management", items: [ "access-management/introduction", + "sdk-keys/api-keys", { Workspace: [ "access-management/organizations", @@ -1052,7 +1051,11 @@ const sidebars: SidebarsConfig = { ], }, { - Reliability: ["infrastructure/reliability-faq", "guides/uptime"], + Reliability: [ + "infrastructure/reliability-faq", + "guides/uptime", + "infrastructure/monitoring", + ], }, ], }, diff --git a/src/components/getting-started/SDKAndFrameworks.jsx b/src/components/getting-started/SDKAndFrameworks.jsx index 65ff211fb..3bd34f8dd 100644 --- a/src/components/getting-started/SDKAndFrameworks.jsx +++ b/src/components/getting-started/SDKAndFrameworks.jsx @@ -9,6 +9,7 @@ const sdkGroups = [ { name: 'React', img: '/img/sdk/sdk_react.png', link: '/client/javascript-sdk/react' }, { name: 'React Native', img: '/img/sdk/sdk_rn.png', link: '/client/javascript-sdk/react-native' }, { name: 'Next.js', img: '/img/sdk/sdk_nextjs.svg', link: '/client/javascript-sdk/next-js' }, + { name: 'Angular', img: '/img/sdk/sdk_angular.png', link: '/client/javascript-sdk/Angular' }, { name: 'Swift', img: '/img/sdk/sdk_swift.png', link: '/client/iosClientSDK' }, { name: 'Android', img: '/img/sdk/sdk_android.png', link: '/client/androidClientSDK' }, { name: '.NET Client', img: '/img/sdk/sdk_dotnet.png', link: '/client/dotnetSDK' }, @@ -50,7 +51,7 @@ const sdkGroups = [ const SDKItem = ({ name, img, link }) => { const handleClick = () => { - window.__STATSIG__.instance().logEvent({ + window.Statsig.instance().logEvent({ eventName: 'sdk_click', value: name }); diff --git a/src/css/custom.css b/src/css/custom.css index 9708443c1..57a36edba 100644 --- a/src/css/custom.css +++ b/src/css/custom.css @@ -44,14 +44,14 @@ a { /* You can override the default Infima variables here. 
 */
:root {
-  --ifm-color-primary: #0068b3;
+  --ifm-color-primary: #1b63d2;
   --ifm-color-primary-dark: #005693;
   --ifm-color-primary-darker: #0069b3;
   --ifm-color-primary-darkest: #006fbd;
   --ifm-color-primary-light: #0087e7;
   --ifm-color-primary-lighter: #008df1;
   --ifm-color-primary-lightest: #129dff;
-  --ifm-link-color: #0068b3;
+  --ifm-link-color: #1b63d2;
   --ifm-code-font-size: 95%;
   --ifm-font-family-base: "Inter", -apple-system, BlinkMacSystemFont, "Segoe UI",
     "Roboto", "Helvetica Neue", "Ubuntu", sans-serif;
@@ -561,7 +561,7 @@ span.math.math-inline {
   padding-left:24px;
 }
 
-[data-theme="dark"] .sidebar-icon::before {
+[data-theme="dark"] .sidebar-icon:not(.sdk-sidebar-icon)::before {
   filter: invert(100%) sepia(0%) saturate(0%) hue-rotate(0deg) brightness(0%)
     contrast(0%);
 }
diff --git a/src/theme/Layout/index.js b/src/theme/Layout/index.js
new file mode 100644
index 000000000..8de5ec7cd
--- /dev/null
+++ b/src/theme/Layout/index.js
@@ -0,0 +1,20 @@
+import React, { useEffect, useLayoutEffect } from 'react';
+import Layout from '@theme-original/Layout';
+
+export default function CustomLayout(props) {
+  useEffect(() => {
+
+    console.log('Page loaded!');
+  }, []);
+  useLayoutEffect(() => {
+    // Wait a small tick to ensure DOM is fully rendered
+    setTimeout(() => {
+      const hasSidebar = document.querySelector('.theme-doc-sidebar-container');
+      if (!hasSidebar) {
+        console.log("doesn't have sidebar")
+        Statsig.instance().logEvent('NoSidebarPageLoad', window.location.href);
+      }
+    }, 0);
+  }, []);
+  return <Layout {...props} />;
+}
\ No newline at end of file
diff --git a/src/theme/NotFound/Content/index.tsx b/src/theme/NotFound/Content/index.tsx
new file mode 100644
index 000000000..b4ca1ba00
--- /dev/null
+++ b/src/theme/NotFound/Content/index.tsx
@@ -0,0 +1,50 @@
+import React, { useEffect } from 'react';
+import clsx from 'clsx';
+import Translate from '@docusaurus/Translate';
+import type {Props} from '@theme/NotFound/Content';
+import Heading from '@theme/Heading';
+
+declare const Statsig: {
+  instance: () => {
+    logEvent: (eventName: string, value: string) => void;
+  };
+};
+
+export default function NotFoundContent({className}: Props): JSX.Element {
+  useEffect(() => {
+    try {
+      Statsig.instance().logEvent('PageNotFound', window.location.href);
+    } catch (error) {
+    }
+  }, []);
+  return (
+    <main className={clsx('container margin-vert--xl', className)}>
+      <div className="row">
+        <div className="col col--6 col--offset-3">
+          <Heading as="h1" className="hero__title">
+            <Translate
+              id="theme.NotFound.title"
+              description="The title of the 404 page">
+              Page Not Found
+            </Translate>
+          </Heading>
+          <p>
+            <Translate
+              id="theme.NotFound.p1"
+              description="The first paragraph of the 404 page">
+              We could not find what you were looking for.
+            </Translate>
+          </p>
+          <p>
+            <Translate
+              id="theme.NotFound.p2"
+              description="The 2nd paragraph of the 404 page">
+              Please contact the owner of the site that linked you to the
+              original URL and let them know their link is broken.
+            </Translate>
+          </p>
+        </div>
+      </div>
+    </main>
+ ); +} diff --git a/src/theme/NotFound/index.tsx b/src/theme/NotFound/index.tsx new file mode 100644 index 000000000..9eb661dd6 --- /dev/null +++ b/src/theme/NotFound/index.tsx @@ -0,0 +1,21 @@ +import React from 'react'; +import {translate} from '@docusaurus/Translate'; +import {PageMetadata} from '@docusaurus/theme-common'; +import Layout from '@theme/Layout'; +import NotFoundContent from '@theme/NotFound/Content'; + +export default function Index(): JSX.Element { + + const title = translate({ + id: 'theme.NotFound.title', + message: 'Page Not Found', + }); + return ( + <> + + + + + + ); +} diff --git a/static/img/param_stores.gif b/static/img/param_stores.gif new file mode 100644 index 000000000..0bf9cb239 Binary files /dev/null and b/static/img/param_stores.gif differ diff --git a/static/img/param_stores_mapping.png b/static/img/param_stores_mapping.png new file mode 100644 index 000000000..006395b15 Binary files /dev/null and b/static/img/param_stores_mapping.png differ diff --git a/static/img/sdk/sdk_golang.png b/static/img/sdk/sdk_golang.png index 733a2e7e2..d6d98c3bd 100644 Binary files a/static/img/sdk/sdk_golang.png and b/static/img/sdk/sdk_golang.png differ diff --git a/static/img/sdk/sdk_html.png b/static/img/sdk/sdk_html.png index c4f16df23..81d1fcbac 100644 Binary files a/static/img/sdk/sdk_html.png and b/static/img/sdk/sdk_html.png differ diff --git a/static/img/sidecar2ndaction.png b/static/img/sidecar2ndaction.png index 08235bffb..282bf1fcd 100644 Binary files a/static/img/sidecar2ndaction.png and b/static/img/sidecar2ndaction.png differ diff --git a/static/img/sidecaraddaction.png b/static/img/sidecaraddaction.png index 191032914..4c6322cb1 100644 Binary files a/static/img/sidecaraddaction.png and b/static/img/sidecaraddaction.png differ diff --git a/static/img/sidecarconsole.png b/static/img/sidecarconsole.png index 95d5b96af..6fdfd3b53 100644 Binary files a/static/img/sidecarconsole.png and b/static/img/sidecarconsole.png differ diff --git a/static/img/sidecarempty.png b/static/img/sidecarempty.png new file mode 100644 index 000000000..2c12cefc6 Binary files /dev/null and b/static/img/sidecarempty.png differ diff --git a/static/img/sidecarfull.png b/static/img/sidecarfull.png new file mode 100644 index 000000000..73147c727 Binary files /dev/null and b/static/img/sidecarfull.png differ diff --git a/static/img/sidecargetscript.png b/static/img/sidecargetscript.png index 9b9dc138b..43b6e9047 100644 Binary files a/static/img/sidecargetscript.png and b/static/img/sidecargetscript.png differ diff --git a/static/img/sidecarpath.png b/static/img/sidecarpath.png new file mode 100644 index 000000000..2f8b1997b Binary files /dev/null and b/static/img/sidecarpath.png differ diff --git a/static/img/sidecarqa.png b/static/img/sidecarqa.png index 8fe5345a4..e2ad8fb78 100644 Binary files a/static/img/sidecarqa.png and b/static/img/sidecarqa.png differ diff --git a/static/img/sidecarredirect.png b/static/img/sidecarredirect.png index 37f7be56f..0990aa97b 100644 Binary files a/static/img/sidecarredirect.png and b/static/img/sidecarredirect.png differ diff --git a/static/img/sidecarselect.png b/static/img/sidecarselect.png new file mode 100644 index 000000000..20ab021a7 Binary files /dev/null and b/static/img/sidecarselect.png differ diff --git a/static/img/sidecarsettings.png b/static/img/sidecarsettings.png new file mode 100644 index 000000000..7f507e4ca Binary files /dev/null and b/static/img/sidecarsettings.png differ diff --git a/static/img/sidecarstartexp.png 
b/static/img/sidecarstartexp.png index d170b123c..d92923788 100644 Binary files a/static/img/sidecarstartexp.png and b/static/img/sidecarstartexp.png differ diff --git a/static/img/sidecaruls.png b/static/img/sidecaruls.png deleted file mode 100644 index 9fc5e7b93..000000000 Binary files a/static/img/sidecaruls.png and /dev/null differ diff --git a/static/img/sidecarupdatelt.png b/static/img/sidecarupdatelt.png index 184aca8b4..6da3a6177 100644 Binary files a/static/img/sidecarupdatelt.png and b/static/img/sidecarupdatelt.png differ diff --git a/static/img/sidecarurls.png b/static/img/sidecarurls.png new file mode 100644 index 000000000..de413324f Binary files /dev/null and b/static/img/sidecarurls.png differ