
deletes and redirects #2443

Merged · 7 commits · Dec 26, 2024

7 changes: 0 additions & 7 deletions docs/experiments-plus/experimentation/best-practices.md

This file was deleted.

13 changes: 0 additions & 13 deletions docs/experiments-plus/experimentation/common-terms.md

This file was deleted.

24 changes: 0 additions & 24 deletions docs/experiments-plus/experimentation/scenarios.md

This file was deleted.

11 changes: 0 additions & 11 deletions docs/experiments-plus/experimentation/why-experiment.md

This file was deleted.

2 changes: 1 addition & 1 deletion docs/integrations/data-connectors/segment.mdx
@@ -93,7 +93,7 @@ If you are unable to connect to Segment via OAuth, you can still manually connec
![](https://user-images.githubusercontent.com/1315028/150830169-17564060-816b-4c5c-ade9-10bf6274265a.png)

## Working with Users
-Statsig will join incoming user identifiers to whichever [unit of randomization](/experiments-plus/experimentation/choosing-randomization-unit) you choose. This allows you to be flexible with your experimentation and enables testing on known (userID) and unknown (anonymousID) traffic as well as any custom identifiers your team may have (deviceID, companyID, vehicleID, etc).
+Statsig will join incoming user identifiers to whichever [unit of randomization](/experiments-plus#choosing-the-right-randomization-unit) you choose. This allows you to be flexible with your experimentation and enables testing on known (userID) and unknown (anonymousID) traffic as well as any custom identifiers your team may have (deviceID, companyID, vehicleID, etc).

### User IDs and Custom IDs

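For context on the "Working with Users" hunk above: Statsig joins whatever identifiers arrive on an event to the experiment's unit of randomization. The sketch below only illustrates that idea and assumes the statsig-js client SDK; the SDK key, ID values, and event name are placeholders, not code from this PR.

```ts
// Illustrative sketch (assumes statsig-js): a user carrying a known userID
// plus custom identifiers, so logged events can be joined to whichever
// unit of randomization the experiment uses.
import statsig from "statsig-js";

const user = {
  userID: "user-123",          // known (logged-in) traffic
  customIDs: {
    deviceID: "device-abc",    // stable identifier usable for logged-out traffic
    companyID: "company-42",   // any other team-specific identifier
  },
};

async function main() {
  await statsig.initialize("client-YOUR_SDK_KEY", user);
  statsig.logEvent("page_view"); // joined to the chosen unit during analysis
  statsig.shutdown();
}

main().catch(console.error);
```
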
2 changes: 1 addition & 1 deletion docs/metrics/different-id.md
@@ -10,7 +10,7 @@ There are two common scenarios where the experiment assignment unit differs from

1. Measuring session-level metrics for a user-level experiment. Ratio metrics are commonly used to solve this (this doc).
2. Measuring logged-in metrics (eg. revenue) on a logged-out experiment. There are two solutions:
-a. Running the experiment at the [device-level](/experiments-plus/experimentation/choosing-randomization-unit#other-stable-identifiers), with device-level metrics collected even after the user is logged-in.
+a. Running the experiment at the [device-level](/guides/first-device-level-experiment), with device-level metrics collected even after the user is logged-in.
b. Using [ID resolution](/statsig-warehouse-native/features/id-resolution).

We will explain how to set up the first scenario with Warehouse Native in this doc.
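
As a quick illustration of the ratio-metric approach referenced in this hunk: session-level data is first rolled up per user (the assignment unit), and the group-level metric is the ratio of the summed numerator to the summed denominator. The sketch below uses invented data shapes purely to show the rollup; it is not code from the docs or this PR.

```ts
// Hypothetical sketch: rolling a session-level metric up to the user-level
// assignment unit as a ratio metric (sum of numerators / sum of denominators).
// The per-user rollup is what variance calculations (e.g. the delta method)
// operate on, even though the point estimate equals a simple row-level ratio.
type SessionRow = { userId: string; converted: boolean };

function sessionConversionRatio(rows: SessionRow[]): number {
  // Aggregate per user first so the unit of analysis matches the unit of assignment.
  const perUser = new Map<string, { converted: number; sessions: number }>();
  for (const row of rows) {
    const agg = perUser.get(row.userId) ?? { converted: 0, sessions: 0 };
    agg.converted += row.converted ? 1 : 0;
    agg.sessions += 1;
    perUser.set(row.userId, agg);
  }

  // Group-level ratio metric: total converted sessions / total sessions.
  let numerator = 0;
  let denominator = 0;
  for (const agg of perUser.values()) {
    numerator += agg.converted;
    denominator += agg.sessions;
  }
  return denominator === 0 ? 0 : numerator / denominator;
}

// Example: two users, three sessions, one conversion => 1 / 3.
console.log(
  sessionConversionRatio([
    { userId: "u1", converted: true },
    { userId: "u1", converted: false },
    { userId: "u2", converted: false },
  ])
);
```
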
2 changes: 1 addition & 1 deletion docs/statsig-warehouse-native/guides/running_a_poc.mdx
@@ -27,7 +27,7 @@ Keep these high level steps in mind as you begin your planning your Warehouse Na
- This approach can yield results for analysis in as little as **30 minutes,** assuming data is readily available for ingestion
- If your team plans on utilizing the **Assign and Analyze** experimentation option, you’ll want to identify **where** the experiment will run. Typically **web based** experiments are easier to evaluate, however Statsig has SDK support for server and mobile SDKs as well.
- **Note**: It’s important the implementing team understands how the SDKs operate prior to executing a proof of concept. Our [client](/client/introduction) and [server](/server/introduction) docs can help orient your team!
-- A typical evaluation takes **2-4 weeks** to account for experiment design, implementation, time to bake, and analysis. To ensure a successful POC, [have a well scoped plan](/guides/running-a-poc#phase-0-scope--prepare-your-poc) and ensure the right teams are included to assist along the way.
+- A typical evaluation takes **2-4 weeks** to account for experiment design, implementation, time to bake, and analysis. To ensure a successful POC, [have a well scoped plan](/guides/running-a-poc#2-phase-0-scope--prepare-your-poc) and ensure the right teams are included to assist along the way.
- Read [experimentation best practices](https://statsig.com/blog/product-experimentation-best-practices) to get an idea of how to best succeed.

1. **Connect the Warehouse** - In order to query data and operate within your warehouse, you’ll need to allocate resources and connect to Statsig. You may choose to utilize an existing prod database or create a separate cluster specifically for experimentation (if you don’t already have one).
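
To make the SDK note in the hunk above concrete, here is a rough sketch of a minimal client-side experiment check. It assumes the statsig-js client SDK; the SDK key, experiment name, and parameter name are placeholders, not values from this PR or the docs.

```ts
// Hedged sketch assuming statsig-js; keys and names are placeholders.
import statsig from "statsig-js";

async function run() {
  await statsig.initialize("client-YOUR_SDK_KEY", { userID: "user-123" });

  // Pull the assigned parameter value for this user.
  const experiment = statsig.getExperiment("poc_experiment");
  const variant = experiment.get("variant_name", "control");
  console.log("assigned variant:", variant);

  // Log a custom event to analyze against the experiment.
  statsig.logEvent("poc_checkout_clicked");
  statsig.shutdown();
}

run().catch(console.error);
```
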
16 changes: 16 additions & 0 deletions docusaurus.config.ts
@@ -144,6 +144,22 @@ const config: Config = {
"@docusaurus/plugin-client-redirects",
{
redirects: [
{
from: "/experiments-plus/experimentation/why-experiment",
to: "/experiments-plus#why-experiment",
},
{
from: "/experiments-plus/experimentation/scenarios",
to: "/experiments-plus#scenarios-for-experimentation",
},
{
from: "/experiments-plus/experimentation/common-terms",
to: "/experiments-plus#key-concepts-in-experimentation",
},
{
from: "/experiments-plus/experimentation/choosing-randomization-unit",
to: "/experiments-plus#choosing-the-right-randomization-unit",
},
{
from: "/js-migration",
to: "/client/javascript-sdk/migrating-from-statsig-js",
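
For readers unfamiliar with Docusaurus: the entries added above live inside the `@docusaurus/plugin-client-redirects` options in `docusaurus.config.ts`, which generates client-side redirect pages at build time so links to the deleted experimentation pages keep resolving. Below is a trimmed sketch of the surrounding structure; the title, url, and baseUrl values are placeholders, and only one of the new redirects is shown.

```ts
// Abbreviated sketch of a docusaurus.config.ts, not the full file from this PR.
import type { Config } from "@docusaurus/types";

const config: Config = {
  title: "Docs",                    // placeholder values; other fields omitted
  url: "https://docs.example.com",
  baseUrl: "/",
  plugins: [
    [
      "@docusaurus/plugin-client-redirects",
      {
        redirects: [
          {
            from: "/experiments-plus/experimentation/why-experiment",
            to: "/experiments-plus#why-experiment",
          },
          // ...the other redirects added in this PR, plus existing ones
        ],
      },
    ],
  ],
};

export default config;
```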