From 2d00b2f5d3e808eeab1194b9dd12ddd4963a49a9 Mon Sep 17 00:00:00 2001
From: tore-statsig <74584483+tore-statsig@users.noreply.github.com>
Date: Thu, 11 Jul 2024 13:32:03 -0700
Subject: [PATCH] Update faq.md

---
 docs/pulse/faq.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/pulse/faq.md b/docs/pulse/faq.md
index 536eb8f83..8258dd3d1 100644
--- a/docs/pulse/faq.md
+++ b/docs/pulse/faq.md
@@ -14,7 +14,7 @@ There's a number of reasons this can happen:
 
 - Random noise, which gets diluted as your sample size gets larger
 - Within-week seasonality (e.g. an effect is different on Mondays), which gets normalized with more data
-- The population that saw the experiment early early on is somehow different than the slower adopters. This happens frequently - a daily user will likely see your experiment before someone who users your product once a month. You can look at the time series view to get more insight on this
+- The population that saw the experiment early on is somehow different than the slower adopters. This happens frequently - a daily user will likely see your experiment before someone who uses your product once a month. You can look at the time series view to get more insight on this
 - There was some sort of novelty effect that made the experiment meaningful early on, but fall off. Imagine changing a button - people might click on it early out of curiosity or novelty, but once that effect goes away they'll behave like before. You can use the days-since-exposure view to get more insight on this
 
 Best practice for timing is to pick a readout date when you launch your experiment (based on a [power analysis](/experiments-plus/power-analysis)), and to disregard the statistical interpretation of results until then. This is because reading results multiple times before then dramatically increases the rate at which you'll get false positives.