diff --git a/docs/source/_static/css/custom_tobi.css b/docs/source/_static/css/custom_tobi.css
index b5e47ac15..e6597db65 100644
--- a/docs/source/_static/css/custom_tobi.css
+++ b/docs/source/_static/css/custom_tobi.css
@@ -3,7 +3,6 @@ div.prompt {
   display: none;
 }
 
-
 /* Classes for the index page. */
 .index-card-image {
   padding-top: 1rem;
diff --git a/docs/source/explanations/index.md b/docs/source/explanations/index.md
index ba920dd6b..84c609889 100644
--- a/docs/source/explanations/index.md
+++ b/docs/source/explanations/index.md
@@ -64,61 +64,8 @@ Learn how to calculate different types of standard errors and do sensitivity ana
 ````
-
 `````
-
-
 ```{toctree}
 ---
 hidden: true
diff --git a/docs/source/explanations/inference/bootstrap_montecarlo_comparison.ipynb b/docs/source/explanations/inference/bootstrap_montecarlo_comparison.ipynb
index 6e50222fb..8162fc476 100644
--- a/docs/source/explanations/inference/bootstrap_montecarlo_comparison.ipynb
+++ b/docs/source/explanations/inference/bootstrap_montecarlo_comparison.ipynb
@@ -16,21 +16,6 @@
     "The main idea is to repeatedly draw clustered samples, get both uniform and clustered bootstrap estimates in these samples, and then compare how often the true null hypothesis is rejected."
    ]
   },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "import estimagic as em\n",
-    "import matplotlib.pyplot as plt\n",
-    "import numpy as np\n",
-    "import pandas as pd\n",
-    "import scipy\n",
-    "import statsmodels.api as sm\n",
-    "from joblib import Parallel, delayed"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -51,6 +36,21 @@
     "In the simulations we perform below, we have $\beta_0 = \beta_1 =0$. $x_i$ and $x_g$ are drawn from a standard normal distribution, and $\epsilon_i$ and $\epsilon_g$ are drawn from a normal distribution with $\mu_0$ and $\sigma=0.5$. The value of $\sigma$ is chosen to not blow up rejection rates in the independent case too much."
    ]
   },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import estimagic as em\n",
+    "import matplotlib.pyplot as plt\n",
+    "import numpy as np\n",
+    "import pandas as pd\n",
+    "import scipy\n",
+    "import statsmodels.api as sm\n",
+    "from joblib import Parallel, delayed"
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 2,
diff --git a/docs/source/getting_started/index.md b/docs/source/getting_started/index.md
index fae9cd96d..126e4bae7 100644
--- a/docs/source/getting_started/index.md
+++ b/docs/source/getting_started/index.md
@@ -105,89 +105,6 @@ Collection of tutorials, talks, and screencasts on estimagic.
 `````
-
-
 ```{toctree}
 ---
 hidden: true
diff --git a/docs/source/how_to_guides/index.md b/docs/source/how_to_guides/index.md
index 06e86dd81..21f5ca742 100644
--- a/docs/source/how_to_guides/index.md
+++ b/docs/source/how_to_guides/index.md
@@ -104,91 +104,8 @@ Collection of tutorials, talks, and screencasts on estimagic.
 ````
-
 `````
-
-
 ```{toctree}
 ---
 hidden: true
diff --git a/docs/source/index.md b/docs/source/index.md
index fe1624c68..d28cd6562 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -17,7 +17,7 @@ provides functionality to perform statistical inference on estimated parameters.
 
 For a complete introduction to optimization in estimagic, check out the {ref}`estimagic_scipy2022`
 
-If you want to know more about estimagic, dive into one of the following topics
+If you want to learn more about estimagic, dive into one of the following topics
 
 `````{grid} 1 2 2 2
 ---
@@ -160,120 +160,6 @@ Collection of tutorials, talks, and screencasts on estimagic.
 `````
-
-
 ```{toctree}
 ---
 hidden: true
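
The notebook hunk above moves the import cell but only quotes the prose describing the simulation; the model and the simulation code themselves are outside this excerpt. As a rough, self-contained illustration of the comparison that prose describes (repeatedly draw clustered samples, run a uniform and a clustered bootstrap test of the true null $\beta_1 = 0$, and count rejections), here is a minimal sketch. It is not part of the patch and does not use estimagic's bootstrap API; the data generating process (additive individual- and group-level components for $x$ and $\epsilon$) and all function names are illustrative assumptions, with only $\beta_0 = \beta_1 = 0$ and $\sigma = 0.5$ taken from the quoted cell text.

```python
# Illustrative sketch only -- not the notebook's actual code and not estimagic's API.
# Assumed DGP: x and eps each have an individual- and a group-level component;
# beta_0 = beta_1 = 0 and sigma = 0.5 are taken from the markdown cell quoted above.
import numpy as np


def draw_clustered_sample(rng, n_clusters=20, cluster_size=20, sigma=0.5):
    """Draw one sample in which observations are correlated within clusters."""
    g = np.repeat(np.arange(n_clusters), cluster_size)
    x = rng.normal(size=g.size) + rng.normal(size=n_clusters)[g]
    eps = rng.normal(scale=sigma, size=g.size) + rng.normal(scale=sigma, size=n_clusters)[g]
    y = 0.0 + 0.0 * x + eps  # the null hypothesis beta_1 = 0 is true by construction
    return y, x, g


def ols_slope(y, x):
    """Slope coefficient of a simple OLS regression of y on x with an intercept."""
    x_dm = x - x.mean()
    return x_dm @ (y - y.mean()) / (x_dm @ x_dm)


def bootstrap_rejects(y, x, g, rng, clustered, n_draws=200):
    """True if a bootstrap t-test rejects beta_1 = 0 at the 5 percent level."""
    clusters = np.unique(g)
    slopes = np.empty(n_draws)
    for i in range(n_draws):
        if clustered:
            # Clustered bootstrap: resample whole clusters with replacement.
            drawn = rng.choice(clusters, size=clusters.size, replace=True)
            idx = np.concatenate([np.flatnonzero(g == c) for c in drawn])
        else:
            # Uniform bootstrap: resample individual observations with replacement.
            idx = rng.integers(0, y.size, size=y.size)
        slopes[i] = ols_slope(y[idx], x[idx])
    t_stat = ols_slope(y, x) / slopes.std(ddof=1)
    return abs(t_stat) > 1.96


rng = np.random.default_rng(0)
n_simulations = 100
rejections = np.zeros((n_simulations, 2))
for s in range(n_simulations):
    y, x, g = draw_clustered_sample(rng)
    rejections[s] = [
        bootstrap_rejects(y, x, g, rng, clustered=False),
        bootstrap_rejects(y, x, g, rng, clustered=True),
    ]
# With clustered data, the uniform bootstrap should over-reject the true null,
# while the clustered bootstrap should stay close to the nominal 5 percent.
print("rejection rates (uniform, clustered):", rejections.mean(axis=0))
```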