From 5740fcf05f98d643df1e044981b06e8e365b26c3 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Paul-Christian=20B=C3=BCrkner?=
Date: Mon, 25 Nov 2024 13:57:06 +0100
Subject: [PATCH] more edits on the SIR notebook

---
 examples/SIR_PosteriorEstimation.ipynb | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/examples/SIR_PosteriorEstimation.ipynb b/examples/SIR_PosteriorEstimation.ipynb
index 28d0e81f4..c2c062dcc 100644
--- a/examples/SIR_PosteriorEstimation.ipynb
+++ b/examples/SIR_PosteriorEstimation.ipynb
@@ -14,6 +14,7 @@
    "metadata": {},
    "source": [
     "## Table of Contents\n",
+    "\n",
     " * [Introduction](#introduction)\n",
     " * [Defining the Generative Model](#defining_the_generative)\n",
     "\t * [Prior](#prior)\n",
@@ -816,6 +817,14 @@
     "f = bf.diagnostics.plot_recovery(samples[\"parameters\"], test_sims[\"parameters\"])"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "e64f683b",
+   "metadata": {},
+   "source": [
+    "Interestingly, it seems that the parameters $\\theta_1 = \\mu$ and $\\theta_2 = D$ have not been learned properly, as they are estimated to be roughly the same for every simulated dataset used during testing. For some models, this might indicate that the network training had partially failed, and we would have to train longer or adjust the network architecture. For this specific model, however, the reason is different: given the provided observables, these parameters are not identifiable and thus cannot be learned consistently, no matter which kind of approximator we use."
+   ]
+  },
   {
    "cell_type": "markdown",
    "id": "occupational-professor",