CCS Testing
One of the primary emergent behaviors we want to see in our model is Critical Community Size. Perhaps the best discussion of that phenomenon is Kurt Frey's PPT. (Link coming.)
In short, for a measles-like disease, we expect nodes with populations above, say, 250k to experience endemicity, while populations below that we expect to eliminate. This is all based on non-intervention immunity (for now).
It's easiest to test this in a single node simulation run over and over with different input parameters, but running a multinode model with no migration and different initial conditions is arguably a clever way to accomplish the same thing.
We propose initially testing this using a Simplified Synthetic Spatial Scenario. Such a model will consist of ~100 nodes with varying population sizes. (100 allows for reasonable 1-D visualization in the console while also testing a good range of parameter values at once.) They will be isolated from each other by zeroing migration. They will have vital dynamics (births and deaths). Each will be initially seeded. Potentially they can be reseeded upon elimination. It can take time for a good steady state to emerge such that elimination/endemicity results are considered valid.
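The scenario setup above can be sketched as follows. This is a minimal illustration, not code from any particular codebase; the variable names, the population range, and the seeding size of 10 are all assumptions chosen to match the description.

```python
import numpy as np

# Hypothetical sketch of the Simplified Synthetic Spatial Scenario:
# ~100 isolated nodes spanning a wide range of population sizes.
n_nodes = 100

# Log-spaced populations from ~400k down to ~4k, largest first
# (matching the left-to-right ordering of the sparklines UI).
populations = np.logspace(np.log10(400_000), np.log10(4_000), n_nodes).astype(int)

# Zero migration: an all-zeros mixing matrix isolates every node.
migration = np.zeros((n_nodes, n_nodes))

# Seed each node with a small initial outbreak (size is illustrative).
initial_infected = np.full(n_nodes, 10)
```

Because migration is identically zero, each node evolves independently, so one multinode run sweeps 100 population sizes at once.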
We expect the discovered CCS threshold to vary with R0/Base Infectivity and Birth Rates.
Here's a screenshot of a sparklines UI of 100 nodes after almost 1000 timesteps. The left-most node has a population of just under 400k. The right-most node has a population of about 4000. Base Infectivity was 50 and CBR was 30 (both high). The height of the bars represents prevalence. Going from left to right, larger population to smaller, we infer that the probability of elimination starts off at essentially 0, then begins to rise, and is essentially 1 (prevalence at 0) for the right third of the populations.
(The above figure may or may not be a quantitatively accurate representation of the current behavior of any particular model or codebase; it should be treated as illustrative.)
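Turning that visual inference into a per-node verdict needs an operational definition of "eliminated." A minimal sketch, assuming we classify a node as eliminated when its prevalence stays at zero over a final window after a burn-in (the function name and window lengths are illustrative):

```python
import numpy as np

def eliminated(prevalence, burn_in=365, window=365):
    """True if a node's prevalence is zero throughout the final window.

    prevalence: 1-D array of infected counts over time for one node.
    burn_in discards early transients so steady state has time to emerge.
    """
    tail = np.asarray(prevalence)[burn_in:][-window:]
    return bool(np.all(tail == 0))

# Toy check: a node that fades out midway vs. one that stays endemic.
rng = np.random.default_rng(0)
fade_out = np.concatenate([rng.integers(1, 50, 500), np.zeros(500, dtype=int)])
endemic_trace = rng.integers(1, 50, 1000)  # always at least 1 infected
```

Applied across all 100 nodes, this yields the elimination/endemicity pattern described above in one boolean vector.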
Here is a visualization of the detected CCS threshold from a toy model for a 2D sweep of base infectivity and crude birth rate:
Assuming we can create not just a manual test but also an automated one, in which we can determine the CCS threshold for a given set of input parameters, the question then is: what is the expected value of the CCS Threshold for those input values?
To that end we create a toy model of a measles-like disease. The output of this model -- in terms of the detected CCS population threshold for a given set of input values -- will be deemed the correct value and used for testing candidate models.
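A minimal sketch of what such a toy model and automated threshold detection could look like, assuming a daily-timestep stochastic SIR with balanced births and deaths. The function names (`endemic`, `ccs_threshold`), the default parameters, and the fade-out criterion are all illustrative assumptions, not the reference toy model itself:

```python
import numpy as np

def endemic(pop, r0=15.0, inf_days=8.0, cbr=30.0, years=20, seed=0):
    """True if infection persists for the whole run (endemic); False on fade-out."""
    rng = np.random.default_rng(seed)
    beta, gamma = r0 / inf_days, 1.0 / inf_days   # daily transmission / recovery rates
    mu = cbr / 1000.0 / 365.0                     # daily per-capita births = deaths
    S = int(pop / r0)                             # start near endemic equilibrium
    I = max(1, pop // 5000)
    R = pop - S - I
    for _ in range(int(years * 365)):
        N = S + I + R
        p_inf = 1.0 - np.exp(-beta * I / N)       # daily per-susceptible infection prob.
        new_inf = rng.binomial(S, p_inf)
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
        births = rng.binomial(N, mu)
        S = max(0, S - new_inf + births - rng.binomial(S, mu))
        I = max(0, I + new_inf - new_rec - rng.binomial(I, mu))
        R = max(0, R + new_rec - rng.binomial(R, mu))
        if I == 0:
            return False                          # fade-out: below CCS
    return True

def ccs_threshold(pops, **kwargs):
    """Smallest tested population that stays endemic, or None if all fade out."""
    endemic_pops = [p for p in sorted(pops) if endemic(p, **kwargs)]
    return endemic_pops[0] if endemic_pops else None
```

Running `ccs_threshold` over a grid of populations for each (base infectivity, CBR) pair is exactly the 2D sweep pictured above; averaging over multiple RNG seeds per cell would turn the boolean verdict into a persistence probability.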
Do we agree with that approach? With what accuracy?
-
A measles-like disease without seasonality seems to have a CCS threshold closer to 450k than 250k with a CBR in the 15-20 region.
-
The effective CCS threshold seems quite sensitive to the outbreak seed, or at least there seem to be distinct regions of outbreak-seed values. Details to come.
-
I have added seasonality and maternal immunity to the toy model so candidate models can be tested with those features present. I still need to add mortality. And stochasticity.
It seems that seasonality should (and does) have a strong impact on CCS threshold due to the low season following the high season. It would seem that even the relatively mild seasonality of England might push the CCS threshold closer to 950k (TO BE CONFIRMED), possibly leaving London as the only city/location truly endemic.
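A common way to model this kind of seasonality is sinusoidal forcing of the transmission rate; the sketch below assumes that form, with an illustrative amplitude that is not calibrated to England or any real setting. The deep trough that follows each peak is what drives fade-outs and raises the CCS threshold.

```python
import numpy as np

def seasonal_beta(t, beta0, amplitude=0.15, period=365.0, peak_day=0.0):
    """Daily transmission rate with sinusoidal seasonal forcing.

    Averages to beta0 over a full period; the low season following each
    high season deepens inter-epidemic troughs, raising the CCS threshold.
    """
    return beta0 * (1.0 + amplitude * np.cos(2.0 * np.pi * (t - peak_day) / period))

beta = seasonal_beta(np.arange(365), beta0=2.0)
```

Dropping `seasonal_beta(t, ...)` in place of a constant beta inside a daily-timestep loop is all that's needed to add this forcing to a toy model.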
One example of some interesting outputs: beta = 2, CBR = 15 (?), initial population = 275,000.
Theory Model
Test Model
These values are just above the detected threshold. Both show endemicity. Note that the test model has a different periodicity; comparing periodicities is also worthwhile, though out of scope for this article.
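For when that comparison is taken up, one simple approach is to estimate each model's dominant inter-epidemic period from its prevalence trace via an FFT. A sketch under that assumption (the function name is illustrative):

```python
import numpy as np

def dominant_period(series, dt=1.0):
    """Dominant period of a time series, in units of dt, via the FFT peak."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = np.argmax(spectrum[1:]) + 1       # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Toy check: a 2-year (730-day) cycle over 10 years of daily data.
t = np.arange(3650)
period = dominant_period(np.sin(2.0 * np.pi * t / 730.0))  # -> 730.0
```

Applying this to both the theory model and the test model would make the periodicity difference noted above quantitative.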