
epi-scenario-3 #4

Open
djinnome opened this issue Feb 26, 2024 · 4 comments
djinnome commented Feb 26, 2024

Scenario 3 [Preparation for Decision-making Confidence Metric]: Supporting decisionmakers at various phases of the Covid-19 pandemic

  1. Pretend that it is November 1st, 2020, in the first year of the Covid-19 pandemic, when the main preventive measure was masking. This timepoint is about a month before vaccines first became available in the United States. You are supporting a federal decisionmaker interested in forecasting what might happen over the next few weeks, and what kinds of interventions would need to be put in place to limit negative population health outcomes (number of Covid-19 cases, hospitalizations, and deaths).

    a. Search and Select Model: Search for and select an appropriate model for this time period and location (United States, country level). The model should be able to support decisionmaker questions about masking and social distancing policies, and their impacts on cases, hospitalizations, and deaths. It should not include concepts or variables irrelevant to the time period (in other words, nothing related to vaccination or to multiple Covid variants). Model Requirements:

    • can support masking and social distancing interventions
    • outputs cases, hospitalizations, deaths

    b. Please provide information about the literature corpus or git repositories you searched over to find this model.
    c. What are the assumptions, limitations, and strengths of the chosen model?
    d. Model Comparison: What are the key differences between the chosen model and one other candidate model from the literature or other sources? In your answer, include:

    • differences in model assumptions
    • comparison of limitations and strengths
    • structural differences
    • explanation of why you selected the chosen model over the alternative

    e. Model Comparison: Consider MechBayes, a well-performing model (according to WIS score for forecasted deaths, for November 2020) submitted to the CDC ForecastHub for this time period. See MechBayes code repository and model specification. What are the key differences between the model chosen in 1a, and MechBayes? In your answer, include:

    • differences in model assumptions
    • comparison of limitations and strengths
    • structural differences

    f. Given the differences between your chosen model and the ForecastHub model, how well do you expect your model will perform in comparison, for a near-term forecasting task?
    g. Find Parameters: Find relevant parameter values for the chosen model (relevant to this time period and for the United States at a national level), and fill in the following information about sources and quality. If relevant, you may include multiple rows for the same parameter (e.g. perhaps you find different values from different reputable sources), with a ‘summary’ row indicating the final value or range of values you decide to use.

| Parameter | Parameter Definition | Parameter Units | Parameter Value or Range | Uncertainty Characterization | Sources | Modeler assessment on source quality |
| --- | --- | --- | --- | --- | --- | --- |

h. Model Extraction: Extract the chosen model from the source material. Time the entire process to extract the model and curate the results until you are confident the model represented in the workbench is correct.
i. Single Model Forecast: Now use the extracted model in the workbench to do a 4-week forecast of cases, hospitalizations, and deaths, from the starting date of November 1st, 2020.

  • For each outcome, plot the 4-week forecast trajectory from your selected model, the 4-week forecast trajectory from the ForecastHub model considered in 1e (MechBayes), and the actual observational data for this period. How do the trajectories compare? For actual observational data to evaluate your forecasts, use the sources described here: https://github.com/reichlab/covid19-forecast-hub/blob/master/data-truth/README.md. You can find forecast data from all models in the ForecastHub here: https://github.com/reichlab/covid19-forecast-hub/tree/master/data-processed
  • In comparing with observational data, calculate Absolute Error and optionally any other error metrics used by the CDC Forecasting Hub, as described here: https://delphi.cmu.edu/forecast-eval/.
  • If your model forecast did not perform as expected, why do you think that is?
  • (Optional) If your model forecast was not optimal, consider going back to 1g and calibrating some of the more uncertain parameters based on historical data from right before the forecast period.
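The Absolute Error metric mentioned above can be sketched as follows. This is a minimal illustration, assuming the forecast and truth series are aligned weekly values over the 4-week horizon; the numbers are hypothetical placeholders, not actual November 2020 data.

```python
import numpy as np

def absolute_error(forecast, truth):
    """Per-horizon absolute error of a point forecast, the simplest
    ForecastHub-style error metric."""
    return np.abs(np.asarray(forecast, dtype=float) - np.asarray(truth, dtype=float))

# Hypothetical weekly point forecasts and observed values (placeholders)
forecast_deaths = [8200, 9100, 10400, 11800]
truth_deaths = [8000, 9500, 11000, 12000]

print(absolute_error(forecast_deaths, truth_deaths))  # [200. 400. 600. 200.]
```

The same aligned-series shape extends naturally to the other ForecastHub metrics (e.g. weighted interval score) if quantile forecasts are available.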
2. Comparing and Optimizing Interventions: Still considering the same timepoint of November 1st, 2020, the decisionmaker you’re supporting is exploring masking and social distancing policies.

  • Assume the mask policy will have an 80% compliance rate, and results in a 75% decrease in transmissibility parameters for the compliant population.
  • Assume the social distancing policy will have a 50% compliance rate, and results in a 50% decrease in contact rates between individuals in the compliant population.
  • Assume each policy goes into effect right at the start of the period.

a. What is the impact of each policy by itself, on the trajectories for Covid-19 cases, hospitalizations, and deaths, over the next 8 weeks?
b. Now assume that you have the flexibility to choose the start dates for these policies. Considering each policy by itself, when is the latest each could be implemented, in order to ensure that hospitalizations never exceed a national threshold of 60k during November and December of 2020?
c. Now considering a combination of policies, when is the latest each could be implemented in order to ensure that hospitalizations never exceed a national threshold of 60k during November and December of 2020?
d. If there was uncertainty in the results, what is the source, and is the growth of uncertainty over time as expected?
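Under a simple well-mixed assumption, the compliance-weighted parameter adjustments implied by the policies above can be sketched as below. The baseline transmission rate is illustrative, not taken from the scenario, and treating the combined policies as multiplicative is itself an assumption.

```python
# Minimal sketch: population-averaged effective rates under partial compliance,
# assuming compliant and non-compliant groups share one averaged rate.

def effective_rate(base_rate, compliance, reduction):
    """Averaged rate when a fraction `compliance` of the population
    experiences a fractional `reduction` in that rate."""
    return base_rate * (1.0 - compliance * reduction)

beta = 0.3  # illustrative baseline transmission rate (per day), not from the scenario

beta_mask = effective_rate(beta, compliance=0.80, reduction=0.75)  # masking policy
beta_dist = effective_rate(beta, compliance=0.50, reduction=0.50)  # social distancing
beta_both = beta * (1 - 0.80 * 0.75) * (1 - 0.50 * 0.50)           # if effects compound multiplicatively
```

For questions 2b and 2c, these effective rates would be switched on at a candidate policy start date inside the simulation, scanning start dates until the peak hospitalization trajectory first exceeds the 60k threshold.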

djinnome commented Feb 26, 2024

3. Fast forward to July 15th, 2021, during the upswing of the Covid wave caused by the arrival of the Delta variant. Vaccines were available at this time. Do not consider specific demographic groups for this question.

a. Model Update: Now update the selected model from Q1 to include vaccinations and be able to support interventions around vaccinations (e.g. incorporate a vaccination policy or requirement, which increases rate of vaccination). Please be sure to use the logging features of Terarium to ensure that we get accurate timing information.
b. Model Comparison: Please explain how the model was updated and how it compares to the original starting model from Q1-2. Provide model comparison diagrams where appropriate.
c. Find Parameters: Considering the updated model with additional variables, and new time period, what is the updated parameter table that you will be using?
| Parameter | Parameter Definition | Parameter Units | Parameter Value or Range | Uncertainty Characterization | Sources | Modeler assessment on source quality |
| --- | --- | --- | --- | --- | --- | --- |
d. Model Checks: Implement common-sense checks on the model structure and parameter space to ensure the updated model and parameterization make physical sense. Explain the checks that were implemented. For example, under the assumption that the total population is constant for the time period considered:

  • Demonstrate that population is conserved across all categories. If there are birth and death processes in the model, ensure these are accounted for in your assessment of population conservation.
  • Ensure that the total unvaccinated population over all states in the model can never increase over time, and that the total vaccinated population over all states can never decrease over time.
  • What other common-sense checks did you implement? Are there others you would have liked to implement but were too difficult?
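The first two checks above can be sketched as trajectory-level assertions. This is a minimal illustration assuming the simulated trajectory is a dict mapping compartment names to time-series arrays, with unvaccinated compartments suffixed "_u" and vaccinated ones "_v"; these names and conventions are placeholders, not the workbench's actual API.

```python
import numpy as np

def check_population_conserved(traj, rtol=1e-6):
    """Total population summed over all compartments stays constant over time
    (assumes no birth/death processes; otherwise add their cumulative flows)."""
    total = sum(traj.values())
    return bool(np.allclose(total, total[0], rtol=rtol))

def check_vaccination_monotone(traj, tol=1e-9):
    """Unvaccinated total never increases; vaccinated total never decreases."""
    unvax = sum(v for k, v in traj.items() if k.endswith("_u"))
    vax = sum(v for k, v in traj.items() if k.endswith("_v"))
    return bool(np.all(np.diff(unvax) <= tol) and np.all(np.diff(vax) >= -tol))
```

A toy trajectory where 8 people per step move from susceptible-unvaccinated to susceptible-vaccinated passes both checks, while one that loses population would fail the first.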

e. Single Model Forecast: Now use the updated model to do a 4-week forecast of cases, hospitalizations, and deaths, from the new date, July 15th, 2021. How do the results compare with forecasts from MechBayes for the same 4-week time period?

djinnome commented Feb 26, 2024

4. Stratification Challenge:

Still considering the same timepoint as Q3, the decisionmaker you’re supporting is exploring targeted vaccination policies to boost vaccination rates for specific subpopulations. To support these questions, you decide to further extend the model by considering several demographic subgroups, as well as vaccination dosage. Stratify the model by the following dimensions:

  • Vaccination dosage (1 or 2 doses administered)
  • Age group
  • Sex
  • Race/Ethnicity

To inform initial conditions and rates of vaccination, efficacy of vaccines, etc., consider the subset of vaccination datasets from the starter kit listed in Scenario3_VaccinationDatasets.xlsx. Where initial conditions are not available for a specific subgroup, make a reasonable estimate based on percentages from Census sources (e.g. https://www.census.gov/quickfacts/fact/table/US/PST045223). Where parameters for specific subgroups are unavailable, generalize based on the ones that are available. Choose the number of age and race/ethnicity groups based on the data that is available.
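To get a rough sense of the combinatorics this stratification implies, the compartment count multiplies across dimensions. A minimal sketch follows; all group labels are illustrative placeholders (including an assumed SEIHR-style base structure and an explicit unvaccinated stratum), and the actual number of age and race/ethnicity groups should follow the available data, as the scenario states.

```python
from itertools import product

base_states = ["S", "E", "I", "H", "R"]   # assumed base model structure (placeholder)
doses = ["unvax", "dose1", "dose2"]       # 1 or 2 doses, plus an unvaccinated stratum
ages = ["0-17", "18-64", "65plus"]        # placeholder age bands
sexes = ["female", "male"]
races = ["group1", "group2", "group3"]    # placeholder race/ethnicity groups

stratified = ["_".join(parts) for parts in product(base_states, doses, ages, sexes, races)]
print(len(stratified))  # 5 * 3 * 3 * 2 * 3 = 270 stratified compartments
```

Even with these modest placeholder group counts, the model grows to hundreds of compartments, which is worth keeping in mind when deciding how many age and race/ethnicity groups the data can actually support.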

Waiting for an AMR from TA2
