Centre for Net Zero's agent-based model (ABM) for the electrification of domestic heating in England and Wales.
To find out more about our approach and results, read our report, *Hitting the Target: interventions required to meet UK Government heat pump targets*.
You need Python 3.9 and pipenv. If you don't have them, see our instructions for macOS.
- Clone this repo.
- Run `pipenv sync --dev` to install dependencies.
- Run `cp .env.template .env` to create your local environment file.
- Run `pipenv run pytest` to check the tests pass.
You configure and run the simulation using a command line interface.
Print the help message to see all the options:
```
python -m simulation -h
```
The simulation initialises household agents using data from a Parquet file or a BigQuery query. The simulation history is written to a file with one JSON object per line. For more details on the datasets we combine to generate the household data and the final schema, visit centrefornetzero/domestic-heating-data.
Parquet input:

```
python -m simulation households.parquet history.jsonl
```

BigQuery input:

```
python -m simulation --bigquery "select * from project.prod_domestic_heating.dim_household_agents" history.jsonl
```
We collect data from the environment and agents at each timestep of the simulation and write it as a newline-delimited, JSON-encoded object in the history file. You can use `read_jsonlines` to read the history file and `history_to_dataframes` to convert it to pandas DataFrames.
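A minimal post-processing sketch is below. The helper names come from this repo, but the import path and the exact return shape of `history_to_dataframes` are assumptions; check the source for the actual signatures.

```python
# Sketch only: the module path and the return shape are assumptions,
# not the repo's confirmed API.
from simulation.collect import history_to_dataframes, read_jsonlines

with open("history.jsonl") as history_file:
    history = read_jsonlines(history_file)       # one dict per history line
    dataframes = history_to_dataframes(history)  # assumed: name -> DataFrame

for name, dataframe in dataframes.items():
    print(name, dataframe.shape)
```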
We run the simulation with different configurations, called scenarios, to see how interventions affect the choices households make about their heating systems. We also run sensitivity tests to understand how changing each parameter affects the results. Since the simulation is probabilistic, we run each of these configurations multiple times and compute the average outcome.
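As an illustration, averaging one outcome across repeated runs of a scenario might look like the sketch below. The file pattern and the `step` and `heat_pump_installations` columns are hypothetical, not the real history schema.

```python
# Hypothetical sketch: average an outcome across repeated runs of one
# scenario. File names and column names are assumptions.
import pandas as pd

# Each run's history is newline-delimited JSON, so lines=True parses it.
runs = [pd.read_json(f"scenario_a_run_{i}.jsonl", lines=True) for i in range(10)]
combined = pd.concat(runs, keys=range(10), names=["run"])

# Mean outcome per timestep across the probabilistic runs.
average = combined.groupby("step")["heat_pump_installations"].mean()
print(average.tail())
```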
Scenarios and sensitivity tests are defined in `k8s/job.jsonnet`.
We run all the scenarios and sensitivity tests for every commit on the `main` branch. A GitHub Action uses `job.jsonnet` to generate a Kubernetes job configuration and applies it to a Google Kubernetes Engine Autopilot cluster. We monitor the jobs via a GCP Monitoring dashboard. After the jobs complete, we download the history files from Google Cloud Storage.
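A download step might look like the following sketch; the bucket name and prefix are assumptions, not the team's actual configuration.

```python
# Hypothetical sketch: download completed history files from Google Cloud
# Storage. The bucket name and prefix are assumptions.
from pathlib import Path

from google.cloud import storage

client = storage.Client()
for blob in client.list_blobs("domestic-heating-runs", prefix="histories/"):
    # Flatten the object path into a local file name.
    destination = Path(blob.name.replace("/", "_"))
    blob.download_to_filename(destination)
    print(f"downloaded {blob.name} -> {destination}")
```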
We acknowledge Agents.jl, whose API design inspired us when writing `abm.py`.