chris48s edited this page Jan 31, 2020 · 2 revisions

Deployment

S3 buckets

WhereDoIVote has a number of associated S3 buckets:

Production

  • s3://ons-cache - Copies of data we need from the ONS. We fetch Local Authority boundaries, the ONSPD and the ONSUD from here when we build a WDIV image.
  • s3://pollingstations-assets2 - Production static assets
  • s3://pollingstations-uploads - Files uploaded by users are saved here
  • s3://pollingstations-data - Known good files are synced here - this is where we pick up data to import
  • s3://pollingstations-packer-assets - Private bucket for things we need in the Packer build that can't be public. This is where we pick up AddressBase when we build a WDIV image.

Dev

  • s3://pollingstations-uploads-dev - Files uploaded by users are saved here in dev/staging
  • s3://pollingstations-data-dev - Known good files are synced here in dev/staging
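The bucket names above follow a consistent pattern: the dev/staging buckets are the production names with a `-dev` suffix. As a minimal sketch, a lookup like the following keeps that mapping in one place (the bucket names are taken from the lists above; the helper itself is illustrative, not project code):

```python
# Hypothetical helper mapping an environment to the buckets listed above.
# The bucket URIs are the real ones from this page; the function and its
# names are illustrative only.

BUCKETS = {
    "production": {
        "uploads": "s3://pollingstations-uploads",
        "data": "s3://pollingstations-data",
    },
    "dev": {
        "uploads": "s3://pollingstations-uploads-dev",
        "data": "s3://pollingstations-data-dev",
    },
}


def bucket_for(env, purpose):
    """Look up the bucket URI for an environment ("production" or "dev")."""
    return BUCKETS[env][purpose]


print(bucket_for("dev", "data"))  # s3://pollingstations-data-dev
```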

Database setup

For performance reasons, we use sharding in production on WDIV. Each front-end/NGINX server also hosts a Postgres instance with its own copy of the address/polling station/district data; this data is essentially read-only. Separately, there is a shared RDS instance for read/write transactions. This DB connection is called "logger". The DB routing logic is in https://github.com/DemocracyClub/UK-Polling-Stations/blob/master/polling_stations/db_routers.py
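As a minimal sketch of the routing idea, assuming the local read-only shard is the `default` connection and the shared RDS instance is the `logger` connection (the real rules live in `polling_stations/db_routers.py` linked above), a Django-style database router looks like this:

```python
# Illustrative sketch of read/write splitting between a local shard and a
# shared "logger" database. Connection aliases and routing rules here are
# assumptions, not the actual WDIV router.

class ShardRouter:
    """Send reads to the local Postgres shard, writes to the shared RDS."""

    def db_for_read(self, model, **hints):
        # Address/polling station/district data is served from the
        # read-only copy hosted on each front-end server.
        return "default"

    def db_for_write(self, model, **hints):
        # Read/write transactions go to the shared RDS instance.
        return "logger"


router = ShardRouter()
print(router.db_for_read(None))   # default
print(router.db_for_write(None))  # logger
```

Django calls these methods for every query via the `DATABASE_ROUTERS` setting, so the split is transparent to application code.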

To ensure good performance on server init, we pre-warm critical DB tables on deploy. There is more explanation of this in https://github.com/DemocracyClub/polling_deploy/blob/master/files/init_db.sh. The tradeoff is that a new instance takes ~15 mins to become healthy, which makes WDIV a bit cumbersome to deploy.
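Pre-warming typically means loading a table's pages into the buffer cache before traffic arrives, e.g. with Postgres's `pg_prewarm` extension. The following sketch just generates the SQL statements for a list of tables; the table names are illustrative, and `init_db.sh` (linked above) is the source of truth for what WDIV actually does:

```python
# Sketch of generating pre-warm statements, assuming the standard Postgres
# pg_prewarm extension is available. Table names are hypothetical examples,
# not the actual list used in init_db.sh.

def prewarm_statements(tables):
    """Return one "SELECT pg_prewarm(...)" statement per table."""
    return [f"SELECT pg_prewarm('{table}');" for table in tables]


for stmt in prewarm_statements(["pollingstations_pollingstation"]):
    print(stmt)  # SELECT pg_prewarm('pollingstations_pollingstation');
```

Running statements like these on instance init is what adds the ~15 minute warm-up before an instance reports healthy.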