These steps are listed in order in the Getting Started guide. Follow the Getting Started guide itself to ensure that steps not documented here are not missed.
In the below, the installation of each app is typically one of:

- use the `brew` command where provided, or
- use the link to the website and follow the installation instructions there, or
- follow the link to the GitHub repo, clone the repo and follow the instructions in its `README.md` to install/run (within the repo directory)

Note: when indicating a command that should be run in your terminal, we use the `$` prefix to indicate your shell prompt.
Software | Install | Notes |
---|---|---|
Java 8 JDK (OpenJDK) | `$ brew install openjdk@8` | Append `export PATH="/usr/local/opt/openjdk@8/bin:$PATH"` to your shell profile (e.g. `.zshrc`) and restart your terminal. |
Maven | `$ brew install maven` | |
Docker | `$ brew install --cask docker` | |
Docker Compose | `$ brew install docker-compose` | |
Cypher Shell | `$ brew install cypher-shell` | Deprecated (not needed if using Neptune instead of Neo4j) |
nvm | Follow the git install instructions | Required to allow easy switching between node/npm versions depending on usage within each app |
Go | `$ brew install go` | The Go installation is processor-architecture specific: the newer Apple M1 processors need the ARM build. Homebrew manages this, but it is something to be aware of if installing manually (see the Go direct download) |
GoConvey | | |
GhostScript | `$ brew install ghostscript` | Required for Babbage |
Vault | `$ brew install hashicorp/tap/vault` | Required for running Florence |
jq | `$ brew install jq` | A handy JSON tool (for debugging website content and much more) |
yq | `$ brew install yq` | A handy YAML tool |
dp-compose | `$ git clone git@github.com:ONSdigital/dp-compose` | See the dp-compose README for configuration of Docker Desktop resources |
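Once installed, you can sanity-check the toolchain from a fresh terminal. This is only a quick verification sketch; adjust it for whichever subset of tools you installed:

```sh
# Confirm each tool is on the PATH and reports a version.
java -version      # should report 1.8.x once the openjdk@8 PATH export is in place
mvn -version
docker --version
docker-compose --version
go version
vault version
jq --version
yq --version
```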
dp-compose runs the following services:

- Services for the Website
  - Elasticsearch 2.4.2
  - Elasticsearch 7 (on a non-standard port)
  - Highcharts
  - Postgres
  - MongoDB
  - Kafka (plus its required Zookeeper dependency)
- Services for CMD
  - Elasticsearch 6 (on a non-standard port)
  - Neptune
Return to the Getting Started guide for next steps.
Clone the GitHub repos for web, publishing and/or CMD (Customise My Data).

- Web - these apps make up the public-facing website providing read-only access to published content, and are enough if you are working strictly on website content types other than filterable datasets (e.g. bulletins, articles, timeseries, datasets).
- Publishing - the "publishing journey" gives you all the features of web together with an internal interface to update, preview and publish content. All content is encrypted and requires authentication.
- CMD - these apps support the filterable dataset journey, and mean you would have every possible service running.
```sh
git clone git@github.com:ONSdigital/babbage
git clone git@github.com:ONSdigital/zebedee
git clone git@github.com:ONSdigital/sixteens
git clone git@github.com:ONSdigital/dp-frontend-router
git clone git@github.com:ONSdigital/dp-frontend-homepage-controller
git clone git@github.com:ONSdigital/dp-frontend-cookie-controller
git clone git@github.com:ONSdigital/dp-frontend-dataset-controller
git clone git@github.com:ONSdigital/dp-frontend-feedback-controller
```
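If you prefer to script the clones, the same set of repos can be fetched in a loop. This is only a sketch, assuming SSH access to GitHub and that you run it from the directory where you keep your ONSdigital repos:

```sh
# Clone each web-journey repo into the current directory.
for repo in babbage zebedee sixteens dp-frontend-router \
    dp-frontend-homepage-controller dp-frontend-cookie-controller \
    dp-frontend-dataset-controller dp-frontend-feedback-controller; do
  git clone "git@github.com:ONSdigital/${repo}"
done
```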
All services listed in the web journey are required for the publishing journey. They are used for the preview functionality.
```sh
git clone git@github.com:ONSdigital/florence
git clone git@github.com:ONSdigital/The-Train
git clone git@github.com:ONSdigital/dp-api-router
git clone git@github.com:ONSdigital/dp-image-api
git clone git@github.com:ONSdigital/dp-image-importer
git clone git@github.com:ONSdigital/dp-upload-service
git clone git@github.com:ONSdigital/dp-download-service
```
All the services in the web and publishing journeys, as well as:
```sh
git clone git@github.com:ONSdigital/dp-dataset-api
git clone git@github.com:ONSdigital/dp-frontend-filter-dataset-controller
```
```sh
git clone git@github.com:ONSdigital/dp-recipe-api
git clone git@github.com:ONSdigital/dp-import-api
git clone git@github.com:ONSdigital/dp-upload-service
git clone git@github.com:ONSdigital/dp-import-tracker
git clone git@github.com:ONSdigital/dp-dimension-extractor
git clone git@github.com:ONSdigital/dp-dimension-importer
git clone git@github.com:ONSdigital/dp-observation-extractor
git clone git@github.com:ONSdigital/dp-observation-importer
git clone git@github.com:ONSdigital/dp-hierarchy-builder
git clone git@github.com:ONSdigital/dp-hierarchy-api
git clone git@github.com:ONSdigital/dp-dimension-search-builder
git clone git@github.com:ONSdigital/dp-publishing-dataset-controller
```
- Documentation of the import process
- Sequence diagram of the CMD import process
If you have already set up the import journey, you will already have the Hierarchy API. It's still fine to copy the command set below; just be aware that if you hit an error saying the destination path already exists, that is expected (see the clone-or-update sketch after the block).
```sh
git clone git@github.com:ONSdigital/dp-dimension-search-api
git clone git@github.com:ONSdigital/dp-code-list-api
git clone git@github.com:ONSdigital/dp-hierarchy-api
git clone git@github.com:ONSdigital/dp-filter-api
git clone git@github.com:ONSdigital/dp-dataset-exporter
git clone git@github.com:ONSdigital/dp-dataset-exporter-xlsx
```
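If some of these repos are already on disk (for example dp-hierarchy-api from the import journey), a clone-or-update pattern avoids the "destination path already exists" error. This is a sketch, assuming the repos live in the current directory:

```sh
# Clone each repo if it is missing, otherwise just pull the latest changes.
for repo in dp-dimension-search-api dp-code-list-api dp-hierarchy-api \
    dp-filter-api dp-dataset-exporter dp-dataset-exporter-xlsx; do
  if [ -d "${repo}" ]; then
    git -C "${repo}" pull
  else
    git clone "git@github.com:ONSdigital/${repo}"
  fi
done
```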
Return to the Getting Started guide for next steps.
dp-compose contains a few stacks for Cantabular services, including the Cantabular import journey and Cantabular metadata publishing. Both of these stacks rely on variations of an `scs.sh` script, which provides support for cloning, updating and running all the necessary repos for these journeys. See the dp-compose repo for more information and diagrams.
Some commands require changes to be made to your shell, e.g.:

- to your `PATH`, or
- to add environment variables

These commands take the form `export VAR_NAME=value` and should be appended to the startup file for your shell:

- for the `zsh` shell, the startup file is `~/.zshrc`
- for the `bash` shell, the startup file is `~/.bashrc`

When the startup files are updated, to load the new changes into your shell, either:

- open a new terminal window, or
- run `$ exec $SHELL -l`
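For example, to add a variable to a `zsh` startup file and pick it up in the current session (a sketch; substitute `~/.bashrc` if you use `bash`):

```sh
# Append the export to the startup file...
echo 'export ENABLE_PRIVATE_ENDPOINTS=true' >> ~/.zshrc
# ...then replace the current shell with a fresh login shell so it takes effect.
exec $SHELL -l
```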
You should put the environment variables below in your startup file.
Variable name | Note |
---|---|
`zebedee_root` | path to your zebedee content, typically the directory that the dp-zebedee-content generation script points to when run |
`ENABLE_PRIVATE_ENDPOINTS` | set to `true` when running services in publishing mode, unset for web mode |
`ENABLE_PERMISSIONS_AUTH` | set to `true` to ensure that calls to APIs are from registered services or users |
`ENCRYPTION_DISABLED` | set to `true` to disable encryption, making data readable for debugging purposes |
`DATASET_ROUTES_ENABLED` | `true` enables the filterable dataset routes (the CMD journey) in some services |
`FORMAT_LOGGING` | if `true` then zebedee will format its logs |
`SERVICE_AUTH_TOKEN` | a value required for zebedee to work |
After all the various steps, here's an example set of exports and their values that you might now have in your startup file:
```sh
# Dissemination services
export zebedee_root=~/Documents/website/zebedee-content/generated
export ENABLE_PRIVATE_ENDPOINTS=true
export ENABLE_PERMISSIONS_AUTH=true
export ENCRYPTION_DISABLED=true
export DATASET_ROUTES_ENABLED=true
export FORMAT_LOGGING=true
export SERVICE_AUTH_TOKEN="fc4089e2e12937861377629b0cd96cf79298a4c5d329a2ebb96664c88df77b67"
export TRANSACTION_STORE=$zebedee_root/zebedee/transactions
export WEBSITE=$zebedee_root/zebedee/master
export PUBLISHING_THREAD_POOL_SIZE=10

# For CMD services
export GRAPH_DRIVER_TYPE=neptune
export GRAPH_ADDR=wss://localhost:8182/gremlin
export NEPTUNE_TLS_SKIP_VERIFY=true
```
Return to the Getting Started guide for next steps.
Run dp-compose using the `$ ./run.sh` command (in the dp-compose repo) to start the supporting services, and also run Vault, e.g. `$ vault server -dev`.
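A typical sequence is to keep these in separate terminal windows; the following is only a sketch, assuming dp-compose is cloned alongside your other repos:

```sh
# Terminal 1: start the supporting services (Elasticsearch, MongoDB, Kafka, etc.)
cd dp-compose
./run.sh

# Terminal 2: start a local Vault dev server
vault server -dev

# Terminal 3 (optional): confirm the containers came up
docker ps
```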
Most applications can be run using the `$ make debug` command; any deviations are documented below:
Run all the services in the web journey:

- babbage - use: `$ ./run.sh`
- zebedee - use: `$ ./run-reader.sh`
- sixteens - use: `$ ./run.sh`
- dp-frontend-router
- dp-frontend-homepage-controller
- dp-frontend-cookie-controller
- dp-frontend-dataset-controller
- dp-frontend-feedback-controller

The website will be available at http://localhost:20000 (a quick check is sketched below).
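As a quick check that the web journey is serving, once all of the above services are running (a minimal sketch):

```sh
# Expect an HTTP 200 from the local website.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:20000
```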
Run all of the services in the web journey (note that the commands used to run babbage and zebedee change for publishing), and also run the following:
- florence - use: `$ make debug ENCRYPTION_DISABLED=true`
- The-Train - use: `$ ./run.sh`
- dp-api-router

If you also want to run Florence with the ability to edit images on the homepage (for the Featured Content section), you will need to additionally run:

- dp-image-api
- dp-image-importer - use: `$ make debug ENCRYPTION_DISABLED=true`
- dp-upload-service - use: `$ make debug ENCRYPTION_DISABLED=true`
- dp-download-service - use: `$ make debug ENCRYPTION_DISABLED=true`
Florence will be available at http://localhost:8081/florence/login.
The website will be available at http://localhost:8081 after successfully logging in to Florence. Login details are in the Florence repository.
All of the services in the web, publishing and CMD journeys need to be run for the full CMD journey to work. This journey includes importing data, publishing it and testing the public journey.
You will want to make sure you have access to the Neptune test instance as well, if you want the entire CMD journey to be accessible. Details on how to set this up can be found here.
Use the following alternative commands:

- florence - use: `$ make debug ENCRYPTION_DISABLED=true`
- dp-frontend-router - use: `$ make debug DATASET_ROUTES_ENABLED=true`
- for every dataset and filter service - use: `$ make debug ENABLE_PRIVATE_ENDPOINTS=true`
- dp-dimension-extractor - use: `$ make debug ENCRYPTION_DISABLED=true`
- dp-observation-extractor - use: `$ make debug ENCRYPTION_DISABLED=true`
If you already have content, and you just want to run the web journey, you'll need the dataset, filter and web services. Again, use the commands:

- florence - use: `$ make debug ENCRYPTION_DISABLED=true`
- dp-frontend-router - use: `$ make debug`
- unset `ENABLE_PRIVATE_ENDPOINTS` (see the sketch below)
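For example, to make sure a service starts without private endpoints in this web-only setup (a sketch; run from within the service's repo, e.g. dp-frontend-router):

```sh
# Ensure private endpoints are disabled for the web journey...
unset ENABLE_PRIVATE_ENDPOINTS
# ...then start the service as usual.
make debug
```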
Return to the Getting Started guide for next steps.
To run Florence, you will need to update the environment variable `SERVICE_AUTH_TOKEN` in your startup file. Steps for creating the service authentication token can be found in the Zebedee repository. You will need to restart your terminal (or run `$ exec $SHELL -l`) for the environment variable change to take effect.
Note that when the first login to a Florence account is detected, a mandatory password update is required.