INSTALLING.md
Install, configure and run services

These instructions are listed step-by-step in the Getting Started guide. Follow the Getting Started guide in full to ensure you do not miss steps that are not documented here.


Prerequisites

For each app below, installation is typically one of:

  • use the brew command where provided, or
  • use the link to the website and follow the installation instructions, or
  • follow the link to the GitHub repo, where you should clone the repo and follow the instructions in its README.md file to install/run (within the repo directory)

Note: when indicating a command that should be run in your terminal, we use the $ prefix to indicate your shell prompt.


| Software | Install | Notes |
| --- | --- | --- |
| Java 8 JDK (OpenJDK) | `$ brew install openjdk@8` | Append `export PATH="/usr/local/opt/openjdk@8/bin:$PATH"` to your shell profile (e.g. `.zshrc`) and restart your terminal |
| Maven | `$ brew install maven` | |
| Docker | `$ brew install --cask docker` | |
| Docker Compose | `$ brew install docker-compose` | |
| Cypher Shell | `$ brew install cypher-shell` | Deprecated (not needed if using Neptune over Neo4j) |
| nvm | Follow the git install instructions | Required to allow easy switching between node/npm versions, depending on usage within each app |
| Go | `$ brew install go` | The Go installation is processor-architecture specific: the newer Apple M1 processors require the ARM installation. Homebrew manages this for you, but be aware of it if installing manually via the Go direct download |
| GoConvey | | |
| GhostScript | `$ brew install ghostscript` | Required for Babbage |
| Vault | `$ brew install hashicorp/tap/vault` | Required for running Florence |
| jq | `$ brew install jq` | A handy JSON tool (for debugging website content and much more) |
| yq | `$ brew install yq` | A handy YAML tool |
| dp-compose | `$ git clone git@github.com:ONSdigital/dp-compose` | See the dp-compose README for configuration of Docker Desktop resources |
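As a quick sanity check, a short shell loop can report which of these tools are already on your PATH. This is a sketch: the binary names are assumptions based on the usual installs (`java` for the JDK, `mvn` for Maven, `gs` for GhostScript).

```shell
# Report which prerequisite tools are already installed.
# Assumed binary names: java (JDK), mvn (Maven), gs (GhostScript).
for tool in java mvn docker docker-compose cypher-shell go gs vault jq yq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok:      $tool"
  else
    echo "missing: $tool"
  fi
done
```

Anything reported as missing can be installed from the table above.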

dp-compose runs the following services:

  • Services for the Website
    • Elasticsearch 2.4.2
    • Elasticsearch 7 (on non-standard port)
    • Highcharts
    • Postgres
    • MongoDB
    • Kafka (plus required Zookeeper dependency)
  • Services for CMD
    • Elasticsearch 6 (on non-standard port)
    • Neptune

Return to the Getting Started guide for next steps.


Clone the services

Clone the GitHub repos for web, publishing and/or CMD (Customise My Data).

  • Web - These apps make up the public-facing website, providing read-only access to published content. They are enough on their own if you are strictly working on website content types other than filterable datasets (e.g. bulletins, articles, timeseries, datasets).

  • Publishing - The "publishing journey" gives you all the features of web together with an internal interface to update, preview and publish content. All content is encrypted and requires authentication.

  • CMD - These apps support the filterable dataset journey; running them means you have every possible service running.

Web Journey

Publishing Journey

All services listed in the web journey are required for the publishing journey. They are used for the preview functionality.

CMD Journeys

All the services in the web and publishing journeys, as well as:

Dataset journey:

Import services:

Documentation of the import process

Sequence diagram of cmd import process

Filter journey:

If you have already set up the import journey, you will already have the Hierarchy API. It's still fine to copy the command set below; just be aware that one destination path already exists error is expected.
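To see what that error looks like, here is a minimal local demonstration using throwaway temp directories rather than the real repos:

```shell
# Simulate cloning a repo into a destination that already exists.
SRC=$(mktemp -d)      # throwaway "remote" repo
DST=$(mktemp -d -u)   # -u: generate an unused path, don't create it
git init -q "$SRC"
git clone -q "$SRC" "$DST" 2>/dev/null   # first clone succeeds
git clone "$SRC" "$DST" 2>&1 | head -1   # second clone fails: destination path already exists
```

The second clone's `fatal: destination path ... already exists` message is harmless when the repo is already in place.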

Return to the Getting Started guide for next steps.

Cantabular

dp-compose contains a few stacks for Cantabular services, including the Cantabular import journey and Cantabular metadata publishing.

Both of these stacks rely on variations of an scs.sh script, which provides support in cloning, updating and running all the necessary repos for these journeys.

See more information and diagrams


Configuration

Startup file

Some commands require changes to your shell environment - e.g.

  • additions to your PATH, or
  • new environment variables - these commands take the form export VAR_NAME=value

and should be appended to the startup file for your shell:

  • for the shell zsh, the startup file is ~/.zshrc
  • for the bash shell, the startup file is ~/.bashrc

When the startup files are updated, to load the new changes into your shell, either:

  • open a new terminal window, or
  • $ exec $SHELL -l
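For example, to persist the Java PATH change from the prerequisites table in a zsh startup file and confirm it was written (the profile path shown is for zsh; use ~/.bashrc for bash):

```shell
# Append the export line to the zsh startup file...
PROFILE="$HOME/.zshrc"
echo 'export PATH="/usr/local/opt/openjdk@8/bin:$PATH"' >> "$PROFILE"

# ...and confirm the line landed:
tail -n 1 "$PROFILE"

# Then load it: open a new terminal, or run: exec $SHELL -l
```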

Environment variables

You should put the environment variables below in your startup file.

| Variable name | Note |
| --- | --- |
| `zebedee_root` | Path to your zebedee content, typically the directory the dp-zebedee-content generation script points to when run |
| `ENABLE_PRIVATE_ENDPOINTS` | Set `true` when running services in publishing mode; unset for web mode |
| `ENABLE_PERMISSIONS_AUTH` | Set `true` to ensure that calls to APIs are from registered services or users |
| `ENCRYPTION_DISABLED` | Set `true` to disable encryption, making data readable for any debugging purposes |
| `DATASET_ROUTES_ENABLED` | `true` will enable the filterable dataset routes (the CMD journey) in some services |
| `FORMAT_LOGGING` | If `true` then zebedee will format its logs |
| `SERVICE_AUTH_TOKEN` | A token value required for zebedee to work |

After all the various steps, here's an example set of exports and their values that you might now have in your startup file:

# Dissemination services
export zebedee_root=~/Documents/website/zebedee-content/generated
export ENABLE_PRIVATE_ENDPOINTS=true
export ENABLE_PERMISSIONS_AUTH=true
export ENCRYPTION_DISABLED=true
export DATASET_ROUTES_ENABLED=true
export FORMAT_LOGGING=true
export SERVICE_AUTH_TOKEN="fc4089e2e12937861377629b0cd96cf79298a4c5d329a2ebb96664c88df77b67"

export TRANSACTION_STORE=$zebedee_root/zebedee/transactions
export WEBSITE=$zebedee_root/zebedee/master
export PUBLISHING_THREAD_POOL_SIZE=10

# For CMD services
export GRAPH_DRIVER_TYPE=neptune
export GRAPH_ADDR=wss://localhost:8182/gremlin
export NEPTUNE_TLS_SKIP_VERIFY=true
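After reloading your shell, a quick loop can confirm which of these variables are actually exported. This sketch uses `printenv`, which only sees exported variables, so a plain (unexported) assignment will show as missing:

```shell
# Report which of the expected variables are exported in this shell.
for var in zebedee_root ENABLE_PRIVATE_ENDPOINTS ENABLE_PERMISSIONS_AUTH \
           ENCRYPTION_DISABLED DATASET_ROUTES_ENABLED FORMAT_LOGGING \
           SERVICE_AUTH_TOKEN GRAPH_DRIVER_TYPE GRAPH_ADDR; do
  if printenv "$var" >/dev/null; then
    echo "exported: $var"
  else
    echo "missing:  $var"
  fi
done
```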

Return to the Getting Started guide for next steps.


Running the apps

Run dp-compose using the $ ./run.sh command (in the dp-compose repo) to start the supporting services, and also run Vault, e.g. $ vault server -dev.

Most applications can be run using the $ make debug command; any deviations are documented below:

Web

Run all the services in the web journey

The website will be available at http://localhost:20000

Publishing

Run all of the services in the web journey, but change the commands used to run babbage and zebedee to:

and also run the following:

If you also want to run Florence with the ability to edit images on the homepage (for the Featured Content section), you will need to additionally run:

Florence will be available at http://localhost:8081/florence/login.

The website will be available at http://localhost:8081 after a successful login to Florence. Login details are in the florence repository.

CMD

All of the services in the web, publishing and CMD journeys need to be run for the full CMD journey to work. This journey includes importing data, publishing it and testing the public journey.

You will want to make sure you have access to the Neptune test instance as well, if you want the entire CMD journey to be accessible. Details on how to set this up can be found here.

Use the following alternative commands:

CMD Web

If you already have content, and you just want to run the web journey, you'll need the dataset, filter and web services. Again, use the commands:

Return to the Getting Started guide for next steps.


Setup credentials

To run florence, you will need to update the environment variable SERVICE_AUTH_TOKEN in your startup file. Steps for creating the service authentication token can be found in the Zebedee repository.

You will need to restart your terminal for the environment variable change to take effect.

Note that on the first login to a Florence account, a mandatory password update is required.