Setup Docker Swarm test environment using Multipass
Multipass is a tool to generate cloud-style Ubuntu VMs on Linux, macOS, and Windows.
- Install Multipass

  The Multipass installation method differs depending on your platform.
- Prepare a Cloud-init YAML file `docker-cloud-config.yaml` to facilitate installation of Docker Engine on launching a new VM:

  ```yaml
  #cloud-config
  apt:
    sources:
      docker.list:
        source: deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable
        keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
  packages:
    - apt-transport-https
    - ca-certificates
    - curl
    - gnupg-agent
    - software-properties-common
    - docker-ce
    - docker-ce-cli
    - containerd.io
  # Enable ipv4 forwarding, required on CIS hardened machines
  write_files:
    - path: /etc/sysctl.d/enabled_ipv4_forwarding.conf
      content: |
        net.ipv4.conf.all.forwarding=1
  # create the docker group
  groups:
    - docker
  # Add default auto created user to docker group
  system_info:
    default_user:
      groups: [docker]
  ```

  On Linux, `docker-cloud-config.yaml` has to be located in the user's home directory to be accessible by the Multipass Snap application.
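Cloud-init treats user data as cloud-config only if the file begins with the exact `#cloud-config` header line, so a quick sanity check before launching can save a failed provisioning round. A minimal sketch (the `/tmp` path and the trimmed-down file content are illustration only, not the full file above):

```shell
# Write a trimmed-down example of the cloud-init file (illustration only)
cat > /tmp/docker-cloud-config.yaml <<'EOF'
#cloud-config
packages:
  - docker-ce
EOF

# The first line must be exactly "#cloud-config", or cloud-init
# will not interpret the file as cloud-config user data
if head -n1 /tmp/docker-cloud-config.yaml | grep -qx '#cloud-config'; then
  echo "header OK"
else
  echo "missing #cloud-config header" >&2
fi
```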
- Create and start a new VM with installed Docker Engine for each node (e.g.):

  ```
  $ multipass launch -n arc-node -d 10G -m 4G --cloud-init docker-cloud-config.yaml
  Launched: arc-node
  $ multipass launch -n db-node -d 10G -m 4G --cloud-init docker-cloud-config.yaml
  Launched: db-node
  $ multipass launch -n elk-node -d 10G -m 4G --cloud-init docker-cloud-config.yaml
  Launched: elk-node
  ```

  with

  - `-d 10G` - allocating 10 GB of disk space for the created VM (default: 5G)
  - `-m 4G` - allocating 4 GB of memory for the created VM (default: 1G)

  See

  ```
  $ multipass launch -h
  Usage: multipass launch [options] [[<remote:>]<image> | <url>]
  Create and start a new instance.

  Options:
    -h, --help             Displays help on commandline options
    -v, --verbose          Increase logging verbosity. Repeat the 'v' in the
                           short option for more detail. Maximum verbosity is
                           obtained with 4 (or more) v's, i.e. -vvvv.
    -c, --cpus <cpus>      Number of CPUs to allocate. Minimum: 1, default: 1.
    -d, --disk <disk>      Disk space to allocate. Positive integers, in bytes,
                           or with K, M, G suffix. Minimum: 512M, default: 5G.
    -m, --mem <mem>        Amount of memory to allocate. Positive integers, in
                           bytes, or with K, M, G suffix. Minimum: 128M,
                           default: 1G.
    -n, --name <name>      Name for the instance. If it is 'primary' (the
                           configured primary instance name), the user's home
                           directory is mounted inside the newly launched
                           instance, in 'Home'.
    --cloud-init <file> | <url>
                           Path or URL to a user-data cloud-init configuration,
                           or '-' for stdin
    --network <spec>       Add a network interface to the instance, where
                           <spec> is in the "key=value,key=value" format, with
                           the following keys available:
                            name: the network to connect to (required), use the
                            networks command for a list of possible values, or
                            use 'bridged' to use the interface configured via
                            `multipass set local.bridged-network`.
                            mode: auto|manual (default: auto)
                            mac: hardware address (default: random).
                           You can also use a shortcut of "<name>" to mean
                           "name=<name>".
    --bridged              Adds one `--network bridged` network.
    --mount <local-path>:<instance-path>
                           Mount a local directory inside the instance. If
                           <instance-path> is omitted, the mount point will be
                           the same as the absolute path of <local-path>
    --timeout <timeout>    Maximum time, in seconds, to wait for the command to
                           complete. Note that some background operations may
                           continue beyond that. By default, instance startup
                           and initialization is limited to 5 minutes each.

  Arguments:
    image                  Optional image to launch. If omitted, then the
                           default Ubuntu LTS will be used.
                           <remote> can be either 'release' or 'daily'. If
                           <remote> is omitted, 'release' will be used.
                           <image> can be a partial image hash or an Ubuntu
                           release version, codename or alias.
                           <url> is a custom image URL that is in http://,
                           https://, or file:// format.
  ```

  for all available options.
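The three `multipass launch` invocations above can also be driven from a loop. A sketch, shown as a dry run: the leading `echo` only prints each command; remove it to actually launch the VMs.

```shell
# Dry run: print the launch command for each node (drop "echo" to execute)
for node in arc-node db-node elk-node; do
  echo multipass launch -n "$node" -d 10G -m 4G --cloud-init docker-cloud-config.yaml
done
```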
- List created instances and add entries in your DNS or hosts file:

  ```
  $ multipass list
  Name                    State             IPv4             Image
  arc-node                Running           10.109.53.4      Ubuntu 20.04 LTS
                                            172.19.0.1
                                            172.17.0.1
  db-node                 Running           10.109.53.75     Ubuntu 20.04 LTS
                                            172.17.0.1
                                            172.19.0.1
  elk-node                Running           10.109.53.139    Ubuntu 20.04 LTS
                                            172.19.0.1
                                            172.17.0.1
  ```

  ```
  $ sudo -i
  # cat >> /etc/hosts << EOF
  10.109.53.4 arc-node
  10.109.53.75 db-node
  10.109.53.139 elk-node
  EOF
  # exit
  logout
  ```
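The hosts-file entries can also be generated from `multipass list --format csv` instead of being typed by hand. A sketch, simulated below with sample CSV matching the instances above; the column layout (Name,State,IPv4,Release) is an assumption based on common Multipass versions, so check the header your version prints before relying on it, and replace the sample variable with the real command output.

```shell
# Simulated `multipass list --format csv` output (first line is the header)
csv='Name,State,IPv4,Release
arc-node,Running,10.109.53.4,Ubuntu 20.04 LTS
db-node,Running,10.109.53.75,Ubuntu 20.04 LTS
elk-node,Running,10.109.53.139,Ubuntu 20.04 LTS'

# Skip the header and print "<IPv4> <Name>" lines ready to append to /etc/hosts
echo "$csv" | awk -F, 'NR > 1 { printf "%s %s\n", $3, $1 }'
```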
- Transfer node specific configuration scripts (e.g. `init-arc-node.sh`, `init-db-node.sh`, `init-elk-node.sh`, `docker-compose.yml`) to the newly created VMs (e.g.):

  ```
  $ multipass transfer init-arc-node.sh docker-compose.yml docker-compose.env arc-node:
  $ multipass transfer init-db-node.sh db-node:
  $ multipass transfer init-elk-node.sh elk-node:
  ```
- Open a shell prompt on each instance and run the node specific initialization script as root:

  ```
  $ multipass shell arc-node
  Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.4.0-124-generic x86_64)
  ...
  ubuntu@arc-node$ sudo bash -v ~ubuntu/init-arc-node.sh
  ...
  ```

  ```
  $ multipass shell db-node
  Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.4.0-124-generic x86_64)
  ...
  ubuntu@db-node$ sudo bash -v ~ubuntu/init-db-node.sh
  ...
  ```

  ```
  $ multipass shell elk-node
  Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.4.0-124-generic x86_64)
  ...
  ubuntu@elk-node$ sudo bash -v ~ubuntu/init-elk-node.sh
  ...
  ```
- Initialize a Docker Swarm in the shell prompt of the node acting as swarm manager:

  ```
  ubuntu@arc-node:~$ docker swarm init
  Swarm initialized: current node (reney80x2dbg82aqh5b5130zc) is now a manager.

  To add a worker to this swarm, run the following command:

      docker swarm join --token SWMTKN-1-4rceln8o0wwcmoqjv67wg17svij7mzj3kkyhrnbk4f1tejz8eu-3lxvhgmmzv1gbmmvpkir4wz9m 10.109.53.4:2377

  To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
  ```

  and add the other nodes as workers to the swarm by invoking the above command in their shell prompts:

  ```
  ubuntu@db-node:~$ docker swarm join --token SWMTKN-1-4rceln8o0wwcmoqjv67wg17svij7mzj3kkyhrnbk4f1tejz8eu-3lxvhgmmzv1gbmmvpkir4wz9m 10.109.53.4:2377
  This node joined a swarm as a worker.
  ```

  ```
  ubuntu@elk-node:~$ docker swarm join --token SWMTKN-1-4rceln8o0wwcmoqjv67wg17svij7mzj3kkyhrnbk4f1tejz8eu-3lxvhgmmzv1gbmmvpkir4wz9m 10.109.53.4:2377
  This node joined a swarm as a worker.
  ```
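Instead of copy-pasting the join command by hand, it can be scraped from the manager's output (on a live manager, `docker swarm join-token worker` reprints the same command at any time). A sketch, simulated here with the `docker swarm init` output captured above fed in through a here-document; the `sed` extraction is an assumption about that output layout, not a documented interface:

```shell
# Extract the indented "docker swarm join --token ..." line from the
# simulated `docker swarm init` output and strip its leading spaces
join_cmd=$(sed -n 's/^ *\(docker swarm join --token .*\)$/\1/p' <<'EOF'
Swarm initialized: current node (reney80x2dbg82aqh5b5130zc) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4rceln8o0wwcmoqjv67wg17svij7mzj3kkyhrnbk4f1tejz8eu-3lxvhgmmzv1gbmmvpkir4wz9m 10.109.53.4:2377
EOF
)
# $join_cmd now holds the full command to run on each worker node
echo "$join_cmd"
```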
- Now you are ready to create services individually or deploy stacks of services at the node acting as swarm manager (e.g.):

  ```
  ubuntu@arc-node:~$ docker stack deploy -c docker-compose.yml dcm4che
  Creating network dcm4che_default
  Creating service dcm4che_arc
  Creating service dcm4che_elasticsearch
  Creating service dcm4che_kibana
  Creating service dcm4che_logstash
  Creating service dcm4che_ldap
  Creating service dcm4che_keycloak
  Creating service dcm4che_oauth2-proxy
  Creating service dcm4che_db
  ```
- You may stop and restart created VMs by (e.g.):

  ```
  $ multipass stop arc-node db-node elk-node
  $ multipass start arc-node db-node elk-node
  ```
- You may delete and finally purge created VMs by (e.g.):

  ```
  $ multipass delete -p arc-node db-node elk-node
  ```