Raspberry Pi rack running a clustered HashiCorp datacenter infrastructure (Nomad, Vault, Consul)
This project is heavily inspired by the hashpi Ansible scripts from timperrett. Special thanks to him! The hashipi project is not just a copy of his work, it heavily extends it.
To follow along with this build, you will need the following components. Because I'm from Germany, the links go to the German Amazon website. This repo does not explain how to use the HashiCorp software; I've written about that in a German post on my blog.
- 4x Raspberry Pi 3 Model B
- 4x 16GB SDHC cards
- 1x Anker 5-Port powered USB recharger
- 4x Askbork USB-B to USB-micro cable
- 1x Stackable Raspberry Pi Case
- 3x Intermediate plate for the stackable case
- 1x W-Linx 10/100 5 Port Switch (USB Powered)
- 4x short Cat5e cables (e.g. 0.25 m or 0.5 m). I buy mine from kab24.de
- Assemble the boards in the case (follow the case's instructions)
- Connect the USB power cords and the network cables to the boards
These instructions assume you are running Raspbian Lite Stretch (Jessie has not been tested). The included roles require systemd. You can download Raspbian Lite from here. On Windows you can use Win32DiskImager to copy the image to the SD card; on macOS, Etcher from resin.io is said to work well for this.
For the rest of this guide I assume a working Ansible connection from your client to the Pis.
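As a point of reference, such an inventory could look like the following sketch. The hostnames, addresses, and group layout here are placeholders I assume based on the groups mentioned below (master, hashiui), not the repo's actual file:

```ini
# Hypothetical inventory.ini sketch; all hostnames/IPs are placeholders.
[master]
pi1 ansible_host=192.168.1.101

[nodes]
pi2 ansible_host=192.168.1.102
pi3 ansible_host=192.168.1.103
pi4 ansible_host=192.168.1.104

[hashiui]
# an x86 machine, deliberately not one of the Pis (see below)
nuc1 ansible_host=192.168.1.110

[all:vars]
ansible_user=pi
```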
The debug playbook only outputs the default IPv4/IPv6 addresses of the hosts in inventory.ini, but it is easy to expand on this if required.
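Such a debug playbook can be as small as the following sketch; the file name and task wording are assumptions for illustration, not the repo's actual playbook:

```yaml
# Hypothetical debug.yml sketch: print each host's default addresses.
- hosts: all
  gather_facts: true
  tasks:
    - name: Show default IPv4 / IPv6 addresses
      debug:
        msg: >-
          {{ ansible_default_ipv4.address | default('no ipv4') }} /
          {{ ansible_default_ipv6.address | default('no ipv6') }}
```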
The site playbook will do the following things:
- bootstrap the Pis (disable avahi-daemon and Bluetooth)
- install dnsmasq on every Pi
- install Consul on 3 nodes to form the quorum
- install Nomad on the Pi in the master group as the server, and on every other Pi as a client
- install Vault only on the Pi in the master group, using Consul as the secure storage backend
- install Docker only on the Pis that are also Nomad clients
- install hashiui NOT on the Pis, as it requires an x86 CPU; hence the separate hashiui group
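The Vault-on-Consul pairing from the list above typically boils down to a server configuration along these lines. This is a minimal sketch assuming default local ports, not the actual template shipped by the role:

```hcl
# Hypothetical vault.hcl sketch: Consul as the storage backend, TLS
# disabled (matching the no-SSL caveat below; do not do this in
# production).
storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}
```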
Most of the setup is automatic, except for the Vault initialisation. During initialisation, Vault generates 5 unseal keys that are unique to the installation. These keys are required to unseal Vault again after e.g. a reboot. The steps for the first setup are documented in this blog post. In short:
ssh <master>
export VAULT_ADDR="http://$(ip -4 route get 8.8.8.8 | awk '{print $7}' | xargs echo -n):8200"
vault operator init
Be sure to keep the generated keys in a safe place, and absolutely do not check them in anywhere!
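The awk pipeline in the VAULT_ADDR export above simply grabs the 7th whitespace-separated field of `ip -4 route get 8.8.8.8`, which is the Pi's own source address (the value after `src`). A quick local illustration against a sample route line (the line itself is assumed example output, with placeholder addresses):

```shell
# Sample output of `ip -4 route get 8.8.8.8` (values are placeholders):
route_line="8.8.8.8 via 192.168.1.1 dev eth0 src 192.168.1.42 uid 1000"

# Field 7 is the address after "src", i.e. the host's own IPv4 address.
addr=$(echo "$route_line" | awk '{print $7}')

echo "http://${addr}:8200"
```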
To unseal Vault after a restart, repeat the following with different keys until the unseal threshold (3 of the 5 keys by default) is reached:
vault operator unseal -tls-skip-verify
Because this project is a test environment, I'm not testing SSL for the moment; the priority is functionality. For production use you absolutely need SSL! These components are too important to risk anything.
On the master you can find some Nomad example files under /var/lib/nomad/examples.
To run them:
nomad run /var/lib/nomad/examples/nginx.nomad
nomad run /var/lib/nomad/examples/redis.nomad
nomad run /var/lib/nomad/examples/fabio.nomad
This will start 4 nginx and 1 Redis Docker containers, plus 3 Fabio load balancer instances without Docker. Check the result at hashiui:3000. In the Fabio logs, visible in hashiui, you can see that it picks up all nginx servers registered in Consul and serves them through port :9999. You can also check the resource allocation and how much of it the instances use.