Are you hosting your applications on Kubernetes? Is the load distributed across multiple clusters? Do you often switch between kubectl contexts to view and update your applications?
Kubernetes is a great choice for orchestrating containers. Rancher Labs has developed a tool to help us manage multiple clusters, both on-premises and in the cloud. The tool focuses on providing reliable and easy ways to access, deploy, audit, back up, upgrade, and observe applications on our clusters.
Within the Oracle Cloud Infrastructure Always Free service limits, we will explore a way to deploy the Rancher dashboard at zero cost, with high availability and backup and restore capabilities.
- An Oracle Cloud Infrastructure paid account. Resources such as the Network Load Balancer are only available on paid accounts, but still come with free service limits.
- An S3-compatible storage account
- A DNS domain you control
- A reserved public IP, with a DNS record pointing to it
- Working knowledge of Kubernetes, Terraform, and SSH
- Terraform: provisions the infrastructure as code
- Ansible: configuration management and application deployment as code
- K3s: a lightweight Kubernetes distribution designed by Rancher Labs for production workloads anywhere
- Rancher dashboard: a Kubernetes dashboard designed by Rancher Labs
- NGINX Ingress: a Kubernetes ingress controller based on NGINX
- cert-manager: a tool to manage certificates, here issued by Let's Encrypt
The default settings use the maximum limits provided by the Always Free services; customize them to fit your needs.
In Oracle Cloud Infrastructure, we are going to use Terraform to deploy a Virtual Cloud Network (VCN) with two subnets.
- The first subnet is public and has an Internet Gateway; it will contain the Network Load Balancer and the Oracle Cloud Bastion.
- The second subnet is private and has a NAT Gateway; all VMs will be hosted in this subnet.
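The network layout above could be sketched in Terraform roughly as follows. This is a minimal illustration, not the exact module used here: the resource names, CIDR ranges, and omitted route tables and security lists are assumptions.

```hcl
resource "oci_core_vcn" "rancher" {
  compartment_id = var.compartment_id
  cidr_blocks    = ["10.0.0.0/16"]
  display_name   = "rancher-vcn"
}

resource "oci_core_internet_gateway" "public" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.rancher.id
}

resource "oci_core_nat_gateway" "private" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.rancher.id
}

# Public subnet: hosts the Network Load Balancer and the Bastion
resource "oci_core_subnet" "public" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.rancher.id
  cidr_block     = "10.0.0.0/24"
}

# Private subnet: hosts all K3s VMs, no public IPs allowed
resource "oci_core_subnet" "private" {
  compartment_id             = var.compartment_id
  vcn_id                     = oci_core_vcn.rancher.id
  cidr_block                 = "10.0.1.0/24"
  prohibit_public_ip_on_vnic = true
}
```

Each subnet would additionally need a route table pointing at its respective gateway.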
We will provision two pools of instances: one for the K3s master nodes and one for the K3s agent nodes. The VMs will be spread across the data center's fault domains to increase availability. SSH access to these VMs will only be possible through an Oracle Cloud Bastion.
HTTP and HTTPS traffic will be load balanced across the K3s agent nodes by the Network Load Balancer.
Unfortunately, we could not use the Network Load Balancer to distribute requests between the K3s master nodes for high availability: it cannot route requests from a VM back to that same VM. Instead, we provision an additional private IP attached to the first K3s master node. This IP is shared among the master nodes using keepalived. A dynamic IAM group and an IAM policy allow the master nodes to manage the allocation of the shared IP themselves.
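A minimal keepalived configuration for this failover could look like the sketch below. The interface name, virtual IP, and the helper script that moves the OCI secondary private IP are all assumptions for illustration:

```
vrrp_instance k3s_vip {
  state BACKUP          # every node starts as BACKUP; VRRP elects the MASTER
  interface ens3        # assumed name of the private-subnet interface
  virtual_router_id 51
  priority 100
  advert_int 1
  virtual_ipaddress {
    10.0.1.100/24       # the shared secondary private IP
  }
  # hypothetical script calling the OCI API to reassign the secondary
  # private IP to this node's VNIC when it wins the election
  notify_master "/usr/local/bin/assign-vip.sh"
}
```

The `notify_master` hook is where the IAM policy mentioned above comes into play: it lets the instance itself call the OCI API to move the IP.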
Oracle Cloud added Ansible to the Resource Manager Terraform host. We will use Terraform to prepare the inventory and to generate a Bastion session that allows connections to the VMs.
Ansible will be executed using the Terraform local-exec provisioner from a null_resource.
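The wiring between Terraform and Ansible could look roughly like this sketch; the `local_file.inventory` resource and playbook name are assumptions:

```hcl
# Re-run the playbook whenever the generated inventory changes
resource "null_resource" "ansible" {
  triggers = {
    inventory = local_file.inventory.id # assumed inventory file resource
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i ${local_file.inventory.filename} site.yml"
  }
}
```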
The playbook starts by deploying K3s on the machines. The first master node is then used to deploy applications into the newly created Kubernetes cluster.
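The overall playbook structure might be sketched as below; the group names and role names are illustrative assumptions, not the actual repository layout:

```yaml
# Phase 1: install K3s on every node
- name: Install K3s
  hosts: all
  roles:
    - k3s

# Phase 2: deploy the applications from the first master only
- name: Deploy applications
  hosts: masters[0]
  roles:
    - cert_manager
    - ingress_nginx
    - rancher
```

Running the deployment from a single master avoids the roles racing each other against the Kubernetes API.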
In today's example, we set up a highly available K3s cluster on Oracle Cloud and deployed an application using Terraform and Ansible.
To learn more about the deployed resources, see the following resources: