+```
+
+![Awesome Rancher screenshot](./rancher-clusters-imported.png?raw=true "Clusters imported")
+
## Terraform module documentation
+
### Requirements
| Name | Version |
|------|---------|
-| [terraform](#requirement\_terraform) | >= 1.3 |
+| [terraform](#requirement\_terraform) | >= 1.9 |
| [equinix](#requirement\_equinix) | >= 1.14.2 |
### Providers
| Name | Version |
|------|---------|
-| [equinix](#provider\_equinix) | >= 1.14.2 |
+| [equinix](#provider\_equinix) | 1.14.3 |
### Modules
| Name | Source | Version |
|------|--------|---------|
-| [k3s\_cluster](#module\_k3s\_cluster) | ./modules/k3s_cluster | n/a |
+| [kube\_cluster](#module\_kube\_cluster) | ./modules/kube_cluster | n/a |
### Resources
@@ -353,18 +514,19 @@ sv-k3s-aio Ready control-plane,master 9m20s v1.26.5+k3s1
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
-| [metal\_project\_id](#input\_metal\_project\_id) | Equinix Metal Project ID | `string` | n/a | yes |
-| [clusters](#input\_clusters) | K3s cluster definition | <pre>list(object({<br>    name = optional(string, "K3s demo cluster")<br>    metro = optional(string, "FR")<br>    plan_control_plane = optional(string, "c3.small.x86")<br>    plan_node = optional(string, "c3.small.x86")<br>    node_count = optional(number, 0)<br>    k3s_ha = optional(bool, false)<br>    os = optional(string, "debian_11")<br>    control_plane_hostnames = optional(string, "k3s-cp")<br>    node_hostnames = optional(string, "k3s-node")<br>    custom_k3s_token = optional(string, "")<br>    ip_pool_count = optional(number, 0)<br>    k3s_version = optional(string, "")<br>    metallb_version = optional(string, "")<br>  }))</pre> | <pre>[<br>  {}<br>]</pre> | no |
+| [clusters](#input\_clusters) | Cluster definition | <pre>list(object({<br>    name = optional(string, "Demo cluster")<br>    metro = optional(string, "FR")<br>    plan_control_plane = optional(string, "c3.small.x86")<br>    plan_node = optional(string, "c3.small.x86")<br>    node_count = optional(number, 0)<br>    ha = optional(bool, false)<br>    os = optional(string, "debian_11")<br>    control_plane_hostnames = optional(string, "cp")<br>    node_hostnames = optional(string, "node")<br>    custom_token = optional(string, "")<br>    ip_pool_count = optional(number, 0)<br>    kube_version = optional(string, "")<br>    metallb_version = optional(string, "")<br>    rancher_flavor = optional(string, "")<br>    rancher_version = optional(string, "")<br>    custom_rancher_password = optional(string, "")<br>  }))</pre> | <pre>[<br>  {}<br>]</pre> | no |
| [deploy\_demo](#input\_deploy\_demo) | Deploys a simple demo using a global IP as ingress and a hello-kubernetes pods | `bool` | `false` | no |
| [global\_ip](#input\_global\_ip) | Enables a global anycast IPv4 that will be shared for all clusters in all metros | `bool` | `false` | no |
+| [metal\_project\_id](#input\_metal\_project\_id) | Equinix Metal Project ID | `string` | n/a | yes |
### Outputs
| Name | Description |
|------|-------------|
| [anycast\_ip](#output\_anycast\_ip) | Global IP shared across Metros |
+| [cluster\_details](#output\_cluster\_details) | List of Clusters => K8s details |
| [demo\_url](#output\_demo\_url) | URL of the demo application to demonstrate a global IP shared across Metros |
-| [k3s\_api](#output\_k3s\_api) | List of Clusters => K3s APIs |
+| [rancher\_urls](#output\_rancher\_urls) | List of Clusters => Rancher details |
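
The renamed outputs can be scripted against with `jq`. A minimal sketch, assuming only the output names from the table above; the JSON is a hand-made sample of the `terraform output -json` shape (the cluster name and URL are illustrative, not real values):

```shell
# Hypothetical sample of `terraform output -json` for this module:
# top-level keys are the output names, each wrapping its data in .value.
OUTPUT='{"rancher_urls":{"value":{"FR DEV Cluster":{"rancher_url":"https://rancher.192.0.2.10.sslip.io"}}}}'

# Pick the first cluster exposing a Rancher URL, then read its URL
FIRST=$(echo "${OUTPUT}" | jq -r 'first(.rancher_urls.value | keys[])')
URL=$(echo "${OUTPUT}" | jq -r ".rancher_urls.value[\"${FIRST}\"].rancher_url")
echo "${FIRST}: ${URL}"
```

On a real deployment, replace the sample `OUTPUT` with `OUTPUT=$(terraform output -json)`.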
## Contributing
diff --git a/examples/demo_cluster/README.md b/examples/demo_cluster/README.md
index 6b5f6ec..3040d1d 100644
--- a/examples/demo_cluster/README.md
+++ b/examples/demo_cluster/README.md
@@ -1,6 +1,6 @@
-# SiDemo Cluster Example
+# Demo Cluster Examples
-This example demonstrates usage of the Equinix Metal K3s module. A Demo application is installed.
+This example demonstrates usage of the Equinix Metal K3s/RKE2 module. A Demo application is installed.
## Usage
@@ -36,15 +36,15 @@ No resources.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
-| [metal\_auth\_token](#input\_metal\_auth\_token) | Your Equinix Metal API key | `string` | n/a | yes |
-| [metal\_project\_id](#input\_metal\_project\_id) | Your Equinix Metal Project ID | `string` | n/a | yes |
-| [clusters](#input\_clusters) | K3s cluster definition | <pre>list(object({<br>    name = optional(string, "K3s demo cluster")<br>    metro = optional(string, "FR")<br>    plan_control_plane = optional(string, "c3.small.x86")<br>    plan_node = optional(string, "c3.small.x86")<br>    node_count = optional(number, 0)<br>    k3s_ha = optional(bool, false)<br>    os = optional(string, "debian_11")<br>    control_plane_hostnames = optional(string, "k3s-cp")<br>    node_hostnames = optional(string, "k3s-node")<br>    custom_k3s_token = optional(string, "")<br>    ip_pool_count = optional(number, 0)<br>    k3s_version = optional(string, "")<br>    metallb_version = optional(string, "")<br>  }))</pre> | <pre>[<br>  {}<br>]</pre> | no |
+| [clusters](#input\_clusters) | Cluster definition | <pre>list(object({<br>    name = optional(string, "Demo cluster")<br>    metro = optional(string, "FR")<br>    plan_control_plane = optional(string, "c3.small.x86")<br>    plan_node = optional(string, "c3.small.x86")<br>    node_count = optional(number, 0)<br>    ha = optional(bool, false)<br>    os = optional(string, "debian_11")<br>    control_plane_hostnames = optional(string, "cp")<br>    node_hostnames = optional(string, "node")<br>    custom_token = optional(string, "")<br>    ip_pool_count = optional(number, 0)<br>    kube_version = optional(string, "")<br>    metallb_version = optional(string, "")<br>    rancher_version = optional(string, "")<br>    rancher_flavor = optional(string, "")<br>    custom_rancher_password = optional(string, "")<br>  }))</pre> | <pre>[<br>  {}<br>]</pre> | no |
| [deploy\_demo](#input\_deploy\_demo) | Deploys a simple demo using a global IP as ingress and a hello-kubernetes pods | `bool` | `false` | no |
| [global\_ip](#input\_global\_ip) | Enables a global anycast IPv4 that will be shared for all clusters in all metros | `bool` | `false` | no |
+| [metal\_auth\_token](#input\_metal\_auth\_token) | Your Equinix Metal API key | `string` | n/a | yes |
+| [metal\_project\_id](#input\_metal\_project\_id) | Your Equinix Metal Project ID | `string` | n/a | yes |
### Outputs
| Name | Description |
|------|-------------|
-| [demo\_cluster](#output\_demo\_cluster) | Passthrough of the root module output |
+| [clusters\_output](#output\_clusters\_output) | Passthrough of the root module output |
diff --git a/examples/demo_cluster/clusters-to-rancher.sh b/examples/demo_cluster/clusters-to-rancher.sh
new file mode 100755
index 0000000..16bc9fb
--- /dev/null
+++ b/examples/demo_cluster/clusters-to-rancher.sh
@@ -0,0 +1,118 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+usage() {
+ echo "Usage: $0 -p <password>"
+ exit 1
+}
+
+die() {
+ echo "${1}" 1>&2
+ exit "${2}"
+}
+
+prechecks() {
+ command -v kubectl >/dev/null 2>&1 || die "Error: kubectl not found" 1
+ command -v curl >/dev/null 2>&1 || die "Error: curl not found" 1
+ command -v jq >/dev/null 2>&1 || die "Error: jq not found" 1
+ command -v scp >/dev/null 2>&1 || die "Error: scp not found" 1
+}
+
+wait_for_rancher() {
+ while ! curl -k "${RANCHERURL}/ping" >/dev/null 2>&1; do sleep 1; done
+}
+
+bootstrap_rancher() {
+ # Get token
+ TOKEN=$(curl -sk -X POST ${RANCHERURL}/v3-public/localProviders/local?action=login -H 'content-type: application/json' -d "{\"username\":\"admin\",\"password\":\"${RANCHERPASS}\"}" | jq -r .token)
+
+ # Set password
+ curl -q -sk ${RANCHERURL}/v3/users?action=changepassword -H 'content-type: application/json' -H "Authorization: Bearer ${TOKEN}" -d "{\"currentPassword\":\"${RANCHERPASS}\",\"newPassword\":\"${PASSWORD}\"}"
+
+ # Create a temporary API token (ttl=60 minutes)
+ APITOKEN=$(curl -sk ${RANCHERURL}/v3/token -H 'content-type: application/json' -H "Authorization: Bearer ${TOKEN}" -d '{"type":"token","description":"automation","ttl":3600000}' | jq -r .token)
+
+ # Set the Rancher URL
+ curl -q -sk ${RANCHERURL}/v3/settings/server-url -H 'content-type: application/json' -H "Authorization: Bearer ${APITOKEN}" -X PUT -d "{\"name\":\"server-url\",\"value\":\"${RANCHERURL}\"}"
+}
+
+get_cluster_kubeconfig() {
+ cluster="${1}"
+ FIRSTHOST=$(echo "${OUTPUT}" | jq -r "first(.clusters_output.value.cluster_details[\"${cluster}\"].nodes[].node_public_ipv4)")
+ API=$(echo "${OUTPUT}" | jq -r ".clusters_output.value.cluster_details[\"${cluster}\"].api")
+ KUBECONFIG="$(mktemp)"
+ export KUBECONFIG
+ scp -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "root@${FIRSTHOST}:/root/.kube/config" "${KUBECONFIG}"
+ # GNU sed (Linux) and BSD sed (macOS) differ on in-place edits; branch on
+ # `uname -s`, since `uname -o` is unsupported on macOS and a false `[ ... ] &&`
+ # list would abort the script under `set -e`
+ case "$(uname -s)" in
+ Linux) sed -i "s/127.0.0.1/${API}/g" "${KUBECONFIG}" ;;
+ Darwin) sed -i "" "s/127.0.0.1/${API}/g" "${KUBECONFIG}" ;;
+ esac
+ chmod 600 "${KUBECONFIG}"
+ echo "${KUBECONFIG}"
+}
+
+clusters_to_rancher() {
+ RANCHERKUBE=$(get_cluster_kubeconfig "${RANCHERCLUSTER}")
+
+ IFS=$'\n'
+ for clustername in ${OTHERCLUSTERS}; do
+ export KUBECONFIG=${RANCHERKUBE}
+ normalizedname=$(echo "${clustername}" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9 ]/-/g' | sed 's/ /-/g' | sed 's/^-*\|-*$//g')
+ cat <<-EOF | kubectl apply -f - >/dev/null 2>&1
+ apiVersion: provisioning.cattle.io/v1
+ kind: Cluster
+ metadata:
+ name: ${normalizedname}
+ namespace: fleet-default
+ spec: {}
+ EOF
+ MANIFEST="$(kubectl get clusterregistrationtokens.management.cattle.io -n "$(kubectl get clusters.provisioning.cattle.io -n fleet-default "${normalizedname}" -o jsonpath='{.status.clusterName}')" default-token -o jsonpath='{.status.manifestUrl}')"
+ DESTKUBECONFIG=$(get_cluster_kubeconfig "${clustername}")
+ curl --insecure -sfL "${MANIFEST}" | kubectl --kubeconfig "${DESTKUBECONFIG}" apply -f - >/dev/null 2>&1
+ rm -f "${DESTKUBECONFIG}"
+ done
+
+ rm -f "${RANCHERKUBE}"
+}
+
+PASSWORD=""
+while getopts ":p:" opt; do
+ case $opt in
+ p)
+ PASSWORD=$OPTARG
+ ;;
+ \?)
+ echo "Invalid option: -$OPTARG" >&2
+ usage
+ ;;
+ :)
+ echo "Option -$OPTARG requires an argument." >&2
+ usage
+ ;;
+ esac
+done
+
+if [ -z "$PASSWORD" ]; then
+ echo "Error: Password is required." 1>&2
+ usage
+fi
+
+if [ ${#PASSWORD} -lt 12 ]; then
+ die "Error: Password must be at least 12 characters long." 1
+fi
+
+[ ! -f "./terraform.tfstate" ] && die "Error: ./terraform.tfstate does not exist." 1
+
+OUTPUT=$(terraform output -json)
+
+[ "${OUTPUT}" == "{}" ] && die "Error: terraform output is '{}'." 1
+
+RANCHERCLUSTER=$(echo "${OUTPUT}" | jq -r 'first(.clusters_output.value.rancher_urls | keys[])')
+RANCHERURL=$(echo "${OUTPUT}" | jq -r ".clusters_output.value.rancher_urls[\"${RANCHERCLUSTER}\"].rancher_url")
+RANCHERPASS=$(echo "${OUTPUT}" | jq -r ".clusters_output.value.rancher_urls[\"${RANCHERCLUSTER}\"].rancher_initial_password_base64" | base64 -d)
+OTHERCLUSTERS=$(echo "${OUTPUT}" | jq -r ".clusters_output.value.cluster_details | keys[] | select(. != \"${RANCHERCLUSTER}\")")
+
+prechecks
+wait_for_rancher
+bootstrap_rancher
+clusters_to_rancher
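
The `normalizedname` step in `clusters_to_rancher` turns arbitrary cluster names into valid Kubernetes object names. A standalone sketch of that pipeline (GNU sed syntax, as used in the script):

```shell
# Lowercase, replace non-alphanumerics with '-', join words with '-',
# then trim any leading/trailing dashes
normalize() {
  echo "${1}" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9 ]/-/g' | sed 's/ /-/g' | sed 's/^-*\|-*$//g'
}

normalize "FR DEV Cluster"   # -> fr-dev-cluster
normalize "SV Production"    # -> sv-production
```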
diff --git a/examples/demo_cluster/outputs.tf b/examples/demo_cluster/outputs.tf
index bcf7ada..8432eb1 100644
--- a/examples/demo_cluster/outputs.tf
+++ b/examples/demo_cluster/outputs.tf
@@ -1,4 +1,4 @@
-output "demo_cluster" {
+output "clusters_output" {
description = "Passthrough of the root module output"
value = module.demo
}
diff --git a/examples/demo_cluster/terraform.tfvars.example b/examples/demo_cluster/terraform.tfvars.example
index 4c7f268..a374c00 100644
--- a/examples/demo_cluster/terraform.tfvars.example
+++ b/examples/demo_cluster/terraform.tfvars.example
@@ -1,11 +1,26 @@
metal_auth_token="your_token_here" #This must be a user API token
metal_project_id="your_project_id"
-clusters = [
+clusters = [
{
- name = "Your cluster name"
+ name = "FR DEV Cluster"
+ rancher_flavor = "stable"
+ ip_pool_count = 1
+ kube_version = "v1.29.9+k3s1"
},
{
- name = "Your cluster name"
- metro = "SV"
+ name = "SV DEV Cluster"
+ metro = "SV"
+ node_count = 1
+ kube_version = "v1.30.3+rke2r1"
+ },
+ {
+ name = "SV Production"
+ ip_pool_count = 4
+ ha = true
+ metro = "SV"
+ node_count = 3
}
]
+
+global_ip = true
+deploy_demo = true
diff --git a/examples/demo_cluster/variables.tf b/examples/demo_cluster/variables.tf
index 527256d..4bbddb0 100644
--- a/examples/demo_cluster/variables.tf
+++ b/examples/demo_cluster/variables.tf
@@ -22,21 +22,24 @@ variable "deploy_demo" {
}
variable "clusters" {
- description = "K3s cluster definition"
+ description = "Cluster definition"
type = list(object({
- name = optional(string, "K3s demo cluster")
+ name = optional(string, "Demo cluster")
metro = optional(string, "FR")
plan_control_plane = optional(string, "c3.small.x86")
plan_node = optional(string, "c3.small.x86")
node_count = optional(number, 0)
- k3s_ha = optional(bool, false)
+ ha = optional(bool, false)
os = optional(string, "debian_11")
- control_plane_hostnames = optional(string, "k3s-cp")
- node_hostnames = optional(string, "k3s-node")
- custom_k3s_token = optional(string, "")
+ control_plane_hostnames = optional(string, "cp")
+ node_hostnames = optional(string, "node")
+ custom_token = optional(string, "")
ip_pool_count = optional(number, 0)
- k3s_version = optional(string, "")
+ kube_version = optional(string, "")
metallb_version = optional(string, "")
+ rancher_version = optional(string, "")
+ rancher_flavor = optional(string, "")
+ custom_rancher_password = optional(string, "")
}))
default = [{}]
}
diff --git a/main.tf b/main.tf
index d64c0e7..49ad487 100644
--- a/main.tf
+++ b/main.tf
@@ -1,15 +1,13 @@
locals {
global_ip_cidr = var.global_ip ? equinix_metal_reserved_ip_block.global_ip[0].cidr_notation : ""
- # tflint-ignore: terraform_unused_declarations
- validate_demo = (var.deploy_demo == true && var.global_ip == false) ? tobool("Demo is only deployed if global_ip = true.") : true
}
################################################################################
-# K3S Cluster In-line Module
+# K8s Cluster In-line Module
################################################################################
-module "k3s_cluster" {
- source = "./modules/k3s_cluster"
+module "kube_cluster" {
+ source = "./modules/kube_cluster"
for_each = { for cluster in var.clusters : cluster.name => cluster }
@@ -18,14 +16,17 @@ module "k3s_cluster" {
plan_control_plane = each.value.plan_control_plane
plan_node = each.value.plan_node
node_count = each.value.node_count
- k3s_ha = each.value.k3s_ha
+ ha = each.value.ha
os = each.value.os
control_plane_hostnames = each.value.control_plane_hostnames
node_hostnames = each.value.node_hostnames
- custom_k3s_token = each.value.custom_k3s_token
- k3s_version = each.value.k3s_version
+ custom_token = each.value.custom_token
+ kube_version = each.value.kube_version
metallb_version = each.value.metallb_version
ip_pool_count = each.value.ip_pool_count
+ rancher_flavor = each.value.rancher_flavor
+ rancher_version = each.value.rancher_version
+ custom_rancher_password = each.value.custom_rancher_password
metal_project_id = var.metal_project_id
deploy_demo = var.deploy_demo
global_ip_cidr = local.global_ip_cidr
diff --git a/modules/k3s_cluster/outputs.tf b/modules/k3s_cluster/outputs.tf
deleted file mode 100644
index 7e6cb62..0000000
--- a/modules/k3s_cluster/outputs.tf
+++ /dev/null
@@ -1,4 +0,0 @@
-output "k3s_api_ip" {
- value = try(equinix_metal_reserved_ip_block.api_vip_addr[0].address, equinix_metal_device.all_in_one[0].network[0].address)
- description = "K3s API IPs"
-}
diff --git a/modules/k3s_cluster/templates/user-data.tftpl b/modules/k3s_cluster/templates/user-data.tftpl
deleted file mode 100644
index 0fb1ff4..0000000
--- a/modules/k3s_cluster/templates/user-data.tftpl
+++ /dev/null
@@ -1,388 +0,0 @@
-#!/usr/bin/env bash
-set -euo pipefail
-
-wait_for_k3s_api(){
- # Wait for the node to be available, meaning the K8s API is available
- while ! kubectl wait --for condition=ready node $(cat /etc/hostname | tr '[:upper:]' '[:lower:]') --timeout=60s; do sleep 2 ; done
-}
-
-install_bird(){
- # Install bird
- apt update && apt install bird jq -y
-
- # In order to configure bird, the metadata information is required.
- # BGP info can take a few seconds to be populated, retry if that's the case
- INTERNAL_IP="null"
- while [ $${INTERNAL_IP} == "null" ]; do
- echo "BGP data still not available..."
- sleep 5
- METADATA=$(curl -s https://metadata.platformequinix.com/metadata)
- INTERNAL_IP=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].customer_ip')
- done
- PEER_IP_1=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].peer_ips[0]')
- PEER_IP_2=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].peer_ips[1]')
- ASN=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].customer_as')
- ASN_AS=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].peer_as')
- MULTIHOP=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].multihop')
- GATEWAY=$(echo $${METADATA} | jq -r '.network.addresses[] | select(.public == true and .address_family == 4) | .gateway')
-
- # Generate the bird configuration based on the metadata values
- # https://deploy.equinix.com/developers/guides/configuring-bgp-with-bird/
- cat <<-EOF >/etc/bird/bird.conf
- router id $${INTERNAL_IP};
-
- protocol direct {
- interface "lo";
- }
-
- protocol kernel {
- persist;
- scan time 60;
- import all;
- export all;
- }
-
- protocol device {
- scan time 60;
- }
-
- protocol static {
- route $${PEER_IP_1}/32 via $${GATEWAY};
- route $${PEER_IP_2}/32 via $${GATEWAY};
- }
-
- filter metal_bgp {
- accept;
- }
-
- protocol bgp neighbor_v4_1 {
- export filter metal_bgp;
- local as $${ASN};
- multihop;
- neighbor $${PEER_IP_1} as $${ASN_AS};
- }
-
- protocol bgp neighbor_v4_2 {
- export filter metal_bgp;
- local as $${ASN};
- multihop;
- neighbor $${PEER_IP_2} as $${ASN_AS};
- }
- EOF
-
- # Wait for K3s to be up, otherwise the second and third control plane nodes will try to join localhost
- wait_for_k3s_api
-
- # Configure the BGP interface
- # https://deploy.equinix.com/developers/guides/configuring-bgp-with-bird/
- if ! grep -q 'lo:0' /etc/network/interfaces; then
- cat <<-EOF >>/etc/network/interfaces
-
- auto lo:0
- iface lo:0 inet static
- address ${API_IP}
- netmask 255.255.255.255
- EOF
- ifup lo:0
- fi
-
- # Enable IP forward for bird
- # TODO: Check if this is done automatically with K3s, it doesn't hurt however
- echo "net.ipv4.ip_forward=1" | tee /etc/sysctl.d/99-ip-forward.conf
- sysctl --load /etc/sysctl.d/99-ip-forward.conf
-
- # Debian usually starts the service after being installed, but just in case
- systemctl enable bird
- systemctl restart bird
-}
-
-install_metallb(){
- apt update && apt install -y curl jq
-
-%{ if metallb_version != "" ~}
- export METALLB_VERSION=${metallb_version}
-%{ else ~}
- export METALLB_VERSION=$(curl --silent "https://api.github.com/repos/metallb/metallb/releases/latest" | jq -r .tag_name)
-%{ endif ~}
-
- # Wait for K3s to be up. It should be up already but just in case.
- wait_for_k3s_api
-
- # Apply the MetalLB manifest
- kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/$${METALLB_VERSION}/config/manifests/metallb-native.yaml
-
- # Wait for MetalLB to be up
- while ! kubectl wait --for condition=ready -n metallb-system $(kubectl get pods -n metallb-system -l component=controller -o name) --timeout=10s; do sleep 2 ; done
-
- # In order to configure MetalLB, the metadata information is required.
- # BGP info can take a few seconds to be populated, retry if that's the case
- INTERNAL_IP="null"
- while [ $${INTERNAL_IP} == "null" ]; do
- echo "BGP data still not available..."
- sleep 5
- METADATA=$(curl -s https://metadata.platformequinix.com/metadata)
- INTERNAL_IP=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].customer_ip')
- done
- PEER_IP_1=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].peer_ips[0]')
- PEER_IP_2=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].peer_ips[1]')
- ASN=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].customer_as')
- ASN_AS=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].peer_as')
-
-%{ if global_ip_cidr != "" ~}
- # Configure the IPAddressPool for the Global IP if present
- cat <<- EOF | kubectl apply -f -
- apiVersion: metallb.io/v1beta1
- kind: IPAddressPool
- metadata:
- name: anycast-ip
- namespace: metallb-system
- spec:
- addresses:
- - ${global_ip_cidr}
- autoAssign: false
- EOF
-%{ endif ~}
-
-%{ if ip_pool != "" ~}
- # Configure the IPAddressPool for the IP pool if present
- cat <<- EOF | kubectl apply -f -
- apiVersion: metallb.io/v1beta1
- kind: IPAddressPool
- metadata:
- name: ippool
- namespace: metallb-system
- spec:
- addresses:
- - ${ip_pool}
- autoAssign: false
- EOF
-%{ endif ~}
-
- # Configure the BGPPeer for each peer IP
- cat <<- EOF | kubectl apply -f -
- apiVersion: metallb.io/v1beta2
- kind: BGPPeer
- metadata:
- name: equinix-metal-peer-1
- namespace: metallb-system
- spec:
- peerASN: $${ASN_AS}
- myASN: $${ASN}
- peerAddress: $${PEER_IP_1}
- sourceAddress: $${INTERNAL_IP}
- EOF
-
- cat <<- EOF | kubectl apply -f -
- apiVersion: metallb.io/v1beta2
- kind: BGPPeer
- metadata:
- name: equinix-metal-peer-1
- namespace: metallb-system
- spec:
- peerASN: $${ASN_AS}
- myASN: $${ASN}
- peerAddress: $${PEER_IP_2}
- sourceAddress: $${INTERNAL_IP}
- EOF
-
- # Enable the BGPAdvertisement, only to be executed in the control-plane nodes
- cat <<- EOF | kubectl apply -f -
- apiVersion: metallb.io/v1beta1
- kind: BGPAdvertisement
- metadata:
- name: bgp-peers
- namespace: metallb-system
- spec:
- nodeSelectors:
- - matchLabels:
- node-role.kubernetes.io/control-plane: "true"
- EOF
-}
-
-install_k3s(){
- # Curl is needed to download the k3s binary
- # Jq is needed to parse the Equinix Metal metadata (json format)
- apt update && apt install curl jq -y
-
- # Download the K3s installer script
- curl -L --output k3s_installer.sh https://get.k3s.io && install -m755 k3s_installer.sh /usr/local/bin/
-
-%{ if node_type == "control-plane" ~}
- # If the node to be installed is the second or third control plane or extra nodes, wait for the API to be up
- # Wait for the first control plane node to be up
- while ! curl -m 10 -s -k -o /dev/null https://${API_IP}:6443 ; do echo "API still not reachable"; sleep 2 ; done
-%{ endif ~}
-%{ if node_type == "node" ~}
- # Wait for the first control plane node to be up
- while ! curl -m 10 -s -k -o /dev/null https://${API_IP}:6443 ; do echo "API still not reachable"; sleep 2 ; done
-%{ endif ~}
-
- export INSTALL_K3S_SKIP_START=false
- export K3S_TOKEN="${k3s_token}"
- export NODE_IP=$(curl -s https://metadata.platformequinix.com/metadata | jq -r '.network.addresses[] | select(.public == false and .address_family == 4) |.address')
- export NODE_EXTERNAL_IP=$(curl -s https://metadata.platformequinix.com/metadata | jq -r '.network.addresses[] | select(.public == true and .address_family == 4) |.address')
-%{ if node_type == "all-in-one" ~}
-%{ if global_ip_cidr != "" ~}
- export INSTALL_K3S_EXEC="server --write-kubeconfig-mode=644 --disable=servicelb --node-ip $${NODE_IP} --node-external-ip $${NODE_EXTERNAL_IP}"
-%{ else ~}
-%{ if ip_pool != "" ~}
- export INSTALL_K3S_EXEC="server --write-kubeconfig-mode=644 --disable=servicelb --node-ip $${NODE_IP} --node-external-ip $${NODE_EXTERNAL_IP}"
-%{ else ~}
- export INSTALL_K3S_EXEC="server --write-kubeconfig-mode=644 --node-ip $${NODE_IP} --node-external-ip $${NODE_EXTERNAL_IP}"
-%{ endif ~}
-%{ endif ~}
-%{ endif ~}
-%{ if node_type == "control-plane-master" ~}
- export INSTALL_K3S_EXEC="server --cluster-init --write-kubeconfig-mode=644 --tls-san=${API_IP} --tls-san=${API_IP}.sslip.io --disable=servicelb --node-ip $${NODE_IP} --node-external-ip $${NODE_EXTERNAL_IP}"
-%{ endif ~}
-%{ if node_type == "control-plane" ~}
- export INSTALL_K3S_EXEC="server --server https://${API_IP}:6443 --write-kubeconfig-mode=644 --node-ip $${NODE_IP} --node-external-ip $${NODE_EXTERNAL_IP}"
-%{ endif ~}
-%{ if node_type == "node" ~}
- export INSTALL_K3S_EXEC="agent --server https://${API_IP}:6443 --node-ip $${NODE_IP} --node-external-ip $${NODE_EXTERNAL_IP}"
-%{ endif ~}
-%{ if k3s_version != "" ~}
- export INSTALL_K3S_VERSION=${k3s_version}
-%{ endif ~}
- /usr/local/bin/k3s_installer.sh
-
- systemctl enable --now k3s
-}
-
-deploy_demo(){
- kubectl annotate svc -n kube-system traefik "metallb.universe.tf/address-pool=anycast-ip"
-
- # I cannot make split work in Terraform templates
- IP=$(echo ${global_ip_cidr} | cut -d/ -f1)
- cat <<- EOF | kubectl apply -f -
- ---
- apiVersion: v1
- kind: Namespace
- metadata:
- name: hello-kubernetes
- ---
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: hello-kubernetes
- namespace: hello-kubernetes
- labels:
- app.kubernetes.io/name: hello-kubernetes
- ---
- apiVersion: v1
- kind: Service
- metadata:
- name: hello-kubernetes
- namespace: hello-kubernetes
- labels:
- app.kubernetes.io/name: hello-kubernetes
- spec:
- type: ClusterIP
- ports:
- - port: 80
- targetPort: http
- protocol: TCP
- name: http
- selector:
- app.kubernetes.io/name: hello-kubernetes
- ---
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: hello-kubernetes
- namespace: hello-kubernetes
- labels:
- app.kubernetes.io/name: hello-kubernetes
- spec:
- replicas: 2
- selector:
- matchLabels:
- app.kubernetes.io/name: hello-kubernetes
- template:
- metadata:
- labels:
- app.kubernetes.io/name: hello-kubernetes
- spec:
- serviceAccountName: hello-kubernetes
- containers:
- - name: hello-kubernetes
- image: "paulbouwer/hello-kubernetes:1.10"
- imagePullPolicy: IfNotPresent
- ports:
- - name: http
- containerPort: 8080
- protocol: TCP
- livenessProbe:
- httpGet:
- path: /
- port: http
- readinessProbe:
- httpGet:
- path: /
- port: http
- env:
- - name: HANDLER_PATH_PREFIX
- value: ""
- - name: RENDER_PATH_PREFIX
- value: ""
- - name: KUBERNETES_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- - name: KUBERNETES_POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: KUBERNETES_NODE_NAME
- valueFrom:
- fieldRef:
- fieldPath: spec.nodeName
- - name: CONTAINER_IMAGE
- value: "paulbouwer/hello-kubernetes:1.10"
- ---
- apiVersion: networking.k8s.io/v1
- kind: Ingress
- metadata:
- name: hello-kubernetes-ingress
- namespace: hello-kubernetes
- spec:
- rules:
- - host: hellok3s.$${IP}.sslip.io
- http:
- paths:
- - path: "/"
- pathType: Prefix
- backend:
- service:
- name: hello-kubernetes
- port:
- name: http
- EOF
-}
-
-install_k3s
-
-%{ if node_type == "control-plane-master" ~}
-install_bird
-install_metallb
-%{ endif ~}
-%{ if node_type == "control-plane" ~}
-install_bird
-install_metallb
-%{ endif ~}
-
-%{ if node_type == "all-in-one" ~}
-%{ if global_ip_cidr != "" ~}
-INSTALL_METALLB=true
-%{ else }
-%{ if ip_pool != "" ~}
-INSTALL_METALLB=true
-%{ else }
-INSTALL_METALLB=false
-%{ endif ~}
-%{ endif ~}
-[ $${INSTALL_METALLB} == true ] && install_metallb || true
-%{ endif ~}
-%{ if deploy_demo != "" ~}
-deploy_demo
-%{ endif ~}
diff --git a/modules/k3s_cluster/README.md b/modules/kube_cluster/README.md
similarity index 62%
rename from modules/k3s_cluster/README.md
rename to modules/kube_cluster/README.md
index b40cf58..444cb79 100644
--- a/modules/k3s_cluster/README.md
+++ b/modules/kube_cluster/README.md
@@ -1,6 +1,6 @@
-# K3S Cluster In-line Module
+# K3s/RKE2 Cluster In-line Module
-This in-line module deploys the K3S cluster.
+This in-line module deploys the K3s/RKE2 cluster.
## Notes
@@ -10,10 +10,6 @@ This in-line module deploys the K3S cluster.
See [this](https://discuss.hashicorp.com/t/invalid-value-for-vars-parameter-vars-map-does-not-contain-key-issue/12074/4) and [this](https://github.com/hashicorp/terraform/issues/23384) for more information.
-* The loopback interface for API LB cannot be up until K3s is fully installed in the extra control plane nodes
-
- Otherwise they will try to join themselves... that's why there is a curl to the K3s API that waits for the first master to be up before trying to install K3s and also why the bird configuration happens after K3s is up and running in the other nodes.
-
* ServiceLB disabled
`--disable servicelb` is required for metallb to work
@@ -30,8 +26,8 @@ This in-line module deploys the K3S cluster.
| Name | Version |
|------|---------|
-| [equinix](#provider\_equinix) | >= 1.14.2 |
-| [random](#provider\_random) | >= 3.5.1 |
+| [equinix](#provider\_equinix) | 2.5.0 |
+| [random](#provider\_random) | 3.6.3 |
### Modules
@@ -50,33 +46,43 @@ No modules.
| [equinix_metal_device.control_plane_others](https://registry.terraform.io/providers/equinix/equinix/latest/docs/resources/metal_device) | resource |
| [equinix_metal_device.nodes](https://registry.terraform.io/providers/equinix/equinix/latest/docs/resources/metal_device) | resource |
| [equinix_metal_reserved_ip_block.api_vip_addr](https://registry.terraform.io/providers/equinix/equinix/latest/docs/resources/metal_reserved_ip_block) | resource |
+| [equinix_metal_reserved_ip_block.ingress_addr](https://registry.terraform.io/providers/equinix/equinix/latest/docs/resources/metal_reserved_ip_block) | resource |
| [equinix_metal_reserved_ip_block.ip_pool](https://registry.terraform.io/providers/equinix/equinix/latest/docs/resources/metal_reserved_ip_block) | resource |
-| [random_string.random_k3s_token](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string) | resource |
+| [random_string.random_password](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string) | resource |
+| [random_string.random_token](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string) | resource |
### Inputs
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
-| [metal\_metro](#input\_metal\_metro) | Equinix Metal Metro | `string` | n/a | yes |
-| [metal\_project\_id](#input\_metal\_project\_id) | Equinix Metal Project ID | `string` | n/a | yes |
-| [cluster\_name](#input\_cluster\_name) | Cluster name | `string` | `"K3s cluster"` | no |
+| [cluster\_name](#input\_cluster\_name) | Cluster name | `string` | `"Cluster"` | no |
| [control\_plane\_hostnames](#input\_control\_plane\_hostnames) | Control plane hostname prefix | `string` | `"cp"` | no |
-| [custom\_k3s\_token](#input\_custom\_k3s\_token) | K3s token used for nodes to join the cluster (autogenerated otherwise) | `string` | `null` | no |
+| [custom\_rancher\_password](#input\_custom\_rancher\_password) | Rancher initial password (autogenerated if not provided) | `string` | `null` | no |
+| [custom\_token](#input\_custom\_token) | Token used for nodes to join the cluster (autogenerated otherwise) | `string` | `null` | no |
| [deploy\_demo](#input\_deploy\_demo) | Deploys a simple demo using a global IP as ingress and a hello-kubernetes pods | `bool` | `false` | no |
| [global\_ip\_cidr](#input\_global\_ip\_cidr) | Global Anycast IP that will be mapped on all metros via BGP | `string` | `null` | no |
-| [ip\_pool\_count](#input\_ip\_pool\_count) | Number of public IPv4 per metro to be used as LoadBalancers with MetalLB | `number` | `0` | no |
-| [k3s\_ha](#input\_k3s\_ha) | K3s HA (aka 3 control plane nodes) | `bool` | `false` | no |
-| [k3s\_version](#input\_k3s\_version) | K3s version to be installed. Empty for latest | `string` | `""` | no |
+| [ha](#input\_ha) | HA (aka 3 control plane nodes) | `bool` | `false` | no |
+| [ip\_pool\_count](#input\_ip\_pool\_count) | Number of public IPv4 per metro to be used as LoadBalancers with MetalLB (it needs to be power of 2 between 0 and 256 as required by Equinix Metal) | `number` | `0` | no |
+| [kube\_version](#input\_kube\_version) | K3s/RKE2 version to be installed. Empty for latest K3s | `string` | `""` | no |
+| [metal\_metro](#input\_metal\_metro) | Equinix Metal Metro | `string` | n/a | yes |
+| [metal\_project\_id](#input\_metal\_project\_id) | Equinix Metal Project ID | `string` | n/a | yes |
| [metallb\_version](#input\_metallb\_version) | MetalLB version to be installed. Empty for latest | `string` | `""` | no |
-| [node\_count](#input\_node\_count) | Number of K3s nodes | `number` | `"0"` | no |
+| [node\_count](#input\_node\_count) | Number of nodes | `number` | `"0"` | no |
| [node\_hostnames](#input\_node\_hostnames) | Node hostname prefix | `string` | `"node"` | no |
| [os](#input\_os) | Operating system | `string` | `"debian_11"` | no |
-| [plan\_control\_plane](#input\_plan\_control\_plane) | K3s control plane type/size | `string` | `"c3.small.x86"` | no |
-| [plan\_node](#input\_plan\_node) | K3s node type/size | `string` | `"c3.small.x86"` | no |
+| [plan\_control\_plane](#input\_plan\_control\_plane) | Control plane type/size | `string` | `"c3.small.x86"` | no |
+| [plan\_node](#input\_plan\_node) | Node type/size | `string` | `"c3.small.x86"` | no |
+| [rancher\_flavor](#input\_rancher\_flavor) | Rancher flavor to be installed (prime, latest, stable or alpha). Leave empty to skip installation | `string` | `""` | no |
+| [rancher\_version](#input\_rancher\_version) | Rancher version to be installed (vX.Y.Z). Empty for latest | `string` | `""` | no |
### Outputs
| Name | Description |
|------|-------------|
-| [k3s\_api\_ip](#output\_k3s\_api\_ip) | K3s API IPs |
+| [ingress\_ip](#output\_ingress\_ip) | Ingress IP |
+| [ip\_pool\_cidr](#output\_ip\_pool\_cidr) | IP Pool for LoadBalancer SVCs |
+| [kube\_api\_ip](#output\_kube\_api\_ip) | K8s API IPs |
+| [nodes\_details](#output\_nodes\_details) | Nodes external and internal IPs |
+| [rancher\_address](#output\_rancher\_address) | Rancher URL |
+| [rancher\_password](#output\_rancher\_password) | Rancher initial password |
diff --git a/modules/k3s_cluster/main.tf b/modules/kube_cluster/main.tf
similarity index 68%
rename from modules/k3s_cluster/main.tf
rename to modules/kube_cluster/main.tf
index b9906fa..15119b8 100644
--- a/modules/k3s_cluster/main.tf
+++ b/modules/kube_cluster/main.tf
@@ -1,10 +1,17 @@
locals {
- k3s_token = coalesce(var.custom_k3s_token, random_string.random_k3s_token.result)
- api_vip = var.k3s_ha ? equinix_metal_reserved_ip_block.api_vip_addr[0].address : equinix_metal_device.all_in_one[0].network[0].address
+ token = coalesce(var.custom_token, random_string.random_token.result)
+  rancher_pass = coalesce(var.custom_rancher_password, random_string.random_password.result)
+ api_vip = var.ha ? equinix_metal_reserved_ip_block.api_vip_addr[0].address : equinix_metal_device.all_in_one[0].network[0].address
+ ingress_ip = var.ip_pool_count > 0 ? equinix_metal_reserved_ip_block.ingress_addr[0].address : ""
ip_pool_cidr = var.ip_pool_count > 0 ? equinix_metal_reserved_ip_block.ip_pool[0].cidr_notation : ""
}
-resource "random_string" "random_k3s_token" {
+resource "random_string" "random_token" {
+ length = 16
+ special = false
+}
+
+resource "random_string" "random_password" {
length = 16
special = false
}
@@ -20,32 +27,45 @@ resource "equinix_metal_device" "control_plane_master" {
operating_system = var.os
billing_cycle = "hourly"
project_id = var.metal_project_id
- count = var.k3s_ha ? 1 : 0
+ count = var.ha ? 1 : 0
description = var.cluster_name
user_data = templatefile("${path.module}/templates/user-data.tftpl", {
- k3s_token = local.k3s_token,
+ token = local.token,
API_IP = local.api_vip,
+ ingress_ip = local.ingress_ip,
global_ip_cidr = var.global_ip_cidr,
ip_pool = local.ip_pool_cidr,
- k3s_version = var.k3s_version,
+ kube_version = var.kube_version,
metallb_version = var.metallb_version,
deploy_demo = var.deploy_demo,
+ rancher_flavor = var.rancher_flavor,
+ rancher_version = var.rancher_version,
+ rancher_pass = local.rancher_pass,
node_type = "control-plane-master" })
}
resource "equinix_metal_bgp_session" "control_plane_master" {
device_id = equinix_metal_device.control_plane_master[0].id
address_family = "ipv4"
- count = var.k3s_ha ? 1 : 0
+ count = var.ha ? 1 : 0
}
resource "equinix_metal_reserved_ip_block" "api_vip_addr" {
- count = var.k3s_ha ? 1 : 0
+ count = var.ha ? 1 : 0
+ project_id = var.metal_project_id
+ metro = var.metal_metro
+ type = "public_ipv4"
+ quantity = 1
+ description = "Kubernetes API IP for the ${var.cluster_name} cluster"
+}
+
+resource "equinix_metal_reserved_ip_block" "ingress_addr" {
+ count = var.ip_pool_count > 0 ? 1 : 0
project_id = var.metal_project_id
metro = var.metal_metro
type = "public_ipv4"
quantity = 1
- description = "K3s API IP"
+ description = "Ingress IP for the ${var.cluster_name} cluster"
}
resource "equinix_metal_device" "control_plane_others" {
@@ -55,16 +75,20 @@ resource "equinix_metal_device" "control_plane_others" {
operating_system = var.os
billing_cycle = "hourly"
project_id = var.metal_project_id
- count = var.k3s_ha ? 2 : 0
+ count = var.ha ? 2 : 0
description = var.cluster_name
depends_on = [equinix_metal_device.control_plane_master]
user_data = templatefile("${path.module}/templates/user-data.tftpl", {
- k3s_token = local.k3s_token,
+ token = local.token,
API_IP = local.api_vip,
+ ingress_ip = local.ingress_ip,
global_ip_cidr = "",
ip_pool = "",
- k3s_version = var.k3s_version,
+ kube_version = var.kube_version,
metallb_version = var.metallb_version,
+ rancher_flavor = var.rancher_flavor,
+ rancher_version = var.rancher_version,
+ rancher_pass = local.rancher_pass,
deploy_demo = false,
node_type = "control-plane" })
}
@@ -72,13 +96,13 @@ resource "equinix_metal_device" "control_plane_others" {
resource "equinix_metal_bgp_session" "control_plane_second" {
device_id = equinix_metal_device.control_plane_others[0].id
address_family = "ipv4"
- count = var.k3s_ha ? 1 : 0
+ count = var.ha ? 1 : 0
}
resource "equinix_metal_bgp_session" "control_plane_third" {
device_id = equinix_metal_device.control_plane_others[1].id
address_family = "ipv4"
- count = var.k3s_ha ? 1 : 0
+ count = var.ha ? 1 : 0
}
################################################################################
@@ -91,7 +115,7 @@ resource "equinix_metal_reserved_ip_block" "ip_pool" {
quantity = var.ip_pool_count
metro = var.metal_metro
count = var.ip_pool_count > 0 ? 1 : 0
- description = "IP Pool to be used for LoadBalancers via MetalLB"
+ description = "IP Pool to be used for LoadBalancers via MetalLB on the ${var.cluster_name} cluster"
}
################################################################################
@@ -109,12 +133,16 @@ resource "equinix_metal_device" "nodes" {
description = var.cluster_name
depends_on = [equinix_metal_device.control_plane_master]
user_data = templatefile("${path.module}/templates/user-data.tftpl", {
- k3s_token = local.k3s_token,
+ token = local.token,
API_IP = local.api_vip,
+ ingress_ip = local.ingress_ip,
global_ip_cidr = "",
ip_pool = "",
- k3s_version = var.k3s_version,
+ kube_version = var.kube_version,
metallb_version = var.metallb_version,
+ rancher_flavor = var.rancher_flavor,
+ rancher_version = var.rancher_version,
+ rancher_pass = local.rancher_pass,
deploy_demo = false,
node_type = "node" })
}
@@ -130,16 +158,20 @@ resource "equinix_metal_device" "all_in_one" {
operating_system = var.os
billing_cycle = "hourly"
project_id = var.metal_project_id
- count = var.k3s_ha ? 0 : 1
+ count = var.ha ? 0 : 1
description = var.cluster_name
user_data = templatefile("${path.module}/templates/user-data.tftpl", {
- k3s_token = local.k3s_token,
+ token = local.token,
global_ip_cidr = var.global_ip_cidr,
ip_pool = local.ip_pool_cidr,
API_IP = "",
- k3s_version = var.k3s_version,
+ ingress_ip = local.ingress_ip,
+ kube_version = var.kube_version,
metallb_version = var.metallb_version,
deploy_demo = var.deploy_demo,
+ rancher_flavor = var.rancher_flavor,
+ rancher_version = var.rancher_version,
+ rancher_pass = local.rancher_pass,
node_type = "all-in-one" })
}
@@ -147,5 +179,5 @@ resource "equinix_metal_device" "all_in_one" {
resource "equinix_metal_bgp_session" "all_in_one" {
device_id = equinix_metal_device.all_in_one[0].id
address_family = "ipv4"
- count = var.k3s_ha ? 0 : 1
+ count = var.ha ? 0 : 1
}
diff --git a/modules/kube_cluster/outputs.tf b/modules/kube_cluster/outputs.tf
new file mode 100644
index 0000000..11050d5
--- /dev/null
+++ b/modules/kube_cluster/outputs.tf
@@ -0,0 +1,34 @@
+output "kube_api_ip" {
+ value = local.api_vip
+ description = "K8s API IPs"
+}
+
+output "rancher_address" {
+ value = var.rancher_flavor != "" ? "https://rancher.${local.ingress_ip}.sslip.io" : null
+ description = "Rancher URL"
+}
+
+output "rancher_password" {
+ value = var.rancher_flavor != "" ? local.rancher_pass : null
+ description = "Rancher initial password"
+}
+
+output "ingress_ip" {
+ value = var.ip_pool_count > 0 ? local.ingress_ip : null
+ description = "Ingress IP"
+}
+
+output "ip_pool_cidr" {
+ value = var.ip_pool_count > 0 ? local.ip_pool_cidr : null
+ description = "IP Pool for LoadBalancer SVCs"
+}
+
+output "nodes_details" {
+ value = {
+ for node in flatten([equinix_metal_device.control_plane_master, equinix_metal_device.control_plane_others, equinix_metal_device.nodes, equinix_metal_device.all_in_one]) : node.hostname => {
+ node_private_ipv4 = node.access_private_ipv4
+ node_public_ipv4 = node.access_public_ipv4
+ }
+ }
+ description = "Nodes external and internal IPs"
+}
diff --git a/modules/kube_cluster/templates/user-data.tftpl b/modules/kube_cluster/templates/user-data.tftpl
new file mode 100644
index 0000000..95c0fb3
--- /dev/null
+++ b/modules/kube_cluster/templates/user-data.tftpl
@@ -0,0 +1,707 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+die(){
+  echo "$${1}" >&2
+  exit "$${2}"
+}
+
+prechecks(){
+ # Set OS
+ source /etc/os-release
+ case $${ID} in
+ "debian")
+ export PKGMANAGER="apt"
+ ;;
+ "sles")
+ export PKGMANAGER="zypper"
+ ;;
+ "sle-micro")
+ export PKGMANAGER="transactional-update"
+ ;;
+ *)
+ die "Unsupported OS $${ID}" 1
+ ;;
+ esac
+ # Set ARCH
+ ARCH=$(uname -m)
+ case $${ARCH} in
+ "amd64")
+ export ARCH=amd64
+ export SUFFIX=
+ ;;
+ "x86_64")
+ export ARCH=amd64
+ export SUFFIX=
+ ;;
+ "arm64")
+ export ARCH=arm64
+ export SUFFIX=-$${ARCH}
+ ;;
+ "s390x")
+ export ARCH=s390x
+ export SUFFIX=-$${ARCH}
+ ;;
+ "aarch64")
+ export ARCH=arm64
+ export SUFFIX=-$${ARCH}
+ ;;
+    arm*)
+ export ARCH=arm
+ export SUFFIX=-$${ARCH}hf
+ ;;
+ *)
+ die "Unsupported architecture $${ARCH}" 1
+ ;;
+ esac
+}
+
+prereqs(){
+ # Required packages
+ case $${PKGMANAGER} in
+ "apt")
+ apt update
+ apt install -y jq curl
+ ;;
+ "zypper")
+ zypper refresh
+ zypper install -y jq curl
+ ;;
+ esac
+}
+
+wait_for_kube_api(){
+ # Wait for the node to be available, meaning the K8s API is available
+  while ! kubectl wait --for condition=ready node $(tr '[:upper:]' '[:lower:]' < /etc/hostname) --timeout=60s; do sleep 2 ; done
+}
+
+install_eco(){
+  # Wait for the Kubernetes API to be up. It should be up already but just in case.
+ wait_for_kube_api
+
+ # Download helm as required to install endpoint-copier-operator
+ command -v helm || curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 |bash
+
+ # Add the SUSE Edge charts and deploy ECO
+ helm repo add suse-edge https://suse-edge.github.io/charts
+ helm repo update
+ helm install --create-namespace -n endpoint-copier-operator endpoint-copier-operator suse-edge/endpoint-copier-operator
+
+ # Configure the MetalLB IP Address pool for the VIP
+ cat <<-EOF | kubectl apply -f -
+ apiVersion: metallb.io/v1beta1
+ kind: IPAddressPool
+ metadata:
+ name: kubernetes-vip-ip-pool
+ namespace: metallb-system
+ spec:
+ addresses:
+ - ${API_IP}/32
+ serviceAllocation:
+ priority: 100
+ namespaces:
+ - default
+ EOF
+
+ # Create the kubernetes-vip service that will be updated by e-c-o with the control plane hosts
+ if [[ $${KUBETYPE} == "k3s" ]]; then
+ cat <<-EOF | kubectl apply -f -
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: kubernetes-vip
+ namespace: default
+ spec:
+ internalTrafficPolicy: Cluster
+ ipFamilies:
+ - IPv4
+ ipFamilyPolicy: SingleStack
+ ports:
+ - name: k8s-api
+ port: 6443
+ protocol: TCP
+ targetPort: 6443
+ type: LoadBalancer
+ EOF
+ fi
+ if [[ $${KUBETYPE} == "rke2" ]]; then
+ cat <<-EOF | kubectl apply -f -
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: kubernetes-vip
+ namespace: default
+ spec:
+ internalTrafficPolicy: Cluster
+ ipFamilies:
+ - IPv4
+ ipFamilyPolicy: SingleStack
+ ports:
+ - name: k8s-api
+ port: 6443
+ protocol: TCP
+ targetPort: 6443
+ - name: rke2-api
+ port: 9345
+ protocol: TCP
+ targetPort: 9345
+ type: LoadBalancer
+ EOF
+ fi
+}
+
+install_metallb(){
+%{ if metallb_version != "" ~}
+ export METALLB_VERSION=${metallb_version}
+%{ else ~}
+ export METALLB_VERSION=$(curl --silent "https://api.github.com/repos/metallb/metallb/releases/latest" | jq -r .tag_name)
+%{ endif ~}
+
+  # Wait for the Kubernetes API to be up. It should be up already but just in case.
+ wait_for_kube_api
+
+ # Apply the MetalLB manifest
+ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/$${METALLB_VERSION}/config/manifests/metallb-native.yaml
+
+ # Wait for MetalLB to be up
+ while ! kubectl wait --for condition=ready -n metallb-system $(kubectl get pods -n metallb-system -l component=controller -o name) --timeout=10s; do sleep 2 ; done
+
+ # In order to configure MetalLB, the metadata information is required.
+ # BGP info can take a few seconds to be populated, retry if that's the case
+ INTERNAL_IP="null"
+ while [ $${INTERNAL_IP} == "null" ]; do
+ echo "BGP data still not available..."
+ sleep 5
+ METADATA=$(curl -s https://metadata.platformequinix.com/metadata)
+ INTERNAL_IP=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].customer_ip')
+ done
+ PEER_IP_1=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].peer_ips[0]')
+ PEER_IP_2=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].peer_ips[1]')
+ ASN=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].customer_as')
+ ASN_AS=$(echo $${METADATA} | jq -r '.bgp_neighbors[0].peer_as')
+
+%{ if global_ip_cidr != "" ~}
+ # Configure the IPAddressPool for the Global IP if present
+ cat <<- EOF | kubectl apply -f -
+ apiVersion: metallb.io/v1beta1
+ kind: IPAddressPool
+ metadata:
+ name: anycast-ip
+ namespace: metallb-system
+ spec:
+ addresses:
+ - ${global_ip_cidr}
+ autoAssign: true
+ avoidBuggyIPs: false
+ serviceAllocation:
+ namespaces:
+ - ingress-nginx-global
+ priority: 100
+ serviceSelectors:
+ - matchExpressions:
+ - key: ingress-type
+ operator: In
+ values:
+ - ingress-nginx-global
+ EOF
+%{ endif ~}
+
+%{ if ingress_ip != "" ~}
+ if [ "$${KUBETYPE}" == "k3s" ]; then
+ # Configure an IPAddressPool for Ingress only
+ cat <<- EOF | kubectl apply -f -
+ apiVersion: metallb.io/v1beta1
+ kind: IPAddressPool
+ metadata:
+ name: ingress
+ namespace: metallb-system
+ spec:
+ addresses:
+ - ${ingress_ip}/32
+ serviceAllocation:
+ priority: 100
+ serviceSelectors:
+ - matchExpressions:
+ - {key: app.kubernetes.io/name, operator: In, values: [traefik]}
+ EOF
+ fi
+ if [ "$${KUBETYPE}" == "rke2" ]; then
+ # Configure an IPAddressPool for Ingress only
+ cat <<- EOF | kubectl apply -f -
+ apiVersion: metallb.io/v1beta1
+ kind: IPAddressPool
+ metadata:
+ name: ingress
+ namespace: metallb-system
+ spec:
+ addresses:
+ - ${ingress_ip}/32
+ serviceAllocation:
+ priority: 100
+ serviceSelectors:
+ - matchExpressions:
+ - {key: app.kubernetes.io/name, operator: In, values: [rke2-ingress-nginx]}
+ EOF
+ fi
+%{ endif ~}
+
+%{ if ip_pool != "" ~}
+ # Configure the IPAddressPool for the IP pool if present
+ cat <<- EOF | kubectl apply -f -
+ apiVersion: metallb.io/v1beta1
+ kind: IPAddressPool
+ metadata:
+ name: ippool
+ namespace: metallb-system
+ spec:
+ addresses:
+ - ${ip_pool}
+ autoAssign: false
+ EOF
+%{ endif ~}
+
+ # Configure the BGPPeer for each peer IP
+ cat <<- EOF | kubectl apply -f -
+ apiVersion: metallb.io/v1beta2
+ kind: BGPPeer
+ metadata:
+ name: equinix-metal-peer-1
+ namespace: metallb-system
+ spec:
+ peerASN: $${ASN_AS}
+ myASN: $${ASN}
+ peerAddress: $${PEER_IP_1}
+ sourceAddress: $${INTERNAL_IP}
+ EOF
+
+ cat <<- EOF | kubectl apply -f -
+ apiVersion: metallb.io/v1beta2
+ kind: BGPPeer
+ metadata:
+	  name: equinix-metal-peer-2
+ namespace: metallb-system
+ spec:
+ peerASN: $${ASN_AS}
+ myASN: $${ASN}
+ peerAddress: $${PEER_IP_2}
+ sourceAddress: $${INTERNAL_IP}
+ EOF
+
+ # Enable the BGPAdvertisement, only to be executed in the control-plane nodes
+ cat <<- EOF | kubectl apply -f -
+ apiVersion: metallb.io/v1beta1
+ kind: BGPAdvertisement
+ metadata:
+ name: bgp-peers
+ namespace: metallb-system
+ spec:
+ nodeSelectors:
+ - matchLabels:
+ node-role.kubernetes.io/control-plane: "true"
+ EOF
+}
+
+install_k3s(){
+ # Download the K3s installer script
+ curl -L --output k3s_installer.sh https://get.k3s.io && install -m755 k3s_installer.sh /usr/local/bin/
+
+%{ if node_type == "control-plane" ~}
+  # This is a secondary control plane node; wait for the first control plane node's API to be up
+ while ! curl -m 10 -s -k -o /dev/null https://${API_IP}:6443 ; do echo "API still not reachable"; sleep 2 ; done
+%{ endif ~}
+%{ if node_type == "node" ~}
+ # Wait for the first control plane node to be up
+ while ! curl -m 10 -s -k -o /dev/null https://${API_IP}:6443 ; do echo "API still not reachable"; sleep 2 ; done
+%{ endif ~}
+
+ export INSTALL_K3S_SKIP_ENABLE=false
+ export INSTALL_K3S_SKIP_START=false
+ export K3S_TOKEN="${token}"
+ export NODE_IP=$(curl -s https://metadata.platformequinix.com/metadata | jq -r '.network.addresses[] | select(.public == false and .address_family == 4) |.address')
+ export NODE_EXTERNAL_IP=$(curl -s https://metadata.platformequinix.com/metadata | jq -r '.network.addresses[] | select(.public == true and .address_family == 4) |.address')
+%{ if node_type == "all-in-one" ~}
+%{ if global_ip_cidr != "" ~}
+ export INSTALL_K3S_EXEC="server --write-kubeconfig-mode=644 --disable=servicelb --node-ip $${NODE_IP} --node-external-ip $${NODE_EXTERNAL_IP}"
+%{ else ~}
+%{ if ip_pool != "" ~}
+ export INSTALL_K3S_EXEC="server --write-kubeconfig-mode=644 --disable=servicelb --node-ip $${NODE_IP} --node-external-ip $${NODE_EXTERNAL_IP}"
+%{ else ~}
+ export INSTALL_K3S_EXEC="server --write-kubeconfig-mode=644 --node-ip $${NODE_IP} --node-external-ip $${NODE_EXTERNAL_IP}"
+%{ endif ~}
+%{ endif ~}
+%{ endif ~}
+%{ if node_type == "control-plane-master" ~}
+ export INSTALL_K3S_EXEC="server --cluster-init --write-kubeconfig-mode=644 --tls-san=${API_IP} --tls-san=${API_IP}.sslip.io --disable=servicelb --node-ip $${NODE_IP} --node-external-ip $${NODE_EXTERNAL_IP}"
+%{ endif ~}
+%{ if node_type == "control-plane" ~}
+ export INSTALL_K3S_EXEC="server --server https://${API_IP}:6443 --write-kubeconfig-mode=644 --node-ip $${NODE_IP} --node-external-ip $${NODE_EXTERNAL_IP}"
+%{ endif ~}
+%{ if node_type == "node" ~}
+ export INSTALL_K3S_EXEC="agent --server https://${API_IP}:6443 --node-ip $${NODE_IP} --node-external-ip $${NODE_EXTERNAL_IP}"
+%{ endif ~}
+%{ if kube_version != "" ~}
+ export INSTALL_K3S_VERSION="${kube_version}"
+%{ endif ~}
+ /usr/local/bin/k3s_installer.sh
+}
+
+install_rke2(){
+ # Download the RKE2 installer script
+ curl -L --output rke2_installer.sh https://get.rke2.io && install -m755 rke2_installer.sh /usr/local/bin/
+
+ # RKE2 configuration is set via config.yaml file
+ mkdir -p /etc/rancher/rke2/
+
+%{ if node_type == "control-plane" ~}
+  # This is a secondary control plane node; wait for the first control plane node's API to be up
+ while ! curl -m 10 -s -k -o /dev/null https://${API_IP}:6443 ; do echo "API still not reachable"; sleep 2 ; done
+%{ endif ~}
+%{ if node_type == "node" ~}
+ # Wait for the first control plane node to be up
+ while ! curl -m 10 -s -k -o /dev/null https://${API_IP}:6443 ; do echo "API still not reachable"; sleep 2 ; done
+%{ endif ~}
+
+ export RKE2_TOKEN="${token}"
+ export NODE_IP=$(curl -s https://metadata.platformequinix.com/metadata | jq -r '.network.addresses[] | select(.public == false and .address_family == 4) |.address')
+ export NODE_EXTERNAL_IP=$(curl -s https://metadata.platformequinix.com/metadata | jq -r '.network.addresses[] | select(.public == true and .address_family == 4) |.address')
+%{ if node_type == "all-in-one" ~}
+ export INSTALL_RKE2_TYPE="server"
+ cat <<- EOF >> /etc/rancher/rke2/config.yaml
+ token: $${RKE2_TOKEN}
+ write-kubeconfig-mode: "0644"
+ node-ip: $${NODE_IP}
+ node-external-ip: $${NODE_EXTERNAL_IP}
+ EOF
+%{ endif ~}
+%{ if node_type == "control-plane-master" ~}
+ export INSTALL_RKE2_TYPE="server"
+ cat <<- EOF >> /etc/rancher/rke2/config.yaml
+ token: $${RKE2_TOKEN}
+ write-kubeconfig-mode: "0644"
+ node-ip: $${NODE_IP}
+ node-external-ip: $${NODE_EXTERNAL_IP}
+ tls-san:
+ - "${API_IP}"
+ - "${API_IP}.sslip.io"
+ EOF
+%{ endif ~}
+%{ if node_type == "control-plane" ~}
+ export INSTALL_RKE2_TYPE="server"
+ cat <<- EOF >> /etc/rancher/rke2/config.yaml
+ server: https://${API_IP}:9345
+ token: $${RKE2_TOKEN}
+ write-kubeconfig-mode: "0644"
+ node-ip: $${NODE_IP}
+ node-external-ip: $${NODE_EXTERNAL_IP}
+ EOF
+%{ endif ~}
+%{ if node_type == "node" ~}
+ export INSTALL_RKE2_TYPE="agent"
+ cat <<- EOF >> /etc/rancher/rke2/config.yaml
+ server: https://${API_IP}:9345
+ token: $${RKE2_TOKEN}
+ write-kubeconfig-mode: "0644"
+ node-ip: $${NODE_IP}
+ node-external-ip: $${NODE_EXTERNAL_IP}
+ EOF
+%{ endif ~}
+%{ if ingress_ip != "" ~}
+ mkdir -p /var/lib/rancher/rke2/server/manifests/
+ cat <<- EOF >> /var/lib/rancher/rke2/server/manifests/rke2-ingress-config.yaml
+ apiVersion: helm.cattle.io/v1
+ kind: HelmChartConfig
+ metadata:
+ name: rke2-ingress-nginx
+ namespace: kube-system
+ spec:
+ valuesContent: |-
+ controller:
+ config:
+ use-forwarded-headers: "true"
+ enable-real-ip: "true"
+ publishService:
+ enabled: true
+ service:
+ enabled: true
+ type: LoadBalancer
+ externalTrafficPolicy: Local
+ EOF
+%{ endif ~}
+%{ if kube_version != "" ~}
+ export INSTALL_RKE2_VERSION="${kube_version}"
+%{ endif ~}
+ /usr/local/bin/rke2_installer.sh
+ systemctl enable --now rke2-$${INSTALL_RKE2_TYPE}
+}
+
+deploy_demo(){
+ # Check if the demo is already deployed
+  if kubectl get deployment -n hello-kubernetes hello-kubernetes -o name > /dev/null 2>&1; then return 0; fi
+
+ if [ "$${KUBETYPE}" == "rke2" ]; then
+ # Wait for the rke2-ingress-nginx-controller DS to be available if using RKE2
+ while ! kubectl rollout status daemonset -n kube-system rke2-ingress-nginx-controller --timeout=60s; do sleep 2 ; done
+ fi
+  # Extract the bare IP from the CIDR (split() is awkward to use inside Terraform templates)
+ IP=$(echo ${global_ip_cidr} | cut -d/ -f1)
+ cat <<- EOF | kubectl apply -f -
+ ---
+ apiVersion: v1
+ kind: Namespace
+ metadata:
+ name: hello-kubernetes
+ ---
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: hello-kubernetes
+ namespace: hello-kubernetes
+ labels:
+ app.kubernetes.io/name: hello-kubernetes
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: hello-kubernetes
+ namespace: hello-kubernetes
+ labels:
+ app.kubernetes.io/name: hello-kubernetes
+ spec:
+ type: ClusterIP
+ ports:
+ - port: 80
+ targetPort: http
+ protocol: TCP
+ name: http
+ selector:
+ app.kubernetes.io/name: hello-kubernetes
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: hello-kubernetes
+ namespace: hello-kubernetes
+ labels:
+ app.kubernetes.io/name: hello-kubernetes
+ spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: hello-kubernetes
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: hello-kubernetes
+ spec:
+ serviceAccountName: hello-kubernetes
+ containers:
+ - name: hello-kubernetes
+ image: "paulbouwer/hello-kubernetes:1.10"
+ imagePullPolicy: IfNotPresent
+ ports:
+ - name: http
+ containerPort: 8080
+ protocol: TCP
+ livenessProbe:
+ httpGet:
+ path: /
+ port: http
+ readinessProbe:
+ httpGet:
+ path: /
+ port: http
+ env:
+ - name: HANDLER_PATH_PREFIX
+ value: ""
+ - name: RENDER_PATH_PREFIX
+ value: ""
+ - name: KUBERNETES_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ - name: KUBERNETES_POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: KUBERNETES_NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ - name: CONTAINER_IMAGE
+ value: "paulbouwer/hello-kubernetes:1.10"
+ ---
+ apiVersion: networking.k8s.io/v1
+ kind: Ingress
+ metadata:
+ name: hello-kubernetes-ingress
+ namespace: hello-kubernetes
+ spec:
+ ingressClassName: ingress-nginx-global
+ rules:
+ - host: hellok3s.$${IP}.sslip.io
+ http:
+ paths:
+ - path: "/"
+ pathType: Prefix
+ backend:
+ service:
+ name: hello-kubernetes
+ port:
+ name: http
+ EOF
+}
+
+install_rancher(){
+ # Wait for Kube API to be up. It should be up already but just in case.
+ wait_for_kube_api
+
+ # Download helm as required to install Rancher
+ command -v helm || curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 |bash
+
+ # Get latest Cert-manager version
+ CMVERSION=$(curl -s "https://api.github.com/repos/cert-manager/cert-manager/releases/latest" | jq -r '.tag_name')
+
+ RANCHERFLAVOR=${rancher_flavor}
+ # https://ranchermanager.docs.rancher.com/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster
+ case $${RANCHERFLAVOR} in
+ "latest" | "stable" | "alpha")
+ helm repo add rancher https://releases.rancher.com/server-charts/$${RANCHERFLAVOR}
+ ;;
+ "prime")
+ helm repo add rancher https://charts.rancher.com/server-charts/prime
+ ;;
+ *)
+ echo "Rancher flavor not detected, using latest"
+ helm repo add rancher https://releases.rancher.com/server-charts/latest
+ ;;
+ esac
+
+ helm repo add jetstack https://charts.jetstack.io
+ helm repo update
+
+ # Install the cert-manager Helm chart
+ helm install cert-manager jetstack/cert-manager \
+ --namespace cert-manager \
+ --create-namespace \
+ --set crds.enabled=true \
+ --version $${CMVERSION}
+
+ IP=""
+ # https://github.com/rancher/rke2/issues/3958
+ if [ "$${KUBETYPE}" == "rke2" ]; then
+ # Wait for the rke2-ingress-nginx-controller DS to be available if using RKE2
+ while ! kubectl rollout status daemonset -n kube-system rke2-ingress-nginx-controller --timeout=60s; do sleep 2 ; done
+ IP=$(kubectl get svc -n kube-system rke2-ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ fi
+
+ # Get the IP of the ingress object if provided
+ if [ "$${KUBETYPE}" == "k3s" ]; then
+ IP=$(kubectl get svc -n kube-system traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ fi
+
+ if [[ $${IP} == "" ]]; then
+ # Just use internal IPs
+ IP=$(hostname -I | awk '{print $1}')
+ fi
+
+ # Install rancher using sslip.io as hostname and with just a single replica
+ helm install rancher rancher/rancher \
+ --namespace cattle-system \
+ --create-namespace \
+ --set hostname=rancher.$${IP}.sslip.io \
+ --set bootstrapPassword="${rancher_pass}" \
+ --set replicas=1 \
+ --set global.cattle.psp.enabled=false %{ if rancher_version != "" ~}--version "${rancher_version}"%{ endif ~}
+
+ while ! kubectl wait --for condition=ready -n cattle-system $(kubectl get pods -n cattle-system -l app=rancher -o name) --timeout=10s; do sleep 2 ; done
+}
+
+install_global_ingress(){
+ command -v helm || curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 |bash
+
+ cat <<- EOF > ingress-nginx-global.yaml
+ controller:
+ ingressClassResource:
+ name: ingress-nginx-global
+ controllerValue: k8s.io/ingress-nginx-global
+ service:
+ labels:
+ ingress-type: ingress-nginx-global
+ admissionWebhooks:
+ enabled: false
+ EOF
+
+ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+ helm repo update
+ helm install -f ingress-nginx-global.yaml ingress-nginx-global --namespace ingress-nginx-global --create-namespace ingress-nginx/ingress-nginx
+}
+
+prechecks
+prereqs
+
+if [[ "${kube_version}" =~ .*"k3s".* ]] || [[ "${kube_version}" == "" ]]; then
+ export KUBETYPE="k3s"
+ export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+ echo "export KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> /etc/profile.d/k3s.sh
+ install_k3s
+ mkdir -p /root/.kube/
+ ln -s /etc/rancher/k3s/k3s.yaml /root/.kube/config
+elif [[ "${kube_version}" =~ .*"rke2".* ]]; then
+ export KUBETYPE="rke2"
+ ln -s /var/lib/rancher/rke2/bin/kubectl /usr/local/bin/kubectl
+ export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
+ echo "export KUBECONFIG=/etc/rancher/rke2/rke2.yaml" >> /etc/profile.d/rke2.sh
+ install_rke2
+ mkdir -p /root/.kube/
+ ln -s /etc/rancher/rke2/rke2.yaml /root/.kube/config
+else
+ die "Kubernetes version ${kube_version} not valid" 2
+fi
+
+DEPLOY_DEMO=false
+INSTALL_METALLB=false
+INSTALL_RANCHER=false
+INSTALL_GLOBAL_INGRESS=false
+
+%{ if node_type == "control-plane-master" ~}
+INSTALL_METALLB=true
+%{ if global_ip_cidr != "" ~}
+INSTALL_GLOBAL_INGRESS=true
+%{ endif ~}
+%{ if deploy_demo != "false" ~}
+DEPLOY_DEMO=true
+%{ endif ~}
+%{ if rancher_flavor != "" ~}
+INSTALL_RANCHER=true
+%{ endif ~}
+%{ endif ~}
+
+%{ if node_type == "all-in-one" ~}
+%{ if global_ip_cidr != "" ~}
+INSTALL_METALLB=true
+INSTALL_GLOBAL_INGRESS=true
+%{ endif }
+%{ if ip_pool != "" ~}
+INSTALL_METALLB=true
+%{ endif }
+%{ if deploy_demo != "false" ~}
+DEPLOY_DEMO=true
+%{ endif ~}
+%{ if rancher_flavor != "" ~}
+INSTALL_RANCHER=true
+%{ endif ~}
+%{ endif ~}
+
+[ $${INSTALL_METALLB} == true ] && install_metallb || true
+
+%{ if API_IP != "" ~}
+%{ if node_type == "control-plane-master" ~}
+install_eco
+%{ endif ~}
+%{ endif ~}
+
+[ $${INSTALL_GLOBAL_INGRESS} == true ] && install_global_ingress || true
+[ $${DEPLOY_DEMO} == true ] && deploy_demo || true
+[ $${INSTALL_RANCHER} == true ] && install_rancher || true
diff --git a/modules/k3s_cluster/variables.tf b/modules/kube_cluster/variables.tf
similarity index 57%
rename from modules/k3s_cluster/variables.tf
rename to modules/kube_cluster/variables.tf
index c3860a4..9cddb34 100644
--- a/modules/k3s_cluster/variables.tf
+++ b/modules/kube_cluster/variables.tf
@@ -17,30 +17,30 @@ variable "deploy_demo" {
variable "cluster_name" {
type = string
description = "Cluster name"
- default = "K3s cluster"
+ default = "Cluster"
}
variable "plan_control_plane" {
type = string
- description = "K3s control plane type/size"
+ description = "Control plane type/size"
default = "c3.small.x86"
}
variable "plan_node" {
type = string
- description = "K3s node type/size"
+ description = "Node type/size"
default = "c3.small.x86"
}
variable "node_count" {
type = number
- description = "Number of K3s nodes"
+ description = "Number of nodes"
default = "0"
}
-variable "k3s_ha" {
+variable "ha" {
type = bool
- description = "K3s HA (aka 3 control plane nodes)"
+ description = "HA (aka 3 control plane nodes)"
default = false
}
@@ -62,16 +62,20 @@ variable "node_hostnames" {
default = "node"
}
-variable "custom_k3s_token" {
+variable "custom_token" {
type = string
- description = "K3s token used for nodes to join the cluster (autogenerated otherwise)"
+ description = "Token used for nodes to join the cluster (autogenerated otherwise)"
default = null
}
variable "ip_pool_count" {
type = number
- description = "Number of public IPv4 per metro to be used as LoadBalancers with MetalLB"
+  description = "Number of public IPv4 addresses per metro to be used as LoadBalancers with MetalLB (must be 0 or a power of 2 up to 256, as required by Equinix Metal)"
default = 0
+ validation {
+ condition = contains([0, 1, 2, 4, 8, 16, 32, 64, 128, 256], var.ip_pool_count)
+    error_message = "The value must be 0 or a power of 2 up to 256."
+ }
}
variable "global_ip_cidr" {
@@ -80,9 +84,9 @@ variable "global_ip_cidr" {
default = null
}
-variable "k3s_version" {
+variable "kube_version" {
type = string
- description = "K3s version to be installed. Empty for latest"
+ description = "K3s/RKE2 version to be installed. Empty for latest K3s"
default = ""
}
@@ -91,3 +95,21 @@ variable "metallb_version" {
description = "MetalLB version to be installed. Empty for latest"
default = ""
}
+
+variable "rancher_version" {
+ type = string
+ description = "Rancher version to be installed (vX.Y.Z). Empty for latest"
+ default = ""
+}
+
+variable "rancher_flavor" {
+ type = string
+  description = "Rancher flavor to be installed (prime, latest, stable or alpha). Leave empty to skip installation"
+ default = ""
+}
+
+variable "custom_rancher_password" {
+ type = string
+ description = "Rancher initial password (autogenerated if not provided)"
+ default = null
+}
diff --git a/modules/k3s_cluster/versions.tf b/modules/kube_cluster/versions.tf
similarity index 100%
rename from modules/k3s_cluster/versions.tf
rename to modules/kube_cluster/versions.tf
diff --git a/outputs.tf b/outputs.tf
index 432a199..07a3c7b 100644
--- a/outputs.tf
+++ b/outputs.tf
@@ -8,9 +8,25 @@ output "demo_url" {
description = "URL of the demo application to demonstrate a global IP shared across Metros"
}
-output "k3s_api" {
+output "cluster_details" {
value = {
- for cluster in var.clusters : cluster.name => module.k3s_cluster[cluster.name].k3s_api_ip
+ for cluster in var.clusters : cluster.name => {
+ api = module.kube_cluster[cluster.name].kube_api_ip
+ ingress = module.kube_cluster[cluster.name].ingress_ip
+ ip_pool_cidr = module.kube_cluster[cluster.name].ip_pool_cidr
+ nodes = module.kube_cluster[cluster.name].nodes_details
+ }
}
- description = "List of Clusters => K3s APIs"
+ description = "List of Clusters => K8s details"
+}
+
+output "rancher_urls" {
+ value = {
+ for cluster in var.clusters : cluster.name => {
+ rancher_url = cluster.rancher_flavor != "" ? module.kube_cluster[cluster.name].rancher_address : null
+ rancher_initial_password_base64 = cluster.rancher_flavor != "" ? base64encode(module.kube_cluster[cluster.name].rancher_password) : null
+ }
+ if module.kube_cluster[cluster.name].rancher_address != null
+ }
+ description = "List of Clusters => Rancher details"
}
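For illustration, the new `cluster_details` output takes roughly this shape for a single cluster named "Demo cluster" (a sketch only; the IPs are RFC 5737 documentation addresses and the `nodes` contents depend on the module's `nodes_details` output):

```hcl
cluster_details = {
  "Demo cluster" = {
    api          = "198.51.100.10"      # kube_api_ip
    ingress      = "198.51.100.11"      # ingress_ip
    ip_pool_cidr = "198.51.100.16/29"   # MetalLB pool, when ip_pool_count > 0
    nodes        = [...]                # per-node details from the module
  }
}
```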
diff --git a/rancher-clusters-imported.png b/rancher-clusters-imported.png
new file mode 100644
index 0000000..9aca9db
Binary files /dev/null and b/rancher-clusters-imported.png differ
diff --git a/variables.tf b/variables.tf
index 490abdb..083a385 100644
--- a/variables.tf
+++ b/variables.tf
@@ -13,24 +13,31 @@ variable "deploy_demo" {
type = bool
description = "Deploys a simple demo using a global IP as ingress and a hello-kubernetes pods"
default = false
+ validation {
+ condition = !var.deploy_demo || var.global_ip
+ error_message = "When deploy_demo is true, global_ip must be true as well."
+ }
}
variable "clusters" {
- description = "K3s cluster definition"
+ description = "Cluster definition"
type = list(object({
- name = optional(string, "K3s demo cluster")
+ name = optional(string, "Demo cluster")
metro = optional(string, "FR")
plan_control_plane = optional(string, "c3.small.x86")
plan_node = optional(string, "c3.small.x86")
node_count = optional(number, 0)
- k3s_ha = optional(bool, false)
+ ha = optional(bool, false)
os = optional(string, "debian_11")
- control_plane_hostnames = optional(string, "k3s-cp")
- node_hostnames = optional(string, "k3s-node")
- custom_k3s_token = optional(string, "")
+ control_plane_hostnames = optional(string, "cp")
+ node_hostnames = optional(string, "node")
+ custom_token = optional(string, "")
ip_pool_count = optional(number, 0)
- k3s_version = optional(string, "")
+ kube_version = optional(string, "")
metallb_version = optional(string, "")
+ rancher_flavor = optional(string, "")
+ rancher_version = optional(string, "")
+ custom_rancher_password = optional(string, "")
}))
default = [{}]
}
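With the renamed attributes, a multi-cluster definition might look like this (a sketch using the new field names from the diff above; metro codes, versions, and cluster names are illustrative):

```hcl
clusters = [
  {
    name          = "Demo cluster 1"
    metro         = "FR"
    ha            = true             # was k3s_ha
    node_count    = 2
    kube_version  = "v1.29.5+k3s1"   # was k3s_version
    ip_pool_count = 4
  },
  {
    name           = "Demo cluster 2"
    metro          = "SV"
    rancher_flavor = "stable"        # new: also installs Rancher
  }
]
```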
diff --git a/versions.tf b/versions.tf
index 3cc65f8..3050f56 100644
--- a/versions.tf
+++ b/versions.tf
@@ -1,5 +1,5 @@
terraform {
- required_version = ">= 1.3"
+ required_version = ">= 1.9"
required_providers {
equinix = {
source = "equinix/equinix"