
# Getting Started

This is a guide on how to get started with Cluster API Provider vSphere (CAPV). To learn about Cluster API in more depth, check out the Cluster API book.

## Install Requirements

- clusterctl, which can be downloaded from the latest release of Cluster API (CAPI) on GitHub.
- Docker, which is required to run the bootstrap cluster created by clusterctl.
- Kind, which can be used to provide an initial management cluster for testing.
- kubectl, which is required to access your workload clusters.
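As a quick sanity check before proceeding, a small shell loop can report which of the tools above are already on your PATH (this preflight snippet is a convenience sketch, not part of the official tooling):

```sh
# Preflight sketch: report which required tools are already installed.
for tool in clusterctl docker kind kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```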

## vSphere Requirements

Your vSphere environment should be configured with a DHCP service in the primary VM Network for your workload Kubernetes clusters. You will also need to configure one resource pool across the hosts onto which the workload clusters will be provisioned. Every host in the resource pool will need access to shared storage, such as VSAN, in order to make use of MachineDeployments and high-availability control planes.

To use PersistentVolumes (PV), your cluster needs support for Cloud Native Storage (CNS), which is available in vSphere 6.7 Update 3 and later. CNS relies on a shared datastore, such as VSAN.

In addition, to use clusterctl, you should have an SSH public key that will be inserted into the node VMs for administrative access, and a VM folder configured in vCenter.
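If you don't already have a keypair, one can be generated with ssh-keygen; the output path and key comment below are just examples:

```sh
# Sketch: generate a dedicated keypair for node access (path and comment are examples).
rm -f /tmp/capv_id_rsa /tmp/capv_id_rsa.pub
ssh-keygen -t rsa -b 4096 -N "" -C "capv" -f /tmp/capv_id_rsa
# The public half is what gets inserted into the node VMs:
cat /tmp/capv_id_rsa.pub
```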

## vCenter Credentials

In order for clusterctl to bootstrap a management cluster on vSphere, it must be able to connect to and authenticate with vCenter. Ensure you have credentials for your vCenter server (username, password, and server URL).
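If you plan to run the govc commands shown later in this guide, the same credentials can be exported as govc environment variables. All values below are examples; substitute your own vCenter details:

```sh
# Example values only -- substitute your own vCenter details.
export GOVC_URL="10.0.0.1"                          # vCenter server IP or FQDN
export GOVC_USERNAME="administrator@vsphere.local"  # vCenter username
export GOVC_PASSWORD="admin!23"                     # vCenter password
export GOVC_INSECURE="true"                         # only if vCenter uses a self-signed certificate
# With govc installed, `govc about` is a quick way to verify connectivity.
```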

## Uploading the machine images

Machines provisioned by CAPV must have cloud-init, kubeadm, and a container runtime pre-installed. You can use one of the CAPV machine images generated by SIG Cluster Lifecycle as a VM template.

The machine images are retrievable from public URLs. CAPV currently supports machine images based on Ubuntu 18.04 and CentOS 7. A list of published machine images is available here. For this guide we'll be deploying Kubernetes v1.17.3 on Ubuntu 18.04 (link to machine image).

If you want to build your own image, take a look at the image-builder project.

Deploy a VM from an OVA using the OVA URL.

NOTE: The rest of the guide will assume you named the VM ubuntu-1804-kube-v1.17.3.

After deploying it from the OVA, please ensure the VM ubuntu-1804-kube-v1.17.3 is marked as an immutable template with the following command:

```sh
govc vm.markastemplate ubuntu-1804-kube-v1.17.3
```

To reduce the time it takes to provision machines, linked clone mode is the default `cloneMode` for `vsphereMachines` and is highly recommended. To use it, your VM template requires a snapshot. The steps below use the govc command-line tool, but the snapshot can also be taken via vCenter, PowerCLI, or other tooling. For more info about govc, see its installation and usage docs:

```sh
# Re-mark the template as a VM
govc vm.markasvm -pool Compute-ResourcePool ubuntu-1804-kube-v1.17.3
# Take a snapshot of the VM
govc snapshot.create -vm ubuntu-1804-kube-v1.17.3 root
# Re-mark the VM as a template
govc vm.markastemplate ubuntu-1804-kube-v1.17.3
```

Note: When creating the template from the OVA URL via the vSphere UI, make sure the VM template name matches the value of the VSPHERE_TEMPLATE environment variable in the ~/.cluster-api/clusterctl.yaml file; in particular, watch out for an unwanted .ova suffix in the template name.

Note: If you are planning to use CNS/CSI, you will need to ensure that the template is at least VM Hardware Version 13. This is done out of the box for images of Kubernetes version v1.15.4 and above. For lower versions you will need to upgrade the virtual hardware either in the UI or with govc:

```sh
govc vm.upgrade -version=13 -vm ubuntu-1804-kube-v1.16.3
```

## Creating a test management cluster

NOTE: You will need an initial management cluster to run the Cluster API components. This can be any 1.16+ Kubernetes cluster. If you are testing locally, you can use Kind with the following command:

```sh
kind create cluster
```

## Configuring and installing Cluster API Provider vSphere in a management cluster

To initialize Cluster API Provider vSphere, clusterctl requires the following variables, which should be set in ~/.cluster-api/clusterctl.yaml as follows:

```yaml
## -- Controller settings -- ##
VSPHERE_USERNAME: "administrator@vsphere.local"               # The username used to access the remote vSphere endpoint
VSPHERE_PASSWORD: "admin!23"                                  # The password used to access the remote vSphere endpoint

## -- Required workload cluster default settings -- ##
VSPHERE_SERVER: "10.0.0.1"                                    # The vCenter server IP or FQDN
VSPHERE_DATACENTER: "SDDC-Datacenter"                         # The vSphere datacenter to deploy the management cluster on
VSPHERE_DATASTORE: "DefaultDatastore"                         # The vSphere datastore to deploy the management cluster on
VSPHERE_NETWORK: "VM Network"                                 # The VM network to deploy the management cluster on
VSPHERE_RESOURCE_POOL: "*/Resources"                          # The vSphere resource pool for your VMs
VSPHERE_FOLDER: "vm"                                          # The VM folder for your VMs. Set to "" to use the root vSphere folder
VSPHERE_TEMPLATE: "ubuntu-1804-kube-v1.17.3"                  # The VM template to use for your management cluster.
CONTROL_PLANE_ENDPOINT_IP: "192.168.9.230"                    # The IP that kube-vip is going to use as a control plane endpoint
VIP_NETWORK_INTERFACE: "ens192"                               # The interface that kube-vip should apply the IP to. Omit to tell kube-vip to autodetect the interface.
VSPHERE_TLS_THUMBPRINT: "..."                                 # SHA-1 thumbprint of the vCenter certificate: openssl x509 -sha1 -fingerprint -in ca.crt -noout
EXP_CLUSTER_RESOURCE_SET: "true"                              # This enables the ClusterResourceSet feature that we are using to deploy CSI
VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAAB3N..."              # The public ssh authorized key on all machines
                                                              #   in this cluster.
                                                              #   Set to "" if you don't want to enable SSH,
                                                              #   or are using another solution.
VSPHERE_STORAGE_POLICY: ""                                    # This is the vSphere storage policy.
                                                              #   Set it to "" if you don't want to use a storage policy.
```
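The comment next to VSPHERE_TLS_THUMBPRINT compresses the procedure; spelled out, it looks like the following. The snippet is demonstrated against a locally generated throwaway certificate so it can be tried offline; in practice you would fingerprint the certificate fetched from your vCenter instead:

```sh
# Demo only: generate a throwaway self-signed certificate to fingerprint.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=vcenter.example" \
  -keyout /tmp/capv-demo.key -out /tmp/capv-demo.crt 2>/dev/null
# In practice, fetch the real certificate first:
#   openssl s_client -connect <VCENTER>:443 </dev/null 2>/dev/null \
#     | openssl x509 -outform PEM > vcenter.crt
THUMBPRINT=$(openssl x509 -sha1 -fingerprint -in /tmp/capv-demo.crt -noout | cut -d= -f2)
echo "$THUMBPRINT"   # colon-separated SHA-1 digest; this is the VSPHERE_TLS_THUMBPRINT value
```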

If you are using the DEPRECATED haproxy flavour, you will need to add the following variable to your clusterctl.yaml:

```yaml
VSPHERE_HAPROXY_TEMPLATE: "capv-haproxy-v0.6.4"               # The VM template to use for the HAProxy load balancer
```

NOTE: Technically, SSH keys and vSphere folders are optional, but optional template variables are not currently supported by clusterctl. If you do not want to set the vSphere folder or SSH keys, remove the corresponding fields after running clusterctl generate.
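For example, the folder field can be stripped from the generated manifest with a one-liner. The snippet below runs against a synthetic fragment rather than a real cluster.yaml, so the field names shown are only illustrative:

```sh
# Synthetic stand-in for part of a generated cluster.yaml:
cat > /tmp/capv-snippet.yaml <<'EOF'
spec:
  template:
    spec:
      folder: vm
      server: 10.0.0.1
EOF
# Delete the folder line (plain-text approach; a YAML-aware tool would be more robust):
sed -i '/^ *folder:/d' /tmp/capv-snippet.yaml
cat /tmp/capv-snippet.yaml
```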

CONTROL_PLANE_ENDPOINT_IP must be an IP on the same subnet as the control plane machines, and it should not be part of your DHCP range. It is mandatory when you are using the default and external-loadbalancer flavours.

EXP_CLUSTER_RESOURCE_SET is required if you want to deploy CSI using ClusterResourceSets (mandatory in the default flavour).

Setting VSPHERE_USERNAME and VSPHERE_PASSWORD is one way to manage identities. For the full set of options see identity management.

Once you have access to a management cluster, you can instantiate Cluster API with the following:

```sh
clusterctl init --infrastructure vsphere
```

## Creating a vSphere-based workload cluster

The following commands generate a workload cluster manifest and apply it:

```sh
$ clusterctl generate cluster vsphere-quickstart \
    --infrastructure vsphere \
    --kubernetes-version v1.17.3 \
    --control-plane-machine-count 1 \
    --worker-machine-count 3 > cluster.yaml

# Inspect and make any changes
$ vi cluster.yaml

# Create the workload cluster in the current namespace on the management cluster
$ kubectl apply -f cluster.yaml
```

Aside from the default flavour, CAPV has the following:

- an external-loadbalancer flavour that enables you to specify a pre-existing endpoint
- DEPRECATED: an haproxy flavour that uses HAProxy as the control plane endpoint

## Accessing the workload cluster

The kubeconfig for the workload cluster will be stored in a secret, which can be retrieved using:

```sh
$ kubectl get secret/vsphere-quickstart-kubeconfig -o json \
  | jq -r .data.value \
  | base64 --decode \
  > ./vsphere-quickstart.kubeconfig
```
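The decode step works the same way on any CAPI kubeconfig secret; here it is in isolation against a synthetic secret payload, so it can be tried without a cluster:

```sh
# Synthetic secret JSON: .data.value holds base64-encoded kubeconfig YAML.
SECRET_JSON='{"data":{"value":"YXBpVmVyc2lvbjogdjE="}}'
echo "$SECRET_JSON" | jq -r .data.value | base64 --decode
# → apiVersion: v1
```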

The kubeconfig can then be used to apply a CNI for networking, for example, Calico:

```sh
KUBECONFIG=vsphere-quickstart.kubeconfig kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```

After that, you should see your nodes become Ready:

```sh
$ KUBECONFIG=vsphere-quickstart.kubeconfig kubectl get nodes
NAME                       STATUS   ROLES    AGE   VERSION
vsphere-quickstart-9qtfd   Ready    master   47m   v1.17.3
```

## Custom cluster templates

The provided cluster templates are quickstarts. If you need anything specific that requires a more complex setup, we recommend using a custom template:

```sh
$ clusterctl generate cluster vsphere-quickstart \
    --infrastructure vsphere \
    --kubernetes-version v1.17.3 \
    --control-plane-machine-count 1 \
    --worker-machine-count 3 \
    --from ~/workspace/custom-cluster-template.yaml > custom-cluster.yaml
```