🐛 Update getting started notes #448

Merged
72 changes: 46 additions & 26 deletions docs/book/src/01_user/01_getting-started.md
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
No additional steps are required, and you can install the RKE2 provider with **clusterctl** directly:

```bash
clusterctl init --core cluster-api:v1.7.6 --bootstrap rke2:v0.7.0 --control-plane rke2:v0.7.0 --infrastructure docker:v1.7.6
```

Next, you can proceed to [creating a workload cluster](#create-a-workload-cluster).

### CAPI < v1.6.0

With CAPI & clusterctl versions below v1.6.0, you need a specific configuration. To do this, create a file called `clusterctl.yaml` in the `$HOME/.cluster-api` folder with the following content (substitute `${VERSION}` with a valid semver specification - e.g. v0.5.0 - from [releases](https://github.com/rancher/cluster-api-provider-rke2/releases)):
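The full file contents are collapsed in this view; as a sketch, the provider entries in `clusterctl.yaml` typically look like the following (the URLs and asset names below are illustrative; verify them against the releases page):

```yaml
providers:
  - name: "rke2"
    url: "https://github.com/rancher/cluster-api-provider-rke2/releases/${VERSION}/bootstrap-components.yaml"
    type: "BootstrapProvider"
  - name: "rke2"
    url: "https://github.com/rancher/cluster-api-provider-rke2/releases/${VERSION}/control-plane-components.yaml"
    type: "ControlPlaneProvider"
```

With these entries in place, `clusterctl init` can resolve the `rke2` bootstrap and control-plane providers by name.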
You can now create your first workload cluster by running the following:

There are some sample cluster templates available under the `samples` folder. This section assumes you are using CAPI v1.6.0 or higher.

For this `Getting Started` section, we will be using the `docker` samples available under the `samples/docker/online-default` folder. This folder contains a YAML template file called `rke2-sample.yaml`, which contains environment variable placeholders that can be substituted using the [envsubst](https://github.com/a8m/envsubst/releases) tool. We will use `clusterctl` to generate the manifests from these template files.
Set the following environment variables:
- CABPR_NAMESPACE: the namespace to create the workload cluster in
- CLUSTER_NAME: the name of the workload cluster
- CABPR_CP_REPLICAS: the number of control plane nodes
- CABPR_WK_REPLICAS: the number of worker nodes
- KUBERNETES_VERSION: the Kubernetes version of the workload cluster
- KIND_IMAGE_VERSION: the tag of the `kindest/node` image used for the Docker machines

for example:

```bash
export CABPR_NAMESPACE=example
export CLUSTER_NAME=capd-rke2-test
export CABPR_CP_REPLICAS=3
export CABPR_WK_REPLICAS=2
export KUBERNETES_VERSION=v1.27.3
export KIND_IMAGE_VERSION=v1.27.3
```

The next step is to substitute the values in the YAML using the following commands:

```bash
cd samples/docker/online-default
cat rke2-sample.yaml | clusterctl generate yaml > rke2-docker-example.yaml
```
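Under the hood, `clusterctl generate yaml` performs envsubst-style substitution of `${VAR}` placeholders with values from the environment. A minimal shell illustration of that idea (using `sed` purely for demonstration; `clusterctl` does the real substitution):

```shell
# Illustrative only: shows how ${VAR} placeholders get filled from the environment.
export CLUSTER_NAME=capd-rke2-test
export CABPR_CP_REPLICAS=3

# A tiny stand-in for rke2-sample.yaml (single quotes keep the placeholders literal)
template='name: ${CLUSTER_NAME}
replicas: ${CABPR_CP_REPLICAS}'

# Replace each ${VAR} placeholder with the exported value
rendered=$(printf '%s\n' "$template" \
  | sed -e "s/\${CLUSTER_NAME}/$CLUSTER_NAME/" \
        -e "s/\${CABPR_CP_REPLICAS}/$CABPR_CP_REPLICAS/")
printf '%s\n' "$rendered"
```

Running this prints the template with `name: capd-rke2-test` and `replicas: 3` filled in.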

At this moment, you can take some time to study the resulting YAML, then you can apply it:
```bash
kubectl apply -f rke2-docker-example.yaml
```

and see the following output:

```
namespace/example created
cluster.cluster.x-k8s.io/capd-rke2-test created
dockercluster.infrastructure.cluster.x-k8s.io/capd-rke2-test created
rke2controlplane.controlplane.cluster.x-k8s.io/capd-rke2-test-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/controlplane created
machinedeployment.cluster.x-k8s.io/worker-md-0 created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/worker created
rke2configtemplate.bootstrap.cluster.x-k8s.io/capd-rke2-test-agent created
configmap/capd-rke2-test-lb-config created
```

## Checking the workload cluster
```bash
kubectl get machine -n example
```

and you should see output similar to the following:

```
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
capd-rke2-test-control-plane-9fw9t capd-rke2-test capd-rke2-test-control-plane-9fw9t docker:////capd-rke2-test-control-plane-9fw9t Running 35m v1.27.3+rke2r1
capd-rke2-test-control-plane-m2sdk capd-rke2-test capd-rke2-test-control-plane-m2sdk docker:////capd-rke2-test-control-plane-m2sdk Running 12m v1.27.3+rke2r1
capd-rke2-test-control-plane-zk2xb capd-rke2-test capd-rke2-test-control-plane-zk2xb docker:////capd-rke2-test-control-plane-zk2xb Running 27m v1.27.3+rke2r1
worker-md-0-fhxrw-crn5g capd-rke2-test capd-rke2-test-worker-md-0-fhxrw-crn5g docker:////capd-rke2-test-worker-md-0-fhxrw-crn5g Running 36m v1.27.3+rke2r1
worker-md-0-fhxrw-qsk7n capd-rke2-test capd-rke2-test-worker-md-0-fhxrw-qsk7n docker:////capd-rke2-test-worker-md-0-fhxrw-qsk7n Running 36m v1.27.3+rke2r1
```

## Accessing the workload cluster

Once the cluster is fully provisioned, you can check its status with:

```bash
kubectl get cluster -n example
```

and see an output similar to this:

```
NAMESPACE   NAME             CLUSTERCLASS   PHASE         AGE   VERSION
example     capd-rke2-test                  Provisioned   31m
```

You can also get an “at a glance” view of the cluster and its resources by running:

```bash
clusterctl describe cluster capd-rke2-test -n example
```

This should output something similar to this:

```
NAME READY SEVERITY REASON SINCE MESSAGE
Cluster/capd-rke2-test True 2m56s
├─ClusterInfrastructure - DockerCluster/capd-rke2-test True 31m
├─ControlPlane - RKE2ControlPlane/capd-rke2-test-control-plane True 2m56s
│ └─3 Machines... True 28m See capd-rke2-test-control-plane-9fw9t, capd-rke2-test-control-plane-m2sdk, ...
└─Workers
└─MachineDeployment/worker-md-0 True 15m
└─2 Machines... True 25m See worker-md-0-fhxrw-crn5g, worker-md-0-fhxrw-qsk7n
```

🎉 CONGRATULATIONS! 🎉 You created your first RKE2 cluster with CAPD as an infrastructure provider.

## Using ClusterClass for cluster creation

29 changes: 22 additions & 7 deletions samples/docker/online-default/rke2-sample.yaml
metadata:
namespace: ${CABPR_NAMESPACE}
spec:
replicas: ${CABPR_CP_REPLICAS}
version: ${KUBERNETES_VERSION}+rke2r1
registrationMethod: control-plane-endpoint
rolloutStrategy:
type: "RollingUpdate"
rollingUpdate:
maxSurge: 1
agentConfig:
version: ${KUBERNETES_VERSION}+rke2r1
nodeAnnotations:
test: "true"
serverConfig:
cni: calico
disableComponents:
kubernetesComponents:
- cloudController
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
name: controlplane
nodeDrainTimeout: 30s
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
namespace: ${CABPR_NAMESPACE}
spec:
template:
spec:
customImage: kindest/node:${KIND_IMAGE_VERSION}
bootstrapTimeout: 15m
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
namespace: ${CABPR_NAMESPACE}
spec:
template:
spec:
customImage: kindest/node:${KIND_IMAGE_VERSION}
bootstrapTimeout: 15m
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: RKE2ConfigTemplate
spec:
spec:
template:
spec:
agentConfig:
nodeAnnotations:
test: "true"
---
apiVersion: v1
kind: ConfigMap
data:
http-check expect status 403
{{range $server, $address := .BackendServers}}
server {{ $server }} {{ $address }}:9345 check check-ssl verify none
{{- end}}