diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml new file mode 100644 index 00000000..bf2b400f --- /dev/null +++ b/.github/workflows/docs.yml @@ -0,0 +1,31 @@ +name: Deploy docs +on: + push: + branches: + - main + workflow_dispatch: + +jobs: + deploy: + runs-on: ubuntu-latest + permissions: + contents: write # To push a branch + pages: write # To push to a GitHub Pages site + id-token: write # To update the deployment status + steps: + - uses: actions/checkout@v4 + with: + fetch-depth: 0 + - name: Install mdbook and build book + run: | + cd ./docs/book + make build + - name: Setup Pages + uses: actions/configure-pages@v5 + - name: Upload artifact + uses: actions/upload-pages-artifact@v3 + with: + path: './docs/book/book' + - name: Deploy to GitHub Pages + id: deployment + uses: actions/deploy-pages@v4 \ No newline at end of file diff --git a/README.md b/README.md index 28080ec1..1c8aacf3 100644 --- a/README.md +++ b/README.md @@ -13,325 +13,10 @@ Cluster API Provider RKE2 is a combination of 2 provider types, a __Cluster API ------ ## Getting Started -Cluster API Provider RKE2 is compliant with the `clusterctl` contract, which means that `clusterctl` simplifies its deployment to the CAPI Management Cluster. In this Getting Started guide, we will be using the RKE2 Provider with the `docker` provider (also called `CAPD`). +Follow our [getting started guide](./docs/book/src/getting-started.md) to start creating RKE2 clusters with CAPI. -### Management Cluster - -In order to use this provider, you need to have a management cluster available to you and have your current KUBECONFIG context set to talk to that cluster. If you do not have a cluster available to you, you can create a `kind` cluster. These are the steps needed to achieve that: -1. Ensure kind is installed (https://kind.sigs.k8s.io/docs/user/quick-start/#installation) -2. Create a special `kind` configuration file if you intend to use the Docker infrastructure provider: - -```bash -cat > kind-cluster-with-extramounts.yaml <= v1.6.0 - -No additional steps are required and you can install the RKE2 provider with **clusterctl** directly: - -```bash -clusterctl init --bootstrap rke2 --control-plane rke2 --infrastructure docker -``` - -#### CAPI < v1.6.0 - -With CAPI & clusterctl versions less than v1.6.0 you need a specific configuration. To do this create a file called `clusterctl.yaml` in the `$HOME/.cluster-api` folder with the following content (substitute `${VERSION}` with a valid semver specification - e.g. v0.5.0 - from [releases](https://github.com/rancher/cluster-api-provider-rke2/releases)): - -```yaml -providers: - - name: "rke2" - url: "https://github.com/rancher/cluster-api-provider-rke2/releases/${VERSION}/bootstrap-components.yaml" - type: "BootstrapProvider" - - name: "rke2" - url: "https://github.com/rancher/cluster-api-provider-rke2/releases/${VERSION}/control-plane-components.yaml" - type: "ControlPlaneProvider" -``` -> NOTE: Due to some issue related to how `CAPD` creates Load Balancer healthchecks, it is necessary to use a fork of `CAPD` by providing in the above configuration file the following : - -```yaml - - name: "docker" - url: "https://github.com/belgaied2/cluster-api/releases/v1.3.3-cabpr-fix/infrastructure-components.yaml" - type: "InfrastructureProvider" -``` - -This configuration tells clusterctl where to look for provider manifests in order to deploy provider components in the management cluster. 
- -The next step is to run the `clusterctl init` command: - -```bash -clusterctl init --bootstrap rke2 --control-plane rke2 --infrastructure docker:v1.3.3-cabpr-fix -``` - -This should output something similar to the following: - -``` -Fetching providers -Installing cert-manager Version="v1.10.1" -Waiting for cert-manager to be available... -Installing Provider="cluster-api" Version="v1.3.3" TargetNamespace="capi-system" -Installing Provider="bootstrap-rke2" Version="v0.1.0-alpha.1" TargetNamespace="rke2-bootstrap-system" -Installing Provider="control-plane-rke2" Version="v0.1.0-alpha.1" TargetNamespace="rke2-control-plane-system" - -Your management cluster has been initialized successfully! - -You can now create your first workload cluster by running the following: - - clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f - -``` - -### Create a workload cluster - -There are some sample cluster templates available under the `samples` folder. This section assumes you are using CAPI v1.6.0 or higher. - -For this `Getting Started` section, we will be using the `docker` samples available under `samples/docker/oneline-default` folder. This folder contains a YAML template file called `rke2-sample.yaml` which contains environment variable placeholders which can be substituted using the [envsubst](https://github.com/a8m/envsubst/releases) tool. We will use `clusterctl` to generate the manifests from these template files. -Set the following environment variables: -- CABPR_NAMESPACE -- CLUSTER_NAME -- CABPR_CP_REPLICAS -- CABPR_WK_REPLICAS -- KUBERNETES_VERSION - -for example: - -```bash -export CABPR_NAMESPACE=example -export CLUSTER_NAME=capd-rke2-test -export CABPR_CP_REPLICAS=3 -export CABPR_WK_REPLICAS=2 -export KUBERNETES_VERSION=v1.24.6 -``` - -The next step is to substitue the values in the YAML using the following commands: - -```bash -cat rke2-sample.yaml | clusterctl generate yaml > rke2-docker-example.yaml -``` - -At this moment, you can take some time to study the resulting YAML, then you can apply it to the management cluster: - -```bash -kubectl apply -f rke2-docker-example.yaml -``` -and see the following output: -``` -namespace/example created -cluster.cluster.x-k8s.io/rke2-test created -dockercluster.infrastructure.cluster.x-k8s.io/rke2-test created -rke2controlplane.controlplane.cluster.x-k8s.io/rke2-test-control-plane created -dockermachinetemplate.infrastructure.cluster.x-k8s.io/controlplane created -machinedeployment.cluster.x-k8s.io/worker-md-0 created -dockermachinetemplate.infrastructure.cluster.x-k8s.io/worker created -rke2configtemplate.bootstrap.cluster.x-k8s.io/rke2-test-agent created -``` - -### Checking the workload cluster - -After waiting several minutes, you can check the state of CAPI machines, by running the following command: - -```bash -kubectl get machine -n example -``` - -and you should see output similar to the following: -``` -NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION -capd-rke2-test-control-plane-4njnd capd-rke2-test capd-rke2-test-control-plane-4njnd docker:////capd-rke2-test-control-plane-4njnd Running 5m18s v1.24.6 -capd-rke2-test-control-plane-rccsk capd-rke2-test capd-rke2-test-control-plane-rccsk docker:////capd-rke2-test-control-plane-rccsk Running 3m1s v1.24.6 -capd-rke2-test-control-plane-v5g8v capd-rke2-test capd-rke2-test-control-plane-v5g8v docker:////capd-rke2-test-control-plane-v5g8v Running 8m4s v1.24.6 -worker-md-0-6d4944f5b6-k5xxw capd-rke2-test capd-rke2-test-worker-md-0-6d4944f5b6-k5xxw 
docker:////capd-rke2-test-worker-md-0-6d4944f5b6-k5xxw Running 8m6s v1.24.6 -worker-md-0-6d4944f5b6-qjbjh capd-rke2-test capd-rke2-test-worker-md-0-6d4944f5b6-qjbjh docker:////capd-rke2-test-worker-md-0-6d4944f5b6-qjbjh Running 8m6s v1.24.6 -``` - -You can now get the kubeconfig file for the workload cluster using : - -```bash -clusterctl get kubeconfig capd-rke2-test -n example > ~/capd-rke2-test-kubeconfig.yaml -export KUBECONFIG=~/capd-rke2-test-kubeconfig.yaml -``` - -and query the newly created cluster using: - -```bash -kubectl cluster-info -``` - -and see output like this: - -``` -Kubernetes control plane is running at https://172.18.0.5:6443 -CoreDNS is running at https://172.18.0.5:6443/api/v1/namespaces/kube-system/services/rke2-coredns-rke2-coredns:udp-53/proxy - -To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. -``` - -:tada: CONGRATULATIONS ! :tada: You created your first RKE2 cluster with CAPD as an infrastructure provider. - -### Using ClusterClass for cluster creation - -This provider supports using [ClusterClass](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20210526-cluster-class-and-managed-topologies.md), a Cluster API feature that implements an extra level of abstraction on top of the existing Cluster API functionality. The `ClusterClass` object is used to define a collection of template resources (control plane and machine deployment) which are used to generate one or more clusters of the same flavor. - -If you are interested in leveraging this functionality, you can refer to the examples [here](./samples/docker/clusterclass/): -- [clusterclass-quick-start.yaml](./samples/docker/clusterclass/clusterclass-quick-start.yaml): creates a sample `ClusterClass` and necessary resources. -- [rke2-sample.yaml](./samples/docker/clusterclass/rke2-sample.yaml): creates a workload cluster using the `ClusterClass`. - -As with other sample templates, you will need to set a number environment variables: -- CLUSTER_NAME -- CABPR_CP_REPLICAS -- CABPR_WK_REPLICAS -- KUBERNETES_VERSION -- KIND_IP - -for example: - -```bash -export CLUSTER_NAME=capd-rke2-clusterclass -export CABPR_CP_REPLICAS=3 -export CABPR_WK_REPLICAS=2 -export KUBERNETES_VERSION=v1.25.11 -export KIND_IP=192.168.20.20 -``` - -**Remember that, since we are using Kind, the value of `KIND_IP` must be an IP address in the range of the `kind` network.** -You can check the range Docker assigns to this network by inspecting it: - -```bash -docker network inspect kind -``` - -The next step is to substitue the values in the YAML using the following commands: - -```bash -cat clusterclass-quick-start.yaml | clusterctl generate yaml > clusterclass-example.yaml -``` - -At this moment, you can take some time to study the resulting YAML, then you can apply it to the management cluster: - -```bash -kubectl apply -f clusterclass-example.yaml -``` - -This will create a new `ClusterClass` template that can be used to provision one or multiple workload clusters of the same flavor. -To do so, you can follow the same procedure and substitute the values in the YAML for the cluster definition: - -```bash -cat rke2-sample.yaml | clusterctl generate yaml > rke2-clusterclass-example.yaml -``` - -And then apply the resulting YAML file to create a cluster from the existing `ClusterClass`. 
-```bash -kubectl apply -f rke2-clusterclass-example.yaml -``` - -## Testing the DEV main branch -These instructions are for development purposes initially and will be changed in the future for user facing instructions. - -1. Clone the [Cluster API Repo](https://github.com/kubernetes-sigs/cluster-api) into the **GOPATH** - -> **Why clone into the GOPATH?** There have been historic issues with code generation tools when they are run outside the go path - -2. Fork the [Cluster API Provider RKE2](https://github.com/rancher/cluster-api-provider-rke2) repo -3. Clone your new repo into the **GOPATH** (i.e. `~/go/src/github.com/yourname/cluster-api-provider-rke2`) -4. Ensure **Tilt** and **kind** are installed -5. Create a `tilt-settings.json` file in the root of your forked/cloned `cluster-api` directory. -6. Add the following contents to the file (replace "yourname" with your github account name): - -```json -{ - "default_registry": "ghcr.io/yourname", - "provider_repos": ["../../github.com/yourname/cluster-api-provider-rke2"], - "enable_providers": ["docker", "rke2-bootstrap", "rke2-control-plane"], - "kustomize_substitutions": { - "EXP_MACHINE_POOL": "true", - "EXP_CLUSTER_RESOURCE_SET": "true" - }, - "extra_args": { - "rke2-bootstrap": ["--v=4"], - "rke2-control-plane": ["--v=4"], - "core": ["--v=4"] - }, - "debug": { - "rke2-bootstrap": { - "continue": true, - "port": 30001 - }, - "rke2-control-plane": { - "continue": true, - "port": 30002 - } - } -} -``` - -> NOTE: Until this [bug](https://github.com/kubernetes-sigs/cluster-api/pull/7482) merged in CAPI you will have to make the changes locally to your clone of CAPI. - -7. Open another terminal (or pane) and go to the `cluster-api` directory. -8. Run the following to create a configuration for kind: - -```bash -cat > kind-cluster-with-extramounts.yaml < NOTE: if you are using Docker Desktop v4.13 or above then you will you will encounter issues from here. Until a permanent solution is found its recommended you use v4.12 - -9. Run the following command to create a local kind cluster: - -```bash -kind create cluster --config kind-cluster-with-extramounts.yaml -``` - -10. Now start tilt by running the following: - -```bash -tilt up -``` - -11. Press the **space** key to see the Tilt web ui and check that everything goes green. - -## Known Issues - -### When using CAPD < v1.6.0 unmodified, Cluster creation is stuck after first node and API is not reachable - -If you use `docker` as your infrastructure provider without any modification, Cluster creation will stall after provisioning the first node, and the API will not be available using the LB address. This is caused by Load Balancer configuration used in CAPD which is not compatible with RKE2. Therefore, it is necessary to use our own fork of `v1.3.3` by using a specific clusterctl configuration. +## Developer Guide +Check our [developer guide](./docs/book/src/developers/development.md) for instructions on how to setup your dev environment in order to contribute to this project. ## Get in contact You can get in contact with us via the [#capbr](https://rancher-users.slack.com/archives/C046X0CDKCH) channel on the [Rancher Users Slack](https://slack.rancher.io/). 
diff --git a/docs/book/.gitignore b/docs/book/.gitignore new file mode 100644 index 00000000..7585238e --- /dev/null +++ b/docs/book/.gitignore @@ -0,0 +1 @@ +book diff --git a/docs/book/Makefile b/docs/book/Makefile new file mode 100644 index 00000000..ba8405c8 --- /dev/null +++ b/docs/book/Makefile @@ -0,0 +1,27 @@ +MDBOOK_VERSION := v0.4.40 +TOOLS_DIR := $(realpath ../../hack/tools) +BIN_DIR := bin +TOOLS_BIN_DIR := $(TOOLS_DIR)/$(BIN_DIR) +MDBOOK_INSTALL := $(realpath ../../scripts/install-mdbook.sh) +EMBED := $(TOOLS_BIN_DIR)/mdbook-embed +MDBOOK := $(TOOLS_BIN_DIR)/mdbook + +$(TOOLS_BIN_DIR)/%: + make -C $(TOOLS_DIR) $(subst $(TOOLS_DIR)/,,$@) + +$(MDBOOK): + $(MDBOOK_INSTALL) $(MDBOOK_VERSION) $(TOOLS_BIN_DIR) + +BOOK_DEPS := $(MDBOOK) $(EMBED) + +.PHONY: serve +serve: $(BOOK_DEPS) ## Run a local web server with the compiled book + $(MDBOOK) serve + +.PHONY: build +build: $(BOOK_DEPS) ## Build the book + $(MDBOOK) build + +.PHONY: clean +clean: + rm -rf book \ No newline at end of file diff --git a/docs/book/README.md b/docs/book/README.md new file mode 100644 index 00000000..ba1b3d88 --- /dev/null +++ b/docs/book/README.md @@ -0,0 +1,23 @@ +# Preview book changes locally + +It is easy to preview your local changes to the book before submitting a PR: + +1. Build the local copy of the book from the `docs/book` path: + + ```shell + make build + ``` + +1. To preview the book contents run: + + ```shell + make serve + ``` + +This should serve the book at [localhost:3000](http://localhost:3000/). You can keep running `make serve` and continue making doc changes. mdBook will detect your changes, render them and refresh your browser page automatically. + +1. Clean mdBook auto-generated content from `docs/book/book` path once you have finished local preview: + + ```shell + make clean + ``` \ No newline at end of file diff --git a/docs/book/book.toml b/docs/book/book.toml new file mode 100644 index 00000000..b9092fcd --- /dev/null +++ b/docs/book/book.toml @@ -0,0 +1,9 @@ +[book] +authors = ["The Cluster API Provider RKE2 Maintainers"] +language = "en" +multilingual = false +src = "src" +title = "Kubernetes Cluster API Provider RKE2" + +[preprocessor.embed] +command = "./util-embed.sh" \ No newline at end of file diff --git a/docs/book/src/00_introduction.md b/docs/book/src/00_introduction.md new file mode 100644 index 00000000..45deae38 --- /dev/null +++ b/docs/book/src/00_introduction.md @@ -0,0 +1,22 @@ +# Cluster API Provider RKE2 + +![GitHub](https://img.shields.io/github/license/rancher/cluster-api-provider-rke2) + +------ + +## What is Cluster API Provider RKE2 + +The [Cluster API](https://cluster-api.sigs.k8s.io/) brings declarative, Kubernetes-style APIs to cluster creation, configuration and management. + +Cluster API Provider RKE2 (CAPRKE2) is a combination of 2 provider types, a __Cluster API Control Plane Provider__ for provisioning Kubernetes control plane nodes and a __Cluster API Bootstrap Provider__ for bootstrapping Kubernetes on a machine where [RKE2](https://docs.rke2.io/) is used as the Kubernetes distro. + +------ + +## Getting Started +Follow our [getting started guide](./01_user/01_getting-started.md) to start creating RKE2 clusters with CAPI. + +## Developer Guide +Check our [developer guide](./03_developer/01_development.md) for instructions on how to setup your dev environment in order to contribute to this project. 
+ +## Get in contact +You can get in contact with us via the [#capbr](https://rancher-users.slack.com/archives/C046X0CDKCH) channel on the [Rancher Users Slack](https://slack.rancher.io/). diff --git a/docs/book/src/01_user/00.md b/docs/book/src/01_user/00.md new file mode 100644 index 00000000..ad307dec --- /dev/null +++ b/docs/book/src/01_user/00.md @@ -0,0 +1,3 @@ +# User guide + +This section contains a getting started guide to help new users utilise CAPRKE2. \ No newline at end of file diff --git a/docs/book/src/01_user/01_getting-started.md b/docs/book/src/01_user/01_getting-started.md new file mode 100644 index 00000000..1ddba2d7 --- /dev/null +++ b/docs/book/src/01_user/01_getting-started.md @@ -0,0 +1,252 @@ +# Getting Started + +Cluster API Provider RKE2 is compliant with the `clusterctl` contract, which means that `clusterctl` simplifies its deployment to the CAPI Management Cluster. In this Getting Started guide, we will be using the RKE2 Provider with the `docker` provider (also called `CAPD`). + +## Prerequisites +- [clusterctl](https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl) to handle the lifecycle of a Cluster API management cluster +- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) to apply the workload cluster manifests that `clusterctl` generates +- [kind](https://kind.sigs.k8s.io/) and [docker](https://www.docker.com/) to create a local Cluster API management cluster + +## Management Cluster + +In order to use this provider, you need to have a management cluster available to you and have your current KUBECONFIG context set to talk to that cluster. If you do not have a cluster available to you, you can create a `kind` cluster. These are the steps needed to achieve that: +1. Ensure kind is installed (https://kind.sigs.k8s.io/docs/user/quick-start/#installation) +2. Create a special `kind` configuration file if you intend to use the Docker infrastructure provider: + +```bash +cat > kind-cluster-with-extramounts.yaml <= v1.6.0 + +No additional steps are required and you can install the RKE2 provider with **clusterctl** directly: + +```bash +clusterctl init --bootstrap rke2 --control-plane rke2 --infrastructure docker +``` + +### CAPI < v1.6.0 + +With CAPI & clusterctl versions less than v1.6.0 you need a specific configuration. To do this create a file called `clusterctl.yaml` in the `$HOME/.cluster-api` folder with the following content (substitute `${VERSION}` with a valid semver specification - e.g. v0.5.0 - from [releases](https://github.com/rancher/cluster-api-provider-rke2/releases)): + +```yaml +providers: + - name: "rke2" + url: "https://github.com/rancher/cluster-api-provider-rke2/releases/${VERSION}/bootstrap-components.yaml" + type: "BootstrapProvider" + - name: "rke2" + url: "https://github.com/rancher/cluster-api-provider-rke2/releases/${VERSION}/control-plane-components.yaml" + type: "ControlPlaneProvider" +``` +> NOTE: Due to some issue related to how `CAPD` creates Load Balancer healthchecks, it is necessary to use a fork of `CAPD` by providing in the above configuration file the following : + +```yaml + - name: "docker" + url: "https://github.com/belgaied2/cluster-api/releases/v1.3.3-cabpr-fix/infrastructure-components.yaml" + type: "InfrastructureProvider" +``` + +This configuration tells clusterctl where to look for provider manifests in order to deploy provider components in the management cluster. 
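+
+As an optional sanity check (not required for the install), you can ask `clusterctl` to list the provider repositories it now knows about and confirm the RKE2 entries are picked up; the `grep` filter below is only illustrative:
+
+```bash
+clusterctl config repositories | grep -E "rke2|docker"
+```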
+ +The next step is to run the `clusterctl init` command: + +```bash +clusterctl init --bootstrap rke2 --control-plane rke2 --infrastructure docker:v1.3.3-cabpr-fix +``` + +This should output something similar to the following: + +``` +Fetching providers +Installing cert-manager Version="v1.10.1" +Waiting for cert-manager to be available... +Installing Provider="cluster-api" Version="v1.3.3" TargetNamespace="capi-system" +Installing Provider="bootstrap-rke2" Version="v0.1.0-alpha.1" TargetNamespace="rke2-bootstrap-system" +Installing Provider="control-plane-rke2" Version="v0.1.0-alpha.1" TargetNamespace="rke2-control-plane-system" + +Your management cluster has been initialized successfully! + +You can now create your first workload cluster by running the following: + + clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f - +``` + +## Create a workload cluster + +There are some sample cluster templates available under the `samples` folder. This section assumes you are using CAPI v1.6.0 or higher. + +For this `Getting Started` section, we will be using the `docker` samples available under `samples/docker/oneline-default` folder. This folder contains a YAML template file called `rke2-sample.yaml` which contains environment variable placeholders which can be substituted using the [envsubst](https://github.com/a8m/envsubst/releases) tool. We will use `clusterctl` to generate the manifests from these template files. +Set the following environment variables: +- CABPR_NAMESPACE +- CLUSTER_NAME +- CABPR_CP_REPLICAS +- CABPR_WK_REPLICAS +- KUBERNETES_VERSION + +for example: + +```bash +export CABPR_NAMESPACE=example +export CLUSTER_NAME=capd-rke2-test +export CABPR_CP_REPLICAS=3 +export CABPR_WK_REPLICAS=2 +export KUBERNETES_VERSION=v1.24.6 +``` + +The next step is to substitue the values in the YAML using the following commands: + +```bash +cat rke2-sample.yaml | clusterctl generate yaml > rke2-docker-example.yaml +``` + +At this moment, you can take some time to study the resulting YAML, then you can apply it to the management cluster: + +```bash +kubectl apply -f rke2-docker-example.yaml +``` +and see the following output: +``` +namespace/example created +cluster.cluster.x-k8s.io/rke2-test created +dockercluster.infrastructure.cluster.x-k8s.io/rke2-test created +rke2controlplane.controlplane.cluster.x-k8s.io/rke2-test-control-plane created +dockermachinetemplate.infrastructure.cluster.x-k8s.io/controlplane created +machinedeployment.cluster.x-k8s.io/worker-md-0 created +dockermachinetemplate.infrastructure.cluster.x-k8s.io/worker created +rke2configtemplate.bootstrap.cluster.x-k8s.io/rke2-test-agent created +``` + +## Checking the workload cluster + +After waiting several minutes, you can check the state of CAPI machines, by running the following command: + +```bash +kubectl get machine -n example +``` + +and you should see output similar to the following: +``` +NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION +capd-rke2-test-control-plane-4njnd capd-rke2-test capd-rke2-test-control-plane-4njnd docker:////capd-rke2-test-control-plane-4njnd Running 5m18s v1.24.6 +capd-rke2-test-control-plane-rccsk capd-rke2-test capd-rke2-test-control-plane-rccsk docker:////capd-rke2-test-control-plane-rccsk Running 3m1s v1.24.6 +capd-rke2-test-control-plane-v5g8v capd-rke2-test capd-rke2-test-control-plane-v5g8v docker:////capd-rke2-test-control-plane-v5g8v Running 8m4s v1.24.6 +worker-md-0-6d4944f5b6-k5xxw capd-rke2-test capd-rke2-test-worker-md-0-6d4944f5b6-k5xxw 
docker:////capd-rke2-test-worker-md-0-6d4944f5b6-k5xxw Running 8m6s v1.24.6 +worker-md-0-6d4944f5b6-qjbjh capd-rke2-test capd-rke2-test-worker-md-0-6d4944f5b6-qjbjh docker:////capd-rke2-test-worker-md-0-6d4944f5b6-qjbjh Running 8m6s v1.24.6 +``` + +You can now get the kubeconfig file for the workload cluster using : + +```bash +clusterctl get kubeconfig capd-rke2-test -n example > ~/capd-rke2-test-kubeconfig.yaml +export KUBECONFIG=~/capd-rke2-test-kubeconfig.yaml +``` + +and query the newly created cluster using: + +```bash +kubectl cluster-info +``` + +and see output like this: + +``` +Kubernetes control plane is running at https://172.18.0.5:6443 +CoreDNS is running at https://172.18.0.5:6443/api/v1/namespaces/kube-system/services/rke2-coredns-rke2-coredns:udp-53/proxy + +To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. +``` + +:tada: CONGRATULATIONS ! :tada: You created your first RKE2 cluster with CAPD as an infrastructure provider. + +## Using ClusterClass for cluster creation + +This provider supports using [ClusterClass](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20210526-cluster-class-and-managed-topologies.md), a Cluster API feature that implements an extra level of abstraction on top of the existing Cluster API functionality. The `ClusterClass` object is used to define a collection of template resources (control plane and machine deployment) which are used to generate one or more clusters of the same flavor. + +If you are interested in leveraging this functionality, you can refer to the examples [here](./samples/docker/clusterclass/): +- [clusterclass-quick-start.yaml](./samples/docker/clusterclass/clusterclass-quick-start.yaml): creates a sample `ClusterClass` and necessary resources. +- [rke2-sample.yaml](./samples/docker/clusterclass/rke2-sample.yaml): creates a workload cluster using the `ClusterClass`. + +As with other sample templates, you will need to set a number environment variables: +- CLUSTER_NAME +- CABPR_CP_REPLICAS +- CABPR_WK_REPLICAS +- KUBERNETES_VERSION +- KIND_IP + +for example: + +```bash +export CLUSTER_NAME=capd-rke2-clusterclass +export CABPR_CP_REPLICAS=3 +export CABPR_WK_REPLICAS=2 +export KUBERNETES_VERSION=v1.25.11 +export KIND_IP=192.168.20.20 +``` + +**Remember that, since we are using Kind, the value of `KIND_IP` must be an IP address in the range of the `kind` network.** +You can check the range Docker assigns to this network by inspecting it: + +```bash +docker network inspect kind +``` + +The next step is to substitue the values in the YAML using the following commands: + +```bash +cat clusterclass-quick-start.yaml | clusterctl generate yaml > clusterclass-example.yaml +``` + +At this moment, you can take some time to study the resulting YAML, then you can apply it to the management cluster: + +```bash +kubectl apply -f clusterclass-example.yaml +``` + +This will create a new `ClusterClass` template that can be used to provision one or multiple workload clusters of the same flavor. +To do so, you can follow the same procedure and substitute the values in the YAML for the cluster definition: + +```bash +cat rke2-sample.yaml | clusterctl generate yaml > rke2-clusterclass-example.yaml +``` + +And then apply the resulting YAML file to create a cluster from the existing `ClusterClass`. 
+```bash +kubectl apply -f rke2-clusterclass-example.yaml +``` + +## Known Issues + +### When using CAPD < v1.6.0 unmodified, Cluster creation is stuck after first node and API is not reachable + +If you use `docker` as your infrastructure provider without any modification, Cluster creation will stall after provisioning the first node, and the API will not be available using the LB address. This is caused by Load Balancer configuration used in CAPD which is not compatible with RKE2. Therefore, it is necessary to use our own fork of `v1.3.3` by using a specific clusterctl configuration. \ No newline at end of file diff --git a/docs/book/src/02_topics/00.md b/docs/book/src/02_topics/00.md new file mode 100644 index 00000000..504c215f --- /dev/null +++ b/docs/book/src/02_topics/00.md @@ -0,0 +1,3 @@ +# Topics + +This section contains more detailed information about the features that CAPRKE2 offers and how to use them. \ No newline at end of file diff --git a/docs/AIR-GAPPED-INSTALL.md b/docs/book/src/02_topics/01_air-gapped-installation.md similarity index 95% rename from docs/AIR-GAPPED-INSTALL.md rename to docs/book/src/02_topics/01_air-gapped-installation.md index c78db3f5..ca05eb26 100644 --- a/docs/AIR-GAPPED-INSTALL.md +++ b/docs/book/src/02_topics/01_air-gapped-installation.md @@ -53,4 +53,4 @@ Considering the above tradeoffs, base images used for Air-Gapped need to comply In order to deploy RKE2 Clusters in Air-Gapped mode using CABPR, you need to set the fields `spec.agentConfig.airGapped` for the RKE2ControlPlane object and `spec.template.spec.agentConfig.airGapped` for RKE2ConfigTemplate object to `true`. -You can check a reference implementation for CAPD [here](/samples/docker/air-gapped/) including configuration for CAPD custom image. \ No newline at end of file +You can check a reference implementation for CAPD [here](https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/docker/air-gapped) including configuration for CAPD custom image. \ No newline at end of file diff --git a/docs/registration-methods.md b/docs/book/src/02_topics/02_node-registration-methods.md similarity index 100% rename from docs/registration-methods.md rename to docs/book/src/02_topics/02_node-registration-methods.md diff --git a/docs/book/src/03_developer/00.md b/docs/book/src/03_developer/00.md new file mode 100644 index 00000000..f9b36313 --- /dev/null +++ b/docs/book/src/03_developer/00.md @@ -0,0 +1,5 @@ +# Developer Guide + +This section describes the workflow for regular developer tasks, such as: +- Development guide +- Releasing a new version of CAPRKE2 diff --git a/docs/book/src/03_developer/01_development.md b/docs/book/src/03_developer/01_development.md new file mode 100644 index 00000000..27b5c55f --- /dev/null +++ b/docs/book/src/03_developer/01_development.md @@ -0,0 +1,74 @@ +# Development + +The following instructions are for development purposes. + +1. Clone the [Cluster API Repo](https://github.com/kubernetes-sigs/cluster-api) into the **GOPATH** + +> **Why clone into the GOPATH?** There have been historic issues with code generation tools when they are run outside the go path + +2. Fork the [Cluster API Provider RKE2](https://github.com/rancher/cluster-api-provider-rke2) repo +3. Clone your new repo into the **GOPATH** (i.e. `~/go/src/github.com/yourname/cluster-api-provider-rke2`) +4. Ensure **Tilt** and **kind** are installed +5. Create a `tilt-settings.json` file in the root of your forked/cloned `cluster-api` directory. +6. 
Add the following contents to the file (replace "yourname" with your github account name): + +```json +{ + "default_registry": "ghcr.io/yourname", + "provider_repos": ["../../github.com/yourname/cluster-api-provider-rke2"], + "enable_providers": ["docker", "rke2-bootstrap", "rke2-control-plane"], + "kustomize_substitutions": { + "EXP_MACHINE_POOL": "true", + "EXP_CLUSTER_RESOURCE_SET": "true" + }, + "extra_args": { + "rke2-bootstrap": ["--v=4"], + "rke2-control-plane": ["--v=4"], + "core": ["--v=4"] + }, + "debug": { + "rke2-bootstrap": { + "continue": true, + "port": 30001 + }, + "rke2-control-plane": { + "continue": true, + "port": 30002 + } + } +} +``` + +> NOTE: Until this [bug](https://github.com/kubernetes-sigs/cluster-api/pull/7482) merged in CAPI you will have to make the changes locally to your clone of CAPI. + +7. Open another terminal (or pane) and go to the `cluster-api` directory. +8. Run the following to create a configuration for kind: + +```bash +cat > kind-cluster-with-extramounts.yaml < NOTE: if you are using Docker Desktop v4.13 or above then you will you will encounter issues from here. Until a permanent solution is found its recommended you use v4.12 + +9. Run the following command to create a local kind cluster: + +```bash +kind create cluster --config kind-cluster-with-extramounts.yaml +``` + +10. Now start tilt by running the following: + +```bash +tilt up +``` + +11. Press the **space** key to see the Tilt web ui and check that everything goes green. \ No newline at end of file diff --git a/docs/book/src/03_developer/02_releasing.md b/docs/book/src/03_developer/02_releasing.md new file mode 100644 index 00000000..ddceb534 --- /dev/null +++ b/docs/book/src/03_developer/02_releasing.md @@ -0,0 +1 @@ +{{#include ../../../../docs/release.md}} \ No newline at end of file diff --git a/docs/book/src/04_reference/00.md b/docs/book/src/04_reference/00.md new file mode 100644 index 00000000..66f9c958 --- /dev/null +++ b/docs/book/src/04_reference/00.md @@ -0,0 +1,3 @@ +# Reference + +This section contains reference documentation for CAPRKE2 API types. \ No newline at end of file diff --git a/docs/book/src/SUMMARY.md b/docs/book/src/SUMMARY.md new file mode 100644 index 00000000..37ff51a3 --- /dev/null +++ b/docs/book/src/SUMMARY.md @@ -0,0 +1,12 @@ +# Summary + +- [Introduction](./00_introduction.md) +- [User Guide](./01_user/00.md) + - [Getting Started](./01_user/01_getting-started.md) +- [Topics](./02_topics/00.md) + - [Air-gapped installation](./02_topics/01_air-gapped-installation.md) + - [Node registration methods](./02_topics/02_node-registration-methods.md) +- [Developer Guide](./03_developer/00.md) + - [Development](./03_developer/01_development.md) + - [Releasing](./03_developer/02_releasing.md) +- [Reference](./04_reference/00.md) \ No newline at end of file diff --git a/docs/book/util-embed.sh b/docs/book/util-embed.sh new file mode 100644 index 00000000..d786bc4b --- /dev/null +++ b/docs/book/util-embed.sh @@ -0,0 +1,24 @@ +#!/bin/bash + +# Copyright 2019 The Kubernetes Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +set -o errexit +set -o nounset +set -o pipefail + +REPO_ROOT=$(git rev-parse --show-toplevel) +EMBED=${REPO_ROOT}/hack/tools/bin/mdbook-embed +make "${EMBED}" GOPROXY="${GOPROXY:-"https://proxy.golang.org"}" &>/dev/null +${EMBED} "$@" \ No newline at end of file diff --git a/hack/tools/Makefile b/hack/tools/Makefile new file mode 100644 index 00000000..1dc633d0 --- /dev/null +++ b/hack/tools/Makefile @@ -0,0 +1,7 @@ +# Directories. +BIN_DIR := bin + +MDBOOK_EMBED := $(BIN_DIR)/mdbook-embed +$(MDBOOK_EMBED): $(BIN_DIR) go.mod go.sum + go build -tags=tools -o $(BIN_DIR)/mdbook-embed sigs.k8s.io/cluster-api/hack/tools/mdbook/embed + diff --git a/hack/tools/go.mod b/hack/tools/go.mod new file mode 100644 index 00000000..ba36803a --- /dev/null +++ b/hack/tools/go.mod @@ -0,0 +1,7 @@ +module github.com/rancher/cluster-api-provider-rke2/hack/tools + +go 1.22.6 + +require sigs.k8s.io/cluster-api/hack/tools v0.0.0-20240820112706-3abe3058a6a8 + +require sigs.k8s.io/kubebuilder/docs/book/utils v0.0.0-20211028165026-57688c578b5d // indirect diff --git a/hack/tools/go.sum b/hack/tools/go.sum new file mode 100644 index 00000000..b303e55f --- /dev/null +++ b/hack/tools/go.sum @@ -0,0 +1,4 @@ +sigs.k8s.io/cluster-api/hack/tools v0.0.0-20240820112706-3abe3058a6a8 h1:L+YRo/LbSJ+Rc6vMIz007sEkAvY5QW1kptl8StmIxNc= +sigs.k8s.io/cluster-api/hack/tools v0.0.0-20240820112706-3abe3058a6a8/go.mod h1:xZhAF40RPQKqNbmpLg81sxibOAz/Y17bgmdJxZszN7w= +sigs.k8s.io/kubebuilder/docs/book/utils v0.0.0-20211028165026-57688c578b5d h1:KLiQzLW3RZJR19+j4pw2h5iioyAyqCkDBEAFdnGa3N8= +sigs.k8s.io/kubebuilder/docs/book/utils v0.0.0-20211028165026-57688c578b5d/go.mod h1:NRdZafr4zSCseLQggdvIMXa7umxf+Q+PJzrj3wFwiGE= diff --git a/hack/tools/tools.go b/hack/tools/tools.go new file mode 100644 index 00000000..a9822bd6 --- /dev/null +++ b/hack/tools/tools.go @@ -0,0 +1,22 @@ +//go:build tools +// +build tools + +/* +Copyright 2022 The Kubernetes Authors. +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// This package imports things required by build scripts, to force `go mod` to see them as dependencies +package tools + +import ( + _ "sigs.k8s.io/cluster-api/hack/tools/mdbook/embed" +) diff --git a/scripts/install-mdbook.sh b/scripts/install-mdbook.sh new file mode 100755 index 00000000..5720e1c6 --- /dev/null +++ b/scripts/install-mdbook.sh @@ -0,0 +1,37 @@ +#!/bin/bash + +# Copyright 2022 The Kubernetes Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+set -o errexit
+set -o nounset
+set -o pipefail
+
+VERSION=${1}
+OUTPUT_PATH=${2}
+
+# Ensure the output folder exists
+mkdir -p "${OUTPUT_PATH}"
+
+# Determine which mdBook release archive to download for this OS
+RELEASE_NAME=""
+case "$OSTYPE" in
+  darwin*) RELEASE_NAME="x86_64-apple-darwin.tar.gz" ;;
+  linux*) RELEASE_NAME="x86_64-unknown-linux-gnu.tar.gz" ;;
+# msys*) echo "WINDOWS" ;;
+  *) echo "No mdBook release available for: $OSTYPE" && exit 1;;
+esac
+
+# Download and extract the mdBook release into the output folder
+curl -L "https://github.com/rust-lang/mdBook/releases/download/${VERSION}/mdbook-${VERSION}-${RELEASE_NAME}" | tar -xvz -C "${OUTPUT_PATH}"
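
Taken together, the workflow, Makefiles and scripts above give the following local documentation workflow (a minimal sketch, assuming the paths and targets introduced in this diff):

```bash
# From the repository root, using the targets added in docs/book/Makefile:
cd docs/book
make build   # installs mdbook and mdbook-embed into hack/tools/bin, then renders the book into docs/book/book
make serve   # live-previews the book at http://localhost:3000
make clean   # removes the generated docs/book/book directory
```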