Merge pull request #464 from akutz/bugfix/portable-yaml-gen

Fixes YAML generation issues with Docker

k8s-ci-robot authored Jul 25, 2019
2 parents 5c29835 + 1c73f74 commit f721639
Showing 3 changed files with 53 additions and 49 deletions.

docs/getting_started.md (64 changes: 27 additions & 37 deletions)

@@ -90,30 +90,25 @@ export CLUSTER_CIDR='100.96.0.0/11' # (optional) The cluster CIDR of the m
 EOF
 ```
 
-With the above environment variable file it is now possible to generate the manifests needed to bootstrap the management cluster. The following command uses Docker to run an image that has all of the necessary templates and tools to generate the YAML manifests. Please note that the example mounts the current directory as the location where the YAML will be generated. Additionally, the `envvars.txt` file created above is mounted inside the image in order to provide the generation routine with its default values:
+With the above environment variable file it is now possible to generate the manifests needed to bootstrap the management cluster. The following command uses Docker to run an image that has all of the necessary templates and tools to generate the YAML manifests. Additionally, the `envvars.txt` file created above is mounted inside the image in order to provide the generation routine with its default values:
 
 ```shell
-# create the output directory for the management cluster manifests,
-# only required for Linux to work around permissions issues on volume mounts
-$ mkdir -p management-cluster
-
 $ docker run --rm \
-  --user "$(id -u):$(id -g)" \
-  -v "$(pwd)/management-cluster":/out \
-  -v "$(pwd)/envvars.txt":/out/envvars.txt:ro \
+  -v "$(pwd)":/out \
+  -v "$(pwd)/envvars.txt":/envvars.txt:ro \
   gcr.io/cluster-api-provider-vsphere/release/manifests:latest \
   -c management-cluster
 
-done generating ./out/addons.yaml
+done generating ./out/management-cluster/addons.yaml
 done generating ./config/default/manager_image_patch.yaml
-done generating ./out/cluster.yaml
-done generating ./out/machines.yaml
-done generating ./out/machineset.yaml
-done generating ./out/provider-components.yaml
+done generating ./out/management-cluster/cluster.yaml
+done generating ./out/management-cluster/machines.yaml
+done generating ./out/management-cluster/machineset.yaml
+done generating ./out/management-cluster/provider-components.yaml
 
 *** Finished creating initial example yamls in ./out
 
-The files ./out/cluster.yaml and ./out/machines.yaml need to be updated
+The files ./out/management-cluster/cluster.yaml and ./out/management-cluster/machines.yaml need to be updated
 with information about the desired Kubernetes cluster and vSphere environment
 on which the Kubernetes cluster will be created.
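
With the new single mount, the container writes everything beneath the current working directory on the host, so no pre-created output directory is needed. A quick sanity check after generation might look like this (a sketch; paths assume the defaults used in this guide):

```shell
# the -v "$(pwd)":/out mount means the manifests land on the host
# beneath ./out/<cluster-name>
ls ./out/management-cluster
# addons.yaml  cluster.yaml  machines.yaml  machineset.yaml  provider-components.yaml
```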

@@ -128,14 +123,14 @@ Once the manifests are generated, `clusterctl` may be used to create the managem
 clusterctl create cluster \
   --provider vsphere \
   --bootstrap-type kind \
-  --cluster management-cluster/cluster.yaml \
-  --machines management-cluster/machines.yaml \
-  --provider-components management-cluster/provider-components.yaml \
-  --addon-components management-cluster/addons.yaml \
-  --kubeconfig-out management-cluster/kubeconfig
+  --cluster ./out/management-cluster/cluster.yaml \
+  --machines ./out/management-cluster/machines.yaml \
+  --provider-components ./out/management-cluster/provider-components.yaml \
+  --addon-components ./out/management-cluster/addons.yaml \
+  --kubeconfig-out ./out/management-cluster/kubeconfig
 ```
 
-Once `clusterctl` has completed successfully, the file `management-cluster/kubeconfig` may be used to access the new management cluster. This is the **admin** `kubeconfig` for the management cluster, and it may be used to spin up additional clusters with Cluster API. However, the creation of roles with limited access is recommended before creating additional clusters.
+Once `clusterctl` has completed successfully, the file `./out/management-cluster/kubeconfig` may be used to access the new management cluster. This is the **admin** `kubeconfig` for the management cluster, and it may be used to spin up additional clusters with Cluster API. However, the creation of roles with limited access is recommended before creating additional clusters.
 
 **NOTE**: From this point forward `clusterctl` is no longer required to provision new clusters. Workload clusters should be provisioned by applying Cluster API resources directly on the management cluster using `kubectl`.
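
As a quick check that the bootstrap succeeded, the admin kubeconfig can be exercised directly; a minimal sketch, assuming the default paths from this guide:

```shell
# confirm the management cluster is reachable
kubectl --kubeconfig ./out/management-cluster/kubeconfig get nodes

# Cluster API resources are ordinary API objects on the management cluster
kubectl --kubeconfig ./out/management-cluster/kubeconfig get clusters,machines
```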

@@ -146,21 +141,16 @@ With your management cluster bootstrapped, it's time to reap the benefits of Clu
 Using the same Docker command as above, generate resources for a new cluster, this time with a different name:
 
 ```shell
-# create the output directory for the workload cluster manifests,
-# only required for Linux to work around permissions issues on volume mounts
-$ mkdir -p workload-cluster-1
-
 $ docker run --rm \
-  --user "$(id -u):$(id -g)" \
-  -v "$(pwd)/workload-cluster-1":/out \
-  -v "$(pwd)/envvars.txt":/out/envvars.txt:ro \
-  gcr.io/cluster-api-provider-vsphere/release/manifests:latest \
-  -c workload-cluster-1
+  -v "$(pwd)":/out \
+  -v "$(pwd)/envvars.txt":/envvars.txt:ro \
+  gcr.io/cluster-api-provider-vsphere/release/manifests:latest \
+  -c workload-cluster-1
 ```
 
 **NOTE**: The above step is not required to manage your Cluster API resources at this point but is used to simplify this guide. You should manage your Cluster API resources in the same way you would manage your Kubernetes application manifests. Please use the generated manifests only as a reference.
 
-The Cluster and Machine resources in `workload-cluster-1/cluster.yaml` and `workload-cluster-1/machines.yaml` define the workload cluster with the initial control plane node:
+The Cluster and Machine resources in `./out/workload-cluster-1/cluster.yaml` and `./out/workload-cluster-1/machines.yaml` define the workload cluster with the initial control plane node:
 
 ```yaml
 ---
@@ -212,7 +202,7 @@ spec:
   controlPlane: "1.13.6"
 ```
 
-To add 3 additional worker nodes to your cluster, see the generated machineset file `workload-cluster-1/machineset.yaml`:
+To add 3 additional worker nodes to your cluster, see the generated machineset file `./out/workload-cluster-1/machineset.yaml`:
 
 ```yaml
 apiVersion: "cluster.k8s.io/v1alpha1"
@@ -258,27 +248,27 @@ Use `kubectl` with the `kubeconfig` for the management cluster to provision the
 1. Export the management cluster's `kubeconfig` file:
 
 ```shell
-export KUBECONFIG="$(pwd)/management-cluster/kubeconfig"
+export KUBECONFIG="$(pwd)/out/management-cluster/kubeconfig"
 ```
 
 2. Create the workload cluster by applying the cluster manifest:
 
 ```shell
-$ kubectl apply -f workload-cluster-1/cluster.yaml
+$ kubectl apply -f ./out/workload-cluster-1/cluster.yaml
 cluster.cluster.k8s.io/workload-cluster-1 created
 ```
 
 3. Create the control plane nodes for the workload cluster by applying the machines manifest:
 
 ```shell
-$ kubectl apply -f workload-cluster-1/machines.yaml
+$ kubectl apply -f ./out/workload-cluster-1/machines.yaml
 machine.cluster.k8s.io/workload-cluster-1-controlplane-1 created
 ```
 
 4. Create the worker nodes for the workload cluster by applying the machineset manifest:
 
 ```shell
-$ kubectl apply -f workload-cluster-1/machineset.yaml
+$ kubectl apply -f ./out/workload-cluster-1/machineset.yaml
 machineset.cluster.k8s.io/workload-cluster-1-machineset-1 created
 ```
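
Provisioning is asynchronous: the Machine objects exist as soon as they are applied, while the backing vSphere VMs take time to come up. With the management cluster's kubeconfig still exported from step 1, progress can be watched like so:

```shell
# -w streams updates as the provider reconciles each machine
kubectl get machines -w
```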

@@ -299,9 +289,9 @@ The `kubeconfig` file to access workload clusters should be accessible as a Kube
 
 ```shell
 kubectl get secret workload-cluster-1-kubeconfig -o=jsonpath='{.data.value}' | \
-  { base64 -d 2>/dev/null || base64 -D; } >workload-cluster-1/kubeconfig
+  { base64 -d 2>/dev/null || base64 -D; } >./out/workload-cluster-1/kubeconfig
 ```
 
-The new `workload-cluster-1/kubeconfig` file may now be used to access the workload cluster.
+The new `./out/workload-cluster-1/kubeconfig` file may now be used to access the workload cluster.
 
 **NOTE**: Workload clusters do not have any addons applied aside from those added by kubeadm. Nodes in your workload clusters will be in the `NotReady` state until you apply a CNI addon. The `addons.yaml` files generated above include a default Calico addon which you can use; otherwise, apply custom addons based on your use case.
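
For example, applying the generated Calico addon to the workload cluster might look like this (a sketch, assuming the default output paths from this guide):

```shell
# apply the default CNI addon so the workload nodes become Ready
kubectl --kubeconfig ./out/workload-cluster-1/kubeconfig \
  apply -f ./out/workload-cluster-1/addons.yaml
```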

hack/generate-yaml.sh (27 changes: 18 additions & 9 deletions)

@@ -20,10 +20,11 @@ set -o pipefail
 
 # Change directories to the parent directory of the one in which this
 # script is located.
-cd "$(dirname "${BASH_SOURCE[0]}")/.."
+cd "${WORKDIR:-$(dirname "${BASH_SOURCE[0]}")/..}"
+BUILDDIR="${BUILDDIR:-.}"
 
 OUT_DIR="${OUT_DIR:-}"
-TPL_DIR=./cmd/clusterctl/examples/vsphere
+TPL_DIR="${BUILDDIR}"/cmd/clusterctl/examples/vsphere
 
 OVERWRITE=
 CLUSTER_NAME="${CLUSTER_NAME:-capv-mgmt-example}"
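
The `${WORKDIR:-...}` and `${BUILDDIR:-.}` defaults are what make the script portable: inside the image the Dockerfile sets both variables (see the Dockerfile diff below), while a plain checkout falls back to the repository layout. A small illustration of the shell expansion involved (values are illustrative):

```shell
# ${VAR:-default} expands to $VAR when it is set and non-empty,
# otherwise to the default
unset BUILDDIR
echo "${BUILDDIR:-.}"   # prints "."
BUILDDIR=/build
echo "${BUILDDIR:-.}"   # prints "/build"
```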
@@ -85,8 +86,8 @@ export MANAGER_IMAGE="${CAPV_MANAGER_IMAGE}"
 mkdir -p "${OUT_DIR}"
 
 # Load an envvars.txt file if one is found.
-# shellcheck disable=SC1090
-[ -e "${OUT_DIR}/envvars.txt" ] && source "${OUT_DIR}/envvars.txt"
+# shellcheck disable=SC1091
+[ "${DOCKER_ENABLED-}" ] && [ -e "/envvars.txt" ] && source "/envvars.txt"
 
 # shellcheck disable=SC2034
 ADDON_TPL_FILE="${TPL_DIR}"/addons.yaml.template
@@ -103,8 +104,8 @@ MACHINESET_TPL_FILE="${TPL_DIR}"/machineset.yaml.template
 # shellcheck disable=SC2034
 MACHINESET_OUT_FILE="${OUT_DIR}"/machineset.yaml
 
-CAPI_CFG_DIR=./vendor/sigs.k8s.io/cluster-api/config
-CAPV_CFG_DIR=./config
+CAPI_CFG_DIR="${BUILDDIR}"/vendor/sigs.k8s.io/cluster-api/config
+CAPV_CFG_DIR="${BUILDDIR}"/config
 
 COMP_OUT_FILE="${OUT_DIR}"/provider-components.yaml
 # shellcheck disable=SC2034
@@ -176,8 +177,12 @@ verify_cpu_mem_dsk VSPHERE_DISK_GIB 20
 record_and_export KUBERNETES_VERSION ":-${KUBERNETES_VERSION}"
 
 do_envsubst() {
-  python hack/envsubst.py >"${2}" <"${1}"
-  echo "done generating ${2}"
+  python "${BUILDDIR}/hack/envsubst.py" >"${2}" <"${1}"
+  if [ "${DOCKER_ENABLED-}" ]; then
+    echo "done generating ${2/\/build/.}"
+  else
+    echo "done generating ${2}"
+  fi
 }
 
 # Create the output files by substituting the templates with environment vars.
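
The `${2/\/build/.}` expansion in `do_envsubst` is bash pattern substitution: it rewrites the first occurrence of `/build` in the output path to `.`, so that inside the container the "done generating" messages print repository-relative paths rather than container paths. A standalone illustration (the path is illustrative):

```shell
# ${var/pattern/replacement} replaces the first match of pattern;
# the leading slash in the pattern must be escaped
path=/build/config/default/manager_image_patch.yaml
echo "${path/\/build/.}"   # prints ./config/default/manager_image_patch.yaml
```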
@@ -191,7 +196,7 @@
 kustomize build "${CAPI_CFG_DIR}"/default/; } >"${COMP_OUT_FILE}"
 
 cat <<EOF
-Done generating ${COMP_OUT_FILE}
+done generating ${COMP_OUT_FILE}
 *** Finished creating initial example yamls in ${OUT_DIR}
@@ -201,3 +206,7 @@
 Enjoy!
 EOF
 
+# If running in Docker then ensure the contents of the OUT_DIR have
+# the same owner as the volume mounted to the /out directory.
+[ "${DOCKER_ENABLED}" ] && chown -R "$(stat -c '%u:%g' /out)" "${OUT_DIR}"

hack/tools/generate-yaml/Dockerfile (11 changes: 8 additions & 3 deletions)

@@ -20,6 +20,7 @@ FROM ${BASE_IMAGE}
 LABEL "maintainer" "Andrew Kutz <[email protected]>"
 
 # Run things out of the /build directory.
+ENV BUILDDIR /build
 WORKDIR /build
 
 # Copy in the hack tooling.
@@ -40,7 +41,11 @@ RUN find . -type d -exec chmod 0777 \{\} \;
 ARG CAPV_MANAGER_IMAGE=gcr.io/cluster-api-provider-vsphere/ci/manager:latest
 ENV CAPV_MANAGER_IMAGE=${CAPV_MANAGER_IMAGE}
 
-# The YAML is always written to the /out directory. Mount the volumes there.
-ENV OUT_DIR /out
+# Change the working directory to /out.
+ENV WORKDIR /out
+WORKDIR /out
 
-ENTRYPOINT [ "./hack/generate-yaml.sh" ]
+# Indicate that this is being executed in a container.
+ENV DOCKER_ENABLED 1
+
+ENTRYPOINT [ "/build/hack/generate-yaml.sh" ]
