This repository hosts an implementation of an AWS provider for the OpenShift machine-api.
This provider runs as a machine-controller deployed by the machine-api-operator.
The Dockerfiles use `as builder` in the `FROM` instruction, which is not currently supported by the RH's docker fork (see kubernetes-sigs/kubebuilder#268). One needs to run the `imagebuilder` command instead of `docker build`.
Note: this info is RH only; it needs to be backported every time the README.md is synced with the upstream one.
## Install kvm
Depending on your virtualization manager you can choose a different driver. In order to install kvm, you can run (as described in the drivers documentation):
```sh
$ sudo yum install libvirt-daemon-kvm qemu-kvm libvirt-daemon-config-network
$ systemctl start libvirtd
$ sudo usermod -a -G libvirt $(whoami)
$ newgrp libvirt
```
To install the kvm2 driver:

```sh
curl -Lo docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
  && chmod +x docker-machine-driver-kvm2 \
  && sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \
  && rm docker-machine-driver-kvm2
```
## Deploying the cluster
To install minikube `v1.1.0`, you can run:

```sh
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.1.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```

To deploy the cluster:

```sh
$ minikube start --vm-driver kvm2 --kubernetes-version v1.13.1 --v 5
$ eval $(minikube docker-env)
```
## Deploying machine API controllers
For development purposes the aws machine controller itself will run outside of the machine API stack. Otherwise, docker images need to be built, pushed into a docker registry and deployed within the stack.
To deploy the stack:

```sh
kustomize build config | kubectl apply -f -
```
## Deploy secret with AWS credentials
The AWS actuator assumes the existence of a secret (referenced in the machine object) with base64 encoded credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials-secret
  namespace: default
type: Opaque
data:
  aws_access_key_id: FILLIN
  aws_secret_access_key: FILLIN
```
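The `FILLIN` values must be base64 encoded. A minimal sketch of producing them in a shell (the key strings below are made-up placeholders, not real credentials):

```shell
# Base64-encode placeholder credentials for the secret's data fields
# (replace the example strings with your real AWS keys).
printf '%s' 'AKIAEXAMPLE' | base64 -w0              # -> value for aws_access_key_id
printf '%s' 'exampleSecretAccessKey' | base64 -w0   # -> value for aws_secret_access_key
```

Note that `-w0` disables line wrapping in GNU coreutils `base64`; on other platforms the flag may differ.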
You can use the `examples/render-aws-secrets.sh` script to generate the secret:

```sh
./examples/render-aws-secrets.sh examples/addons.yaml | kubectl apply -f -
```
## Provision AWS resources

The actuator expects the existence of certain resources in AWS, such as:
- vpc
- subnets
- security groups
- etc.
To create them, you can run:

```sh
$ ENVIRONMENT_ID=aws-actuator-k8s ./hack/aws-provision.sh install
```

To delete the resources, you can run:

```sh
$ ENVIRONMENT_ID=aws-actuator-k8s ./hack/aws-provision.sh destroy
```
All machine manifests expect `ENVIRONMENT_ID` to be set to `aws-actuator-k8s`.
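For example, the variable can be exported once per shell session before the manifests are applied (a trivial sketch):

```shell
# Export the environment ID expected by the machine manifests
export ENVIRONMENT_ID=aws-actuator-k8s
echo "$ENVIRONMENT_ID"   # prints aws-actuator-k8s
```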
## Tear down machine-controller
The deployed machine API plane (the `machine-api-controllers` deployment) runs, among other controllers, the `machine-controller`. In order to run a locally built one, simply edit the `machine-api-controllers` deployment and remove the `machine-controller` container from it.
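If you prefer patching over interactive editing, the same removal can be sketched as a strategic merge patch (a hypothetical fragment; apply it with `kubectl patch` against the deployment in your cluster's machine API namespace):

```yaml
# Strategic-merge patch sketch: the $patch: delete directive removes the
# containers list entry whose merge key (name) is machine-controller.
spec:
  template:
    spec:
      containers:
      - name: machine-controller
        $patch: delete
```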
## Build and run aws actuator outside of the cluster
```sh
$ go build -o bin/manager sigs.k8s.io/cluster-api-provider-aws/cmd/manager
$ ./bin/manager --kubeconfig ~/.kube/config --logtostderr -v 5 -alsologtostderr
```
## Deploy k8s apiserver through machine manifest
To deploy the user data secret with kubernetes apiserver initialization (under `config/master-user-data-secret.yaml`):

```sh
$ kubectl apply -f config/master-user-data-secret.yaml
```

To deploy the kubernetes master machine (under `config/master-machine.yaml`):

```sh
$ kubectl apply -f config/master-machine.yaml
```
## Pull kubeconfig from created master machine
The master public IP can be accessed from the AWS Portal. Once done, you can collect the kube config by running:

```sh
$ ssh -i SSHPMKEY ec2-user@PUBLICIP 'sudo cat /root/.kube/config' > kubeconfig
$ kubectl --kubeconfig=kubeconfig config set-cluster kubernetes --server=https://PUBLICIP:8443
```
Once done, you can access the cluster via `kubectl`, e.g.:

```sh
$ kubectl --kubeconfig=kubeconfig get nodes
```
## Generate bootstrap user data
To generate the bootstrap script for the machine api plane, simply run:

```sh
$ ./config/generate-bootstrap.sh
```
The script requires the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables to be set. It generates the `config/bootstrap.yaml` secret for the master machine under `config/master-machine.yaml`.

The generated bootstrap secret contains user data responsible for:
- deployment of kube-apiserver
- deployment of machine API plane with aws machine controllers
- generating worker machine user data script secret deploying a node
- deployment of worker machineset
## Deploy machine API plane through machine manifest
First, deploy the generated bootstrap secret:

```sh
$ kubectl apply -f config/bootstrap.yaml
```

Then, deploy the master machine (under `config/master-machine.yaml`):

```sh
$ kubectl apply -f config/master-machine.yaml
```
## Pull kubeconfig from created master machine
The master public IP can be accessed from the AWS Portal. Once done, you can collect the kube config by running:

```sh
$ ssh -i SSHPMKEY ec2-user@PUBLICIP 'sudo cat /root/.kube/config' > kubeconfig
$ kubectl --kubeconfig=kubeconfig config set-cluster kubernetes --server=https://PUBLICIP:8443
```
Once done, you can access the cluster via `kubectl`, e.g.:

```sh
$ kubectl --kubeconfig=kubeconfig get nodes
```
Other branches of this repository may choose to track the upstream Kubernetes Cluster-API AWS provider.
In the future, we may align the master branch with the upstream project as it stabilizes within the community.