// TODO(user): Add simple overview of use/purpose
// TODO(user): An in-depth paragraph about your project and overview of use
You’ll need a Kubernetes cluster to run against. You can use KIND to get a local cluster for testing, or run against a remote cluster.
Note: Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
- Install Instances of Custom Resources:
kubectl apply -f config/samples/
kubectl apply -f config/samples/WhatYouWantTo
- Build the image:
make docker-build IMG=controller:latest
- Save the image to a file so it can be sent to Minikube:
docker save -o ./savedimage controller:latest
then, on Minikube:
docker load -i savedimage
- Deploy the controller to the cluster with the image specified by `IMG`:
make deploy IMG=controller:latest
To delete the CRDs from the cluster:
make uninstall
Undeploy the controller from the cluster:
make undeploy IMG=controller:latest
// TODO(user): Add detailed information on how you would like others to contribute to this project
This project aims to follow the Kubernetes Operator pattern.
It uses Controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
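As a conceptual illustration only (the real controller is written in Go against controller-runtime, not shown here), a reconcile function can be sketched in Python as a loop that converges the observed state toward the desired state; all names below are illustrative:

```python
# Conceptual sketch of the Operator pattern's reconcile loop.
# `desired` and `cluster` are plain dicts standing in for the CR spec
# and the actual cluster state; the real controller talks to the API server.

def reconcile(desired: dict, cluster: dict) -> bool:
    """Bring `cluster` in line with `desired`; return True when converged."""
    changed = False
    for key, value in desired.items():
        if cluster.get(key) != value:
            cluster[key] = value  # "act": create/update the resource
            changed = True
    return not changed

# The control loop re-runs reconcile until no changes are needed.
desired_state = {"replicas": 3, "image": "controller:latest"}
cluster_state = {}
while not reconcile(desired_state, cluster_state):
    pass
print(cluster_state)
```

The key property is idempotence: running `reconcile` again on an already-converged state makes no changes.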
- Install the CRDs into the cluster:
make install
- Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):
make run
NOTE: You can also run this in one step by running: make install run
If you are editing the API definitions, generate the manifests such as CRs or CRDs using:
make manifests
NOTE: Run `make --help` for more information on all potential `make` targets.
More information can be found via the Kubebuilder Documentation
Copyright 2023.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
This part explains how to reproduce this project starting from `kubebuilder init`.
I. Init your Project
username:~/closedloop$ kubebuilder init --domain closedloop.io --repo closedloop //Init folder
username:~/closedloop$ kubebuilder create api --group closedlooppooc --version v1 --kind ClosedLoop //create API and Controller
username:~/closedloop$ kubebuilder create api --group closedlooppooc --version v1 --kind Monitoring //create API and Controller
username:~/closedloop$ kubebuilder create api --group closedlooppooc --version v1 --kind Decision //create API and Controller
username:~/closedloop$ kubebuilder create api --group closedlooppooc --version v1 --kind Execution //create API and Controller
username:~/closedloop$ kubebuilder create api --group closedlooppooc --version v1 --kind Monitoringv2 //create API and Controller
II. Complete your API Spec
Go to the folder api/yourVersion and complete all the _types.go files to describe your CR Spec and Status.
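Once the _types.go files define the Spec, custom resource manifests follow from them. A hypothetical sample (the `data`/`time` field names are taken from the Monitoring description later in this README; adjust to your actual types):

```yaml
# Hypothetical sample CR; the group/version come from the kubebuilder
# flags used above, the spec fields must match your _types.go.
apiVersion: closedlooppooc.closedloop.io/v1
kind: Monitoring
metadata:
  name: monitoring-sample
spec:
  data: "cpu=42,ram=17,disk=63"
  time: "2023-01-01T00:00:00Z"
```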
III. Generate your CRD and configuration file based on what you did on the _types.go files
username:~/closedloop$ make generate && make manifests && make install
IV. Complete the logic of your controller
Complete code of the controller files in the "/controllers" folder
V. Run your Project to test it locally (this is not how it runs in production; refer to VII)
username:~/closedloop$ make run
VI. Create your CR Resources
Complete/fill in the files in /config/samples as an example and execute the command:
username:~/closedloop$ kubectl apply -f config/samples/closedlooppooc_v1_closedloop.yaml //(Example)
VII. Deploy your Operator like in production
Execute the commands:
username:~/closedloop$ make docker-build IMG=controller:latest && docker save -o ./savedimage controller:latest
For Minikube: transfer the savedimage file to your Minikube VM and load it there, for example:
Run from Minikube (ssh) to retrieve the image from the host where you built it:
$ scp Username@IP:/Path/To/savedimage ./ // copy the file locally
$ docker load -i savedimage // load the image into Minikube
Run on the Kubebuilder host to deploy your operator (controller image, RBAC files, etc.):
username:~/closedloop$ make deploy IMG=controller:latest
VIII. Load the Proxy Pod
Run From Minikube (ssh)
$ scp username@IP:/Path/To/closedloop/RESTPod-Listen/* ./ && docker build -t restpod:latest . // this retrieves and builds the image needed for the proxy pod
IX. Deploy the 2 Managed Systems
- Exporter :
Run From Minikube (ssh)
$ scp username@IP:/Path/To/closedloop/exporter/* ./ && docker build -t exporter . // this retrieves and builds the image needed for the exporter
Run on the Kubebuilder Host
username:~/closedloop$ kubectl create -f ./exporter/exporter.yaml //This will create the exporter
- PodToPushData to Proxy-Pod :
Run From Minikube (ssh)
$ scp username@IP:/Path/To/closedloop/RESTSys/* ./ && docker build -t data-send:latest . // this retrieves and builds the image needed for the PodToPushData to Proxy-Pod
Run on the Kubebuilder Host
username:~/closedloop$ kubectl create -f ./RESTSys/data-send-deployment.yaml //This will create the PodToPushData
Note: This description also contains details of manual configuration not mentioned before. They are needed to tune data sender as we do not use DNS service for local name resolution.
make undeploy IMG=controller:latest
make uninstall
We assume all code has already been provided.
make generate && make manifests && make install
make docker-build IMG=controller:latest && docker save -o ./savedimage controller:latest
2. ssh to minikube (you ssh to the master node): copy and load the operator image; check images
scp [email protected]:/home/minikube/demos/closedloop-ad/closedloop/savedimage ./
docker load -i savedimage
docker image list
~/.../closedloop$ make deploy IMG=controller:latest
Note: PodToPushData and Proxy-Pod together correspond to (represent) one of the two managed systems while exporter represents the second managed system.
Note: PodToPushData and Proxy-Pod work together to feed respective instance of a closed loop with monitoring data (by their design and the instantiation process, both of them correspond to one common instance of closed loop). PodToPushData generates random numbers for CPU, RAM and Disk usage and sends them to the Proxy-Pod. Proxy-Pod runs Python Simple HTTP Server that receives (PUT) the requests form PodToPushData Pod and resends them to the closed loop by accessing and modifying the value of parameter Data (and also Time) in the spec section of the Monitoring Custom Resource. This custom resource represents a given instance of the closed loop. Changing the value of Data/Time parameter pair triggers the reconciliation loop of the Monitoring operator thereby propelling the whole closed loop to run.
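The PUT-receiving side of the Proxy-Pod can be sketched in plain Python. In this sketch, patching the Monitoring CR is replaced by writing into a local dict, and the payload shape is an assumption; the real pod would call the Kubernetes API instead:

```python
# Minimal sketch of the Proxy-Pod idea: an HTTP server accepts PUT
# requests carrying monitoring data. `received` stands in for the
# Monitoring CR spec (Data/Time); the real code patches the CR.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}

class ProxyHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        received["data"] = payload["data"]  # real code: patch CR spec Data
        received["time"] = payload["time"]  # real code: patch CR spec Time
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ProxyHandler)  # port 0: pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate PodToPushData sending one measurement.
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/",
    data=json.dumps({"data": "cpu=42", "time": "t0"}).encode(),
    method="PUT",
)
urllib.request.urlopen(req)
server.shutdown()
print(received)
```

Each accepted PUT would, in the real pod, trigger the Monitoring operator's reconciliation by changing the Data/Time pair in the CR spec.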
scp [email protected]:/home/minikube/demos/closedloop-ad/closedloop/RESTPod-Listen/* ./ && docker build -t restpod:latest .
Note: exporter is a Deployment running an nginx web server together with a Python script that cyclically generates random values for the usage of CPU, RAM and Disk and writes them into the index.html of the server. The server can then be queried (GET) for the contents of the index page. However, currently we do not use the exporter in our demos.
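The exporter's write step can be sketched as follows; the file location and line format are assumptions, the real script may differ:

```python
# Sketch of the exporter idea: write random CPU/RAM/Disk usage values
# into index.html so a web server can serve them on GET.
import os
import random
import tempfile

def write_metrics(path):
    cpu, ram, disk = (random.randint(0, 100) for _ in range(3))
    with open(path, "w") as f:
        f.write(f"cpu={cpu} ram={ram} disk={disk}\n")

# The real exporter would write into the nginx document root and loop
# with a sleep; here we write once into a temp file for illustration.
path = os.path.join(tempfile.gettempdir(), "index.html")
write_metrics(path)
print(open(path).read())
```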
scp [email protected]:/home/minikube/demos/closedloop-ad/closedloop/exporter/* ./ && docker build -t exporter .
kubectl create -f ./exporter/exporter.yaml
3. prepare the image for the data-sender Pod (i.e., PodToPushData that sends data to the Proxy-Pod) and create the data-sender Pod (PodToPushData)
scp [email protected]:/home/minikube/demos/closedloop-ad/closedloop/RESTSys/* ./ && docker build -t data-send:latest .
Below, we create a CRB (ClusterRoleBinding) to allow the ProxyPod to access (i.e., edit) the Monitoring CR (whose somewhat confusing name is closedloop-v2-monitoring-xyz...).
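Such a CRB could look like the following sketch; the role name, service account, namespace and resource plural are all assumptions to adapt to your project:

```yaml
# Hypothetical sketch; adapt names, namespace and resource plural.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-editor
rules:
- apiGroups: ["closedlooppooc.closedloop.io"]
  resources: ["monitorings"]
  verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: proxypod-monitoring-editor
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: monitoring-editor
  apiGroup: rbac.authorization.k8s.io
```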
~/demos/closedloop/RESTPod-Listen$ kubectl apply -f .
kubectl apply -f config/samples/closedlooppooc_v1_closedloop3.yaml
kubectl apply -f config/samples/closedlooppooc_d_v1_closedloop3.yaml
kubectl logs -f -n closedloop-system closedloop-controller-manager-7d9bf7cffd-b4g7n
kubectl create -f ./RESTSys/data-send-deployment.yaml
To be done each time for a newly run data-sender instance !!!
look for POST message and notice the ProxyPod service name (for DNS resolution) in the form: closedloop-v2-monitoring-deployment-service.com:80
cat data.go
ip a
take note of the eth0 IP address above - this is the k8s node address to be used in the NodePort service type for the ProxyPod
(alternatively to the above, you can simply run "$ minikube ip" on the minikube/kubebuilder host)
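For reference, a NodePort Service for the ProxyPod might look like the following sketch; the metadata name is inferred from the FQDN mentioned above (minus the .com suffix), while the selector, port numbers and nodePort value are assumptions to adapt to your deployment:

```yaml
# Hypothetical sketch of the ProxyPod NodePort Service.
apiVersion: v1
kind: Service
metadata:
  name: closedloop-v2-monitoring-deployment-service
spec:
  type: NodePort
  selector:
    app: proxypod
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
```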
on kubebuilder, log in to the data-sender Pod to set the NodePort IP address for the ProxyPod service (remember to adjust the name of your data-send-deployment Pod)
kubectl get pods -A ==> check the name of data-sender Pod
kubectl exec --stdin --tty data-send-deployment-6c9f7dd689-qstdr -- /bin/bash
in the data-sender Pod, insert a DNS entry in the hosts file (adjust the address to your environment)
nano /etc/hosts
and add a line as follows:
192.168.49.2 closedloop-v2-monitoring-deployment-service.com
Note: closedloop-v2-monitoring-deployment-service.com is the FQDN of the ProxyPod service as hardcoded in the data-sender Pod program. If one sets up a local DNS server able to resolve that FQDN onto the minikube node IP address, then the above change is not needed. Configuring the receiver of the monitoring data is always specific and can be troublesome. Future work could focus on integrating with Prometheus, etc. But for now we are fine with workarounds such as the one above.
For visibility, it is recommended to open three k9s terminals and in each of them observe (Ctrl-D) the spec section of the custom resources Monitoring2, Decision and Execution, respectively. One will then be able to easily trace the changes of the spec properties involved in the message flow. Leverage the Kubernetes ecosystem tools!
~/demos/closedloop/$ kubectl apply -f ./RESTPod-Listen/
kubectl exec --stdin --tty data-send-deployment-6c9f7dd689-qstdr -- /bin/bash
go run projects/data.go
go run projects/data.go [cpu] [memory] [disk]
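The sender described above is a Go program (data.go), but its intent can be rendered in Python for illustration; the payload shape and the service URL are assumptions, and `send` is never called here since it needs the live ProxyPod:

```python
# Sketch of the data-sender idea: take CPU/RAM/Disk values from the
# command line (or generate random ones) and PUT them to the ProxyPod.
import json
import random
import sys
import urllib.request

def build_payload(args):
    if len(args) == 3:
        cpu, ram, disk = (int(a) for a in args)
    else:
        cpu, ram, disk = (random.randint(0, 100) for _ in range(3))
    return {"data": f"cpu={cpu},ram={ram},disk={disk}"}

def send(payload, url="http://closedloop-v2-monitoring-deployment-service.com:80"):
    # Requires the /etc/hosts (or DNS) entry described above to resolve.
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(), method="PUT"
    )
    return urllib.request.urlopen(req)

if __name__ == "__main__":
    print(build_payload(sys.argv[1:]))
```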