Merge pull request #12 from rekuberate-io/11-move-sleepcycles-ops-to-runners-cronjobs

11 move sleepcycles ops to runners cronjobs
akyriako authored May 28, 2024
2 parents 7f01de1 + d7c24bb commit eb4a9f7
Showing 30 changed files with 1,544 additions and 761 deletions.
16 changes: 14 additions & 2 deletions Makefile

```makefile

# Image URL to use for all building/pushing image targets
#IMG_TAG ?= $(shell git rev-parse --short HEAD)
IMG_TAG ?= 0.2.0
IMG_NAME ?= rekuberate-io-sleepcycles
DOCKER_HUB_NAME ?= $(shell docker info | sed '/Username:/!d;s/.* //')
IMG ?= $(DOCKER_HUB_NAME)/$(IMG_NAME):$(IMG_TAG)
RUNNERS_IMG_NAME ?= rekuberate-io-sleepcycles-runners
KO_DOCKER_REPO = $(DOCKER_HUB_NAME)/$(RUNNERS_IMG_NAME)
# ENVTEST_K8S_VERSION refers to the version of kubebuilder assets to be downloaded by envtest binary.
ENVTEST_K8S_VERSION = 1.24.2

# [...]

$(HELMIFY): $(LOCALBIN)
	test -s $(LOCALBIN)/helmify || GOBIN=$(LOCALBIN) go install github.com/arttor/helmify/cmd/helmify@latest

helm: manifests kustomize helmify
	$(KUSTOMIZE) build config/default | $(HELMIFY) charts/sleepcycles

KO ?= $(LOCALBIN)/ko

.PHONY: ko
ko: $(KO) ## Download ko locally if necessary.
$(KO): $(LOCALBIN)
	test -s $(LOCALBIN)/ko || GOBIN=$(LOCALBIN) go install github.com/google/ko@latest

ko-build-runner: ko
	cd runners && ko build --bare .
```
232 changes: 162 additions & 70 deletions README.md
![rekuberate-sleepcycle-banner.png](docs/images/rekuberate-sleepcycle-banner.png)

Define sleep & wake up cycles for your Kubernetes resources. Automatically schedule the shutdown of **Deployments**, **CronJobs**,
**StatefulSets** and **HorizontalPodAutoscalers** that occupy resources in your cluster, and wake them up **only** when you need them;
that way you can:

- _schedule_ resource-hungry workloads (migrations, synchronizations, replications) in hours that do not impact your daily business
- _depressurize_ your cluster
- _decrease_ your costs
- _reduce_ your power consumption
- _lower_ your carbon footprint

> [!NOTE]
> You can read more in the Medium article [rekuberate-io/sleepcycles: an automated way to reclaim your unused Kubernetes resources](https://medium.com/@akyriako/rekuberate-io-sleepcycles-an-automated-way-to-reclaim-your-unused-kubernetes-resources-852e8db313ec).

## Getting Started
You’ll need a Kubernetes cluster to run against. You can use [KIND](https://sigs.k8s.io/kind) or [K3D](https://k3d.io) to get a local cluster for testing, or run against a remote cluster.

> [!CAUTION]
> Earliest compatible Kubernetes version is **1.25**

### Samples

Under `config/samples` you will find a set of manifests that you can use to test sleepcycles on your cluster:

#### SleepCycles

* _core_v1alpha1_sleepcycle_app_x.yaml_, manifests to deploy 2 `SleepCycle` resources in namespaces `app-1` and `app-2`

```yaml
apiVersion: core.rekuberate.io/v1alpha1
kind: SleepCycle
metadata:
  name: sleepcycle-app-1
  namespace: app-1
spec:
  shutdown: "1/2 * * * *"
  shutdownTimeZone: "Europe/Athens"
  wakeup: "*/2 * * * *"
  wakeupTimeZone: "Europe/Dublin"
  enabled: true
```

> [!NOTE]
> The cron expressions in the samples are tailored so you can perform a quick demo: the `shutdown` expression schedules
> the deployment to scale down on _odd_ minutes and the `wakeup` expression schedules it to scale up on _even_ minutes.

Every `SleepCycle` has the following **mandatory** properties:

- `shutdown`: cron expression for your shutdown schedule
- `enabled`: whether this sleepcycle policy is enabled

and the following **non-mandatory** properties:

- `shutdownTimeZone`: the timezone for your shutdown schedule, defaults to `UTC`
- `wakeup`: cron expression for your wake-up schedule
- `wakeupTimeZone`: the timezone for your wake-up schedule, defaults to `UTC`
- `successfulJobsHistoryLimit`: how many _completed_ CronJob Runner Pods to retain for debugging reasons, defaults to `1`
- `failedJobsHistoryLimit`: how many _failed_ CronJob Runner Pods to retain for debugging reasons, defaults to `1`
- `runnerImage`: the image to use when spawning CronJob Runner pods, defaults to `akyriako78/rekuberate-io-sleepcycles-runners`

> [!IMPORTANT]
> DO **NOT** ADD **seconds** or **timezone** information to your cron expressions.
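
For reference, a `SleepCycle` that sets every property explicitly could look like the sketch below; the schedule values are illustrative, and the history limits and runner image simply restate the documented defaults:

```yaml
apiVersion: core.rekuberate.io/v1alpha1
kind: SleepCycle
metadata:
  name: sleepcycle-app-1
  namespace: app-1
spec:
  shutdown: "0 20 * * *"              # mandatory: shutdown schedule (no seconds, no timezone)
  shutdownTimeZone: "Europe/Athens"   # optional: defaults to UTC
  wakeup: "30 7 * * 1-5"              # optional: wake-up schedule
  wakeupTimeZone: "Europe/Dublin"     # optional: defaults to UTC
  enabled: true                       # mandatory: toggles the whole policy
  successfulJobsHistoryLimit: 1       # optional: completed runner pods to retain
  failedJobsHistoryLimit: 1           # optional: failed runner pods to retain
  runnerImage: akyriako78/rekuberate-io-sleepcycles-runners   # optional: default runner image
```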

#### Demo workloads

* _whoami-app-1_x-deployment.yaml_, manifests to deploy 2 `Deployment` resources that provision _traefik/whoami_ in namespace `app-1`
* _whoami-app-2_x-deployment.yaml_, manifests to deploy a `Deployment` that provisions _traefik/whoami_ in namespace `app-2`

`SleepCycle` is a namespace-scoped custom resource; the controller monitors all the resources in that namespace that
are marked with a `Label` whose key is `rekuberate.io/sleepcycle` and whose value is the `name` of the `SleepCycle` manifest you created:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-2
  namespace: app-2
  labels:
    app: app-2
    rekuberate.io/sleepcycle: sleepcycle-app-2
spec:
  replicas: 9
  selector:
    matchLabels:
      app: app-2
  template:
    metadata:
      name: app-2
      labels:
        app: app-2
    spec:
      containers:
        - name: app-2
          image: traefik/whoami
          imagePullPolicy: IfNotPresent
```

> [!IMPORTANT]
> Any workload in namespace `kube-system` marked with `rekuberate.io/sleepcycle` will be ignored by the controller **by design**.

## How it works

The diagram below describes how `rekuberate.io/sleepcycles` deals with scheduling a `Deployment`:

1. The `sleepcycle-controller` **watches** all the `SleepCycle` custom resources (in **all** namespaces) for changes, periodically, every 1 minute.
2. The controller, for **every** `SleepCycle` resource within the namespace `app-1`, collects all the resources that have been marked with the label `rekuberate.io/sleepcycle: sleepcycle-app-1`.
3. It provisions, for **every** workload (in this case the deployment `deployment-app1`), a `CronJob` for the shutdown schedule and, optionally, a second `CronJob` if a wake-up schedule is provided.
4. It provisions a `ServiceAccount`, a `Role` and a `RoleBinding` **per namespace**, so that the runner pods are allowed to update the specs of the target resources (a rough sketch follows below).
5. The `Runner` pods are created automatically by the `CronJob`s and are responsible for scaling the resources up or down.
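
The RBAC objects of step 4 are generated by the controller itself; the following is only a rough sketch of what such a per-namespace setup can look like, with hypothetical names and an assumed set of rules:

```yaml
# Illustrative sketch only -- names and rules are assumptions;
# the controller generates the actual ServiceAccount/Role/RoleBinding.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleepcycle-runner        # hypothetical name
  namespace: app-1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sleepcycle-runner        # hypothetical name
  namespace: app-1
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets"]
    verbs: ["get", "list", "patch", "update"]
  - apiGroups: ["batch"]
    resources: ["cronjobs"]
    verbs: ["get", "list", "patch", "update"]
  - apiGroups: ["autoscaling"]
    resources: ["horizontalpodautoscalers"]
    verbs: ["get", "list", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sleepcycle-runner        # hypothetical name
  namespace: app-1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: sleepcycle-runner
subjects:
  - kind: ServiceAccount
    name: sleepcycle-runner
    namespace: app-1
```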


![SCR-20240527-q9y.png](docs/images/SCR-20240527-qei.png)

> [!NOTE]
> The diagram depicts how `rekuberate.io/sleepcycles` scales a `Deployment`. The same steps apply to a
> `StatefulSet` and a `HorizontalPodAutoscaler`. There are two exceptions though:
> - a `HorizontalPodAutoscaler` will scale down to `1` replica and not to `0`, as a `Deployment` or a `StatefulSet` would.
> - a `CronJob` has no replicas to scale up or down; it is enabled or suspended instead.
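
Labelling a `HorizontalPodAutoscaler` works exactly like the `Deployment` example above; a minimal, hypothetical manifest (target and metric values are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-2
  namespace: app-2
  labels:
    rekuberate.io/sleepcycle: sleepcycle-app-2   # ties the HPA to the SleepCycle
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-2
  minReplicas: 1
  maxReplicas: 9
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```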

## Deploy

### From sources

1. Build and push your image to the location specified by `IMG` in `Makefile`:

```sh
IMG_TAG ?= $(shell git rev-parse --short HEAD)
IMG_NAME ?= rekuberate-io-sleepcycles
DOCKER_HUB_NAME ?= $(shell docker info | sed '/Username:/!d;s/.* //')
IMG ?= $(DOCKER_HUB_NAME)/$(IMG_NAME):$(IMG_TAG)
RUNNERS_IMG_NAME ?= rekuberate-io-sleepcycles-runners
KO_DOCKER_REPO ?= $(DOCKER_HUB_NAME)/$(RUNNERS_IMG_NAME)
```

```sh
make docker-build docker-push
```

2. Deploy the controller to the cluster using the image defined in `IMG`:

```sh
make deploy
```

and then deploy the samples:

```sh
kubectl create namespace app-1
kubectl create namespace app-2
kubectl apply -f config/samples
```

#### Uninstall

```sh
make undeploy
```

### Using Helm (from sources)

If you are in a development environment, you can quickly test & deploy the controller to the cluster using a **Helm chart** directly from `config/helm`:

```sh
helm install rekuberate-io-sleepcycles config/helm/ -n <namespace> --create-namespace
```

and then deploy the samples:

```sh
kubectl create namespace app-1
kubectl create namespace app-2
kubectl apply -f config/samples
```

#### Uninstall

```shell
helm uninstall rekuberate-io-sleepcycles -n <namespace>
```

### Using Helm (from repo)

If, on the other hand, you are deploying to a production environment, it is **highly recommended** to deploy the
controller to the cluster using the **Helm chart** from its repo:

```sh
helm repo add sleepcycles https://rekuberate-io.github.io/sleepcycles/
helm repo update
helm upgrade --install sleepcycles sleepcycles/sleepcycles -n rekuberate-system --create-namespace
```

and then deploy the samples:

```sh
kubectl create namespace app-1
kubectl create namespace app-2
kubectl apply -f config/samples
```
#### Uninstall

```shell
helm uninstall rekuberate-io-sleepcycles -n <namespace>
```

## Develop

This project aims to follow the Kubernetes [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/). It uses [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/),
which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.

### Controller

#### Modifying the API definitions
If you are editing the API definitions, generate the manifests such as CRs or CRDs using:

```sh
make manifests
```

and install the CRDs into the cluster with:

```sh
make install
```

> [!TIP]
> You can debug the controller in the IDE of your choice by hooking into `main.go`, **or** you can start
> the controller _without_ debugging with:

```sh
make run
```

> [!TIP]
> Run `make --help` for more information on all potential `make` targets.
> More information can be found via the [Kubebuilder Documentation](https://book.kubebuilder.io/introduction.html)

#### Build

You always need to build a new container image and push it to your repository:

```sh
make docker-build docker-push
```

> [!IMPORTANT]
> In this case you will need to adjust your Helm chart values to use your repository and container image.
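
For example, a helmify-generated chart usually exposes the controller image through values roughly like the following; the key names here are an assumption, so verify them against `charts/sleepcycles/values.yaml`:

```yaml
# Hypothetical values override -- the real keys depend on the generated chart,
# check charts/sleepcycles/values.yaml before using.
controllerManager:
  manager:
    image:
      repository: <your-docker-hub-user>/rekuberate-io-sleepcycles
      tag: 0.2.0
```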

### Runner

#### Build

```sh
make ko-build-runner
```

> [!IMPORTANT]
> In this case you will need to adjust the `runnerImage` of your `SleepCycle` manifest to use your own Runner image.
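
For instance, assuming you pushed the runner image to your own registry (the image reference below is a placeholder), the `SleepCycle` would point to it through `runnerImage`:

```yaml
apiVersion: core.rekuberate.io/v1alpha1
kind: SleepCycle
metadata:
  name: sleepcycle-app-1
  namespace: app-1
spec:
  shutdown: "0 20 * * *"
  enabled: true
  runnerImage: <your-docker-hub-user>/rekuberate-io-sleepcycles-runners:latest   # placeholder reference
```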

### Uninstall CRDs
To delete the CRDs from the cluster:

```sh
make uninstall
```