```
touch .env
make start
```

```
make docker-update
```
This command will ask for login, bump the version, build the new docker image, and push it to the private repo.
```
# If `Error: could not find tiller` run:
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete service/tiller-deploy
```
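After deleting a broken tiller, Helm v2 can reinstall it. A sketch, assuming the usual `tiller` service account with cluster-admin (the account and binding names are conventions, not something this repo defines):

```
# Recreate the service account and role binding tiller runs under
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole cluster-admin \
  --serviceaccount=kube-system:tiller
# Reinstall tiller into the cluster
helm init --service-account tiller
```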
- v2 with redis
- docker-compose vs Kompose
- Create docker image
- Add docker image to registry
- Init Helm in minikube
- Create Chart
- Check CRUD locally
- Create helm release
- Read about Makefile
- Fix Makefile
- Replace Minikube with docker-for-desktop
- Replace cloud.canister.io with registry.gitlab.com
- Command for creating the registry pull secret:

```
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
  --type=kubernetes.io/dockerconfigjson
```
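The secret only takes effect once the chart references it. A minimal sketch of the pod-spec side (the image path is illustrative, and it assumes the deployment template exposes `imagePullSecrets`):

```
# In the deployment's pod spec
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: gotodo
      image: registry.gitlab.com/<user>/gotodo:latest
```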
- Find a way to get VERSION into helm Values, because the `:latest` tag doesn't work
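One common way to do this (an assumed approach — the chart path and the `image.tag` value name are illustrative, not taken from this repo) is to keep the tag out of `values.yaml` and inject it at install time:

```
# Pass the build version to the chart instead of relying on :latest
helm upgrade --install gotodo ./charts/gotodo --set image.tag="$VERSION"
```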
- Change to `docker login --password-stdin`
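The non-interactive form would look something like this (the env var names are placeholders):

```
# Read the registry password from an env var instead of the command line
echo "$REGISTRY_PASSWORD" | docker login registry.gitlab.com -u "$REGISTRY_USER" --password-stdin
```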
- Add project to CircleCI
- Read about CircleCI and Kubernetes
- Do we have to use a cloud provider to use CircleCI with Kubernetes?
- Create gcp cluster
- Make sure the db (Redis) tests are passing
- Remove everything from `charts/values`; secret values should be in `.env` and the rest in the Makefile
- Change selectors
- Change docker image registry. Binary will be copied by CircleCI
- Connect to `gcp-cluster` by editing the `~/.kube/config` file
- Setup k8s loadbalancer in helm chart and run `curl localhost:3001/todos`
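For the loadbalancer step, the chart's service would look roughly like this — a sketch only; the name and selector are assumptions, and the port is inferred from the `curl localhost:3001/todos` check:

```
apiVersion: v1
kind: Service
metadata:
  name: gotodo
spec:
  type: LoadBalancer
  ports:
    - port: 3001
      targetPort: 3001
  selector:
    app: gotodo
```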
- Run `kubectl exec -it gotodo-b6487675f-xtg2q /bin/sh` in the pod, with `command: ["/bin/sh"]`, `tty: true` and `stdin: true`
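Those fields live on the container spec in the deployment template; a minimal sketch (the image name is illustrative):

```
containers:
  - name: gotodo
    image: gotodo:latest
    command: ["/bin/sh"]
    tty: true
    stdin: true
```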
- Is `Chart.yaml` required?
- Add Jenkins and/or CircleCI
- Fix CHART_NAME
- Deploy on GCP
- Run all tests (including the one with Redis) on CircleCI for a specific branch
- Replace NodePort by LoadBalancer
- Deploy gotodo manually with helm
- Add tests
- Read more about Ingress
- Replace LoadBalancer with Ingress
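Replacing the LoadBalancer with an Ingress would look roughly like this. The `nginx-ingress` name matches the cleanup script below; the host, service name, and port are assumptions, and the apiVersion reflects clusters of the tiller era:

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - host: gotodo.example.com   # host is an assumption
      http:
        paths:
          - path: /
            backend:
              serviceName: gotodo
              servicePort: 3001
```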
- Debug go code inside docker container
- Write some doc about it
- Find equivalent to package.json/requirements.txt for go
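Go's equivalent of `package.json`/`requirements.txt` is Go modules: a `go.mod` file created with `go mod init`, with dependency hashes pinned in `go.sum`. The module path below is illustrative:

```
go mod init github.com/<user>/gotodo
go mod tidy   # record dependencies in go.mod / go.sum
```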
- Check auth
- Oauth with google
- session in Redis
- Update memory profile commands
- Add simple gotodo frontend
- Organise backend, frontend code and devops code
- Update frontend with tailwindcss
```
go test -coverprofile temp/cover.out ./... && go tool cover -html=temp/cover.out
go test -memprofilerate 1 -memprofile temp/mem.out ./model
# Get model from ...
go tool pprof -web temp/model.a temp/mem.out
```
```
#!/bin/bash
set -eo pipefail
ingress="$(kubectl get pods --output=jsonpath='{.items[*].metadata.name}' |
  xargs -n1 | grep "ingress-nginx" | head -n1)"
# cache all hosts that pass through the ingress
hosts="$(kubectl get ingress nginx-ingress \
  --output=jsonpath='{.spec.rules[*].host}' | xargs -n1)"
# cache pods
pods="$(kubectl get pods)"
# cache ingress logs of the last 90min
logs="$(kubectl logs --since=90m "$ingress")"
# iterate over all deployments
kubectl get deployment --output=jsonpath='{.items[*].metadata.name}' |
  xargs -n1 |
  while read -r svc; do
    # skip svcs that don't have pods running
    echo "$pods" | grep -q "$svc" || {
      echo "$svc: no pods running"
      continue
    }
    # skip svcs that don't pass through the ingress
    echo "$hosts" | grep -q "$svc" || {
      echo "$svc: not passing through ingress"
      continue
    }
    # skip svcs with pods running less than 1h
    echo "$pods" | grep "$svc" | awk '{print $5}' | grep -q h || {
      echo "$svc: pod running less than 1h"
      continue
    }
    # check if any traffic to that svc was made through the ingress in the
    # last hour, scale it down in case of none
    echo "$logs" | grep -q "default-$svc" || {
      echo "$svc: scaling down"
      kubectl scale deployments "$svc" --replicas 0 --record || true
    }
  done
```
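The age filter in that script relies on kubectl's AGE column containing an `h` once a pod has run for at least an hour. That logic can be sanity-checked against canned `kubectl get pods` output (the pod names echo the ones used elsewhere in these notes; the ages are invented):

```shell
# Canned `kubectl get pods` output: one pod older than 1h, one younger
pods="NAME                      READY   STATUS    RESTARTS   AGE
gotodo-b6487675f-xtg2q    1/1     Running   0          3h
gotodo-b6487675f-abcde    1/1     Running   0          42m"

# Same check as the script: does the 5th column (AGE) contain an 'h'?
age_at_least_1h() {
  echo "$pods" | grep "$1" | awk '{print $5}' | grep -q h
}

age_at_least_1h "xtg2q" && echo "xtg2q: at least 1h old"
age_at_least_1h "abcde" || echo "abcde: younger than 1h"
```

Note a limitation of this check: depending on the kubectl version, a pod running for days may show an AGE like `2d` with no `h` in it, so very old pods can be misreported as "running less than 1h".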