This repo groups the relevant information about the WRI-API Ecosystem in terms of Infrastructure, Deployment and Provisioning
For information on architecture, see this file
In this section we are going to see how to deploy a production-ready Kubernetes cluster on "Bare Metal".
To create a Kubernetes Cluster on VM instances you need to do the following:
- Be sure that you already have at least 3 VM instances (from now on, Nodes).
- Enable a private network between them. If you are using Digital Ocean Droplets, you can do it when creating the instances. This should be enabled by default when using AWS EC2 or Google Compute Engine instances.
- Select the Kube Master and connect to it via SSH:
ssh root@<ip>
- Install kubectl on the master:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
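The command above only downloads the kubectl binary into the current directory; to finish the installation it also has to be made executable and moved onto the PATH (the standard kubectl install steps):
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl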
- Install kubelet and kubeadm
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
# Install docker if you don't have it already.
apt-get install -y docker-engine
apt-get install -y kubelet kubeadm kubernetes-cni
Once kubectl, kubelet and kubeadm are properly installed on the master, connect to the other nodes and do the same. (We will see how to create a custom image to make this step much easier.)
- Initialize the master:
kubeadm init
- Set the KUBECONFIG variable to start using your cluster:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
To set this variable permanently, modify the user's profile file:
sudo vim ~/.profile
Add the following content to this file:
KUBECONFIG="/root/admin.conf"; export KUBECONFIG
- Install a pod network add-on (Calico). For kubeadm 1.6 with Kubernetes 1.6.x:
kubectl apply -f http://docs.projectcalico.org/v2.2/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
- Connect to each node via SSH and run the following to join it to the cluster:
kubeadm join --token <token> <master-ip>:<master-port>
You can see the token by running kubeadm token list while logged in on the master.
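If the original token has expired or was lost, a new one can usually be generated on the master with:
kubeadm token create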
You can now check that everything is working:
kubectl cluster-info
kubectl get nodes
The microservices will be registered through the API Gateway, so we only need to expose the API Gateway Deployment. To deploy the Gateway in the cluster, follow these steps:
- Create the Deployment.
kubectl apply -f control-tower-deployment.yaml
- Expose it with a NodePort type Service (with a static nodePort value).
kubectl apply -f control-tower-service-staging.yaml
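For reference, a minimal sketch of what control-tower-service-staging.yaml could contain is shown below. The Service name, the selector label and the port 9000 are assumptions (9000 matches the readinessProbe further down); only the static nodePort 31000 is confirmed by the Nginx configuration later in this document.
apiVersion: v1
kind: Service
metadata:
  name: control-tower
spec:
  type: NodePort
  selector:
    name: control-tower        # assumed label on the gateway Pods
  ports:
  - port: 9000                 # assumed service port
    targetPort: 9000           # assumed container port
    nodePort: 31000            # static NodePort used by the Nginx upstream below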
Important!
It is worth understanding what we've just done. Our microservices will be registered in the Gateway and won't be exposed to the internet. In this case the Gateway Deployment runs 3 replicas of the Pod, so when we create the Service that exposes this Deployment we are proxying the IP:PORT of each Pod behind a common IP on our internal network.
If we are inside the network, the microservices are available in different ways:
- Pods endpoints: Internal-POD-IP:Container-Port
- Internal Kube Network Service IP: Service-IP:Service-Port
- Private Node IP (because of NodePort Service): Private-Node-IP:Node-Port
In this case, because the LoadBalancer Service type is not available on "Bare Metal", we need to proxy the external static public IP to the private Node IPs where the gateway Service is exposed.
To do that, we just need to install Nginx on the Kube Master and add a basic configuration. To make this work, the Service was given a static NodePort (31000 in this case) for Nginx to point at:
upstream control-tower {
    least_conn;
    server <privateIpNodeOne>:31000 max_fails=0 fail_timeout=0;
    server <privateIpNodeTwo>:31000 max_fails=0 fail_timeout=0;
    server <privateIpNodeThree>:31000 max_fails=0 fail_timeout=0;
}

server {
    server_name <externalIP>;
    listen 80;

    location / {
        proxy_pass http://control-tower;
    }
}
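Assuming Nginx was installed from the distribution packages, this block can be saved as, for example, /etc/nginx/conf.d/control-tower.conf (the file name is arbitrary) and activated with:
nginx -t && systemctl reload nginx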
An alternative worth evaluating: https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
In progress...
- Create the cluster on Google Container Engine:
gcloud container clusters create <clusterName>
- Create the Deployment.
kubectl apply -f control-tower-deployment.yaml
- Expose it with a NodePort type Service (mapping to 80).
kubectl apply -f control-tower-service-production.yaml
- Reserve a global static IP address:
gcloud compute addresses create <ipName> --global
- Create the Ingress resource that uses it:
kubectl apply -f gateway-ingress.yaml
Now the Pods expose their endpoints on their own containerPorts, the Service maps those to its own service IP and port, and the Ingress resource routes external traffic through the static external IP to the gateway Service. In this case we do not need to worry about balancing between node IPs.
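As an illustration, gateway-ingress.yaml could look roughly like the sketch below. The resource and Service names are assumptions; the kubernetes.io/ingress.global-static-ip-name annotation is the standard GKE way of attaching the reserved static IP created above.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: <ipName>
spec:
  backend:
    serviceName: control-tower   # assumed name of the gateway Service
    servicePort: 80              # the Service maps to port 80 (see above)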
Creating the readinessProbe:
readinessProbe:
  # an http probe
  httpGet:
    path: /healthz
    port: 9000
    scheme: HTTP
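The probe lives inside the container spec of the gateway Deployment. A minimal sketch of that fragment (the container name and image are placeholders) could look like:
containers:
- name: control-tower
  image: <registry>/control-tower:<tag>
  ports:
  - containerPort: 9000
  readinessProbe:
    httpGet:
      path: /healthz
      port: 9000
      scheme: HTTP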
Install the Helm client:
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
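At the time of writing Helm 2 is current, so after installing the client its server-side component (Tiller) still has to be deployed into the cluster, typically with:
$ helm init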
https://docs.google.com/document/d/1Ks0l-n-6korqVjLMMMV7NZrlGFPG-q_t2IEYrS5ZWD8/edit
https://cloud.google.com/container-builder/docs/
Check this out -> https://github.com/Vizzuality/python-skeleton-grpc
- Get Istio:
curl -L https://git.io/getIstio | sh -
- Add the istioctl client to your PATH:
export PATH=$PWD/bin:$PATH
- Run the following command to determine whether your cluster has RBAC enabled:
kubectl api-versions | grep rbac
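If RBAC is enabled, the command should print entries under the rbac.authorization.k8s.io group (for example rbac.authorization.k8s.io/v1beta1, depending on the cluster version).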
- It is highly recommended to create a clusterrolebinding:
kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=<accountEmail>
- Install the Istio RBAC configuration:
kubectl apply -f install/kubernetes/istio-rbac-alpha.yaml
- Install Istio:
kubectl apply -f install/kubernetes/istio.yaml
Optional: enable metrics by installing the add-ons:
kubectl apply -f install/kubernetes/addons/prometheus.yaml
kubectl apply -f install/kubernetes/addons/grafana.yaml
kubectl apply -f install/kubernetes/addons/servicegraph.yaml
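Once the add-ons are running, the Grafana dashboard can typically be reached locally with a port-forward. This assumes the stock add-on manifest, where the Grafana Pod is labelled app=grafana and serves on port 3000:
kubectl port-forward $(kubectl get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000
Then open http://localhost:3000 in a browser.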
- Start the httpbin sample:
kubectl apply -f <(istioctl kube-inject -f samples/apps/httpbin/httpbin.yaml)
- Create the Ingress resource for the httpbin service:
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - http:
      paths:
      - path: /headers
        backend:
          serviceName: httpbin
          servicePort: 8000
      - path: /delay/.*
        backend:
          serviceName: httpbin
          servicePort: 8000
EOF
- Determine the ingress URL:
Because this cluster is running on Digital Ocean, there is no LoadBalancer service available.
Get the node IP where the istio-ingress Pod is running:
kubectl get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}'
Get the NodePort of the istio-ingress Service:
kubectl get svc istio-ingress
Go to the URL: http://<nodeIP>:<nodePort>/headers
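Putting the two values together, something along these lines should reach the sample through the Istio ingress (the jsonpath assumes the HTTP port is the first one declared on the istio-ingress Service):
export INGRESS_IP=$(kubectl get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}')
export INGRESS_PORT=$(kubectl get svc istio-ingress -o jsonpath='{.spec.ports[0].nodePort}')
curl http://$INGRESS_IP:$INGRESS_PORT/headers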