
# Install Kubernetes Cluster using kubeadm

Follow this documentation to set up a Kubernetes cluster on CentOS 7 virtual machines.

This documentation guides you through setting up a cluster with one master node and one worker node.

## Assumptions

| Role   | FQDN                | IP            | OS       | RAM | CPU |
| ------ | ------------------- | ------------- | -------- | --- | --- |
| Master | kmaster.example.com | 172.42.42.100 | CentOS 7 | 2G  | 2   |
| Worker | kworker.example.com | 172.42.42.101 | CentOS 7 | 1G  | 1   |

## On both kmaster and kworker

Perform all the commands as the root user unless otherwise specified.

### Pre-requisites

#### Update /etc/hosts

So that each node in the cluster can reach the others by name:

```
cat >>/etc/hosts<<EOF
172.42.42.100 kmaster.example.com kmaster
172.42.42.101 kworker.example.com kworker
EOF
```
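
To confirm the entries work, ping each node by its short name (from kmaster, for example):

```
ping -c 2 kworker
```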

#### Install, enable and start docker service

Use the Docker repository to install docker.

If you install docker from the CentOS OS repository, the docker version may be too old to work with Kubernetes v1.13.0 and above.

```
yum install -y -q yum-utils device-mapper-persistent-data lvm2 > /dev/null 2>&1
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo > /dev/null 2>&1
yum install -y -q docker-ce >/dev/null 2>&1

systemctl enable docker
systemctl start docker
```
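
Optionally, confirm that docker is installed and the service is running:

```
docker --version
systemctl status docker --no-pager
```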

#### Disable SELinux

```
setenforce 0
sed -i --follow-symlinks 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
```

#### Disable Firewall

```
systemctl disable firewalld
systemctl stop firewalld
```

#### Disable swap

```
sed -i '/swap/d' /etc/fstab
swapoff -a
```

#### Update sysctl settings for Kubernetes networking

```
cat >>/etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```
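
If sysctl --system complains that the net.bridge keys are unknown, the br_netfilter kernel module is not loaded yet; load it and re-apply the settings:

```
modprobe br_netfilter
sysctl --system
```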

### Kubernetes Setup

#### Add yum repository

```
cat >>/etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```

#### Install Kubernetes

```
yum install -y kubeadm kubelet kubectl
```

#### Enable and start kubelet service

```
systemctl enable kubelet
systemctl start kubelet
```

## On kmaster

#### Initialize Kubernetes Cluster

```
kubeadm init --apiserver-advertise-address=172.42.42.100 --pod-network-cidr=10.244.0.0/16
```

#### Copy kube config

To use the kubectl command to connect to and interact with the cluster, the user needs a kube config file.

In my case, the user account is venkatn.

```
mkdir /home/venkatn/.kube
cp /etc/kubernetes/admin.conf /home/venkatn/.kube/config
chown -R venkatn:venkatn /home/venkatn/.kube
```

#### Deploy flannel network

This has to be done as the user from the previous step (in my case, venkatn):

```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

**Important Note:**

If your virtual machine has just one network interface, the flannel resource above will work as-is. If you used Vagrant to provision the virtual machine (with the VirtualBox provider), the default eth0 interface will have an IP address like 10.0.2.15. This is Vagrant-specific: you won't be able to ssh into the machine over this interface; it is only useful for getting into the machine with "vagrant ssh".

In your Vagrantfile you will have to add a public network with an IP address so that you can reach the machine from your host machine. This network interface will be added as eth1, as in the sketch below.
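
A minimal Vagrantfile sketch of such a network definition, assuming the VirtualBox provider and the master IP from the Assumptions table (the box name is illustrative):

```
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  # This adds a second interface, which appears inside the VM as eth1
  config.vm.network "public_network", ip: "172.42.42.100"
end
```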

If this is the case, we need to modify the flannel resource to make eth1 the interface flannel uses. Otherwise flannel will pick eth0 and pod-to-pod communication won't work. The change is sketched below.
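
The modification is a single extra argument on the flannel container in kube-flannel.yml; the excerpt below is a sketch (the image tag will vary with the flannel release):

```
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1     # use eth1 instead of the default eth0
```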

You can use the below command with the modified kube-flannel.yml (from my repo); I have added the --iface=eth1 option to the flannel container.

```
kubectl apply -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/vagrant-provisioning/kube-flannel.yml
```

#### Cluster join command

```
kubeadm token create --print-join-command
```

## On kworker

#### Join the cluster

Use the output of the kubeadm token create command from the previous step on the master and run it here.
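
The join command printed on the master looks like the following; the token and hash below are placeholders, so use the exact command from your own output:

```
kubeadm join 172.42.42.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```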

## Verifying the cluster

#### Get nodes status

```
kubectl get nodes
```
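
Once the worker has joined and the flannel pods are running, both nodes should report Ready. The output will look something like this (ages and versions will differ):

```
NAME                  STATUS   ROLES    AGE   VERSION
kmaster.example.com   Ready    master   10m   v1.13.0
kworker.example.com   Ready    <none>   5m    v1.13.0
```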

#### Get component status

```
kubectl get cs
```

Have Fun!!