kubernetes quick start
introduction
pod - the smallest deployable unit in kubernetes;
it can be a single container or a group of tightly coupled containers.
pods run inside the cluster formed by all the worker nodes, which are managed by a master.
the cluster also needs a network plugin, etcd and kubectl to provide pod networking, configuration/state storage and command-line management for the cluster and all pods within it.
installation
master - runs kubelet, dns, the network plugin, kubectl and kubeadm
node 1 - runs kubelet, kubectl and kubeadm
node 2 - runs kubelet, kubectl and kubeadm
The first thing that we are going to do is use SSH to log in to all machines.
Once we have logged in, we need to elevate privileges using sudo.
sudo su
Disable SELinux.
setenforce 0 \
&& sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Enable the br_netfilter module for cluster communication.
modprobe br_netfilter \
&& echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
Disable swap to prevent memory allocation issues.
swapoff -a
vim /etc/fstab -> comment out the swap line so swap stays off after a reboot (or use the sed one-liner below)
**with swap enabled, nodes can report incorrect ram information to the master, so always turn it off.
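If you prefer not to edit the file by hand, a sed one-liner can comment the entry out (a sketch, assuming the fstab entry contains the word swap; double-check /etc/fstab afterwards):
sed -i '/ swap / s/^/#/' /etc/fstab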
Install Docker CE.
Install the Docker prerequisites.
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker repo and install Docker.
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo \
&& yum install -y docker-ce
Add the Kubernetes repo.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install Kubernetes.
yum install -y kubelet kubeadm kubectl
Reboot.
Enable and start Docker and Kubernetes.
systemctl enable docker \
&& systemctl enable kubelet \
&& systemctl start docker \
&& systemctl start kubelet
Check which cgroup driver Docker is using.
docker info | grep -i cgroup
Set the kubelet to use the same cgroup driver (here, cgroupfs).
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Reload systemd for the changes to take effect, and then restart Kubernetes.
systemctl daemon-reload \
&& systemctl restart kubelet
*Note: Complete the following section on the MASTER ONLY!
Initialize the cluster using the IP range for Flannel.
kubeadm init --pod-network-cidr=10.244.0.0/16
Copy the kubeadm join command.
Exit sudo and run the following:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
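As an optional sanity check (not in the original steps) that kubectl can now talk to the cluster:
kubectl cluster-info
kubectl get nodes   # the master may show NotReady until the network plugin is deployed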
Deploy Flannel.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
##flannel provides the network overlay - the cluster-wide mode of communication, similar to the overlay network mentioned in docker swarm
Check the cluster state.
kubectl get pods --all-namespaces
Note: Complete the following steps on the NODES ONLY!
Run the join command that you copied earlier, then check your nodes from the master.
kubeadm is like a cheat command that does most of the setting up for us.
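For reference, the join command printed by kubeadm init has roughly this shape (the ip, token and hash below are placeholders, not real values):
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>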
on the master now:
kubectl get nodes
after following all these steps, everything the master needs - coredns, etcd, kube-apiserver, kube-controller-manager,
kube-proxy, kube-scheduler and kube-flannel - gets created as containers
under the kube-system namespace on the master.
on the worker nodes, you only see the flannel (network overlay) and kube-proxy pods being set up.
masters and nodes:
on the master node
etcd, api-server, scheduler, controller-manager, coredns, proxy, flannel - all running as pods
on the worker node
proxy
flannel (network overlay)
kubelet and the container runtime (these two are daemons running on the worker node, not pods)
kubectl get pods --all-namespaces -o wide   # -o wide shows you more info
kubernetes itself runs inside docker, which is installed on every master and worker node in the cluster.
kubernetes is basically manipulating docker under the hood,
and it uses the api server to do that manipulation.
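To see that api server directly, one option (a sketch, not part of the original notes) is to proxy it locally with kubectl and curl a couple of well-known paths:
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/nodes        # same data kubectl get nodes reads
curl http://localhost:8001/api/v1/namespaces   # list namespaces via the raw api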
pods and containers:
kubectl get namespaces
the default namespace is where everything gets placed unless you specify otherwise
kubectl create namespace podexample
kubernetes is declarative:
create a yaml file with declarative statements describing what needs to happen within the pod and what constitutes the pod.
kubectl create -f ./pod-example.yaml
pod-example.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: examplepod
  namespace: podexample
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: webcontainer
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: filecontainer
    image: debian
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
    - while true; do
        date >> /html/index.html;
        sleep 1;
      done
Here, on the second container, all you are doing with command/args is giving it something to run continuously
so it doesn't start and exit immediately.
because the debian image has no long-running process of its own, the container would shut down if nothing were specified.
kubectl --namespace=podexample delete pod examplepod
create the pod again
kubectl --namespace=podexample get pods
kubectl --namespace=podexample get pods -o wide   # gives you more info about the pod like age, ip, node, status etc.
now, if you curl the pod's ip, you will see the nginx container respond with the contents of index.html, the same file
that the filecontainer has been appending the output of date to. this works because the two containers share a volume.
## basically, your pod is composed of two containers and a shared volume. the two containers share the same ip,
which is listed as the pod's ip; you are packaging a webcontainer and a filecontainer into one entity, the examplepod pod.
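You can also check the shared volume from inside either container; a sketch using the container names from pod-example.yaml (both commands should show the same growing list of timestamps, since both paths point at the same emptyDir volume):
kubectl --namespace=podexample exec examplepod -c filecontainer -- tail -n 3 /html/index.html
kubectl --namespace=podexample exec examplepod -c webcontainer -- cat /usr/share/nginx/html/index.html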
networking:
the flannel network is what we installed here
flannel doesn't enforce network policies; there is no post-configuration
flanneld runs on each host
stores its state in etcd
it's a packet forwarder
it provides a layer 3 ipv4 network
ip route gives you all the routes set up on the host:
root@htappd01286:/proc/sys/net/bridge $ ip route
default via 10.62.212.1 dev eno16780032 proto static metric 100
10.62.212.0/22 dev eno16780032 proto kernel scope link src 10.62.215.51 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev docker_gwbridge proto kernel scope link src 172.18.0.1
networking and ip address management are provided by a network plugin.
all kube-system pods except coredns use the host's network;
once flannel is set up on the host network, it assigns pod-network ips to the coredns pods.
etcd is updated with the ip information.
the network plugin (flannel here) configures iptables on the nodes to set up routing that allows communication between pods and nodes, as well as with pods on
other nodes within the cluster.
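To see what flannel set up on a node, a couple of quick checks (a sketch; the flannel.1 interface name assumes the default vxlan backend, and 10.244 is the pod cidr we passed to kubeadm init):
cat /run/flannel/subnet.env   # the pod subnet flannel assigned to this node
ip addr show flannel.1        # the vxlan interface flannel created
ip route | grep 10.244        # routes to pod subnets on this and other nodes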
dns:
the dns pods hold records for services and pods within the cluster; if you get into a pod and look at its resolv.conf, you will see
that the cluster's dns address is set as the nameserver.
kubectl get pods
kubectl exec -it <pod-name> -- /bin/bash
$ cat /etc/resolv.conf   # shows the cluster dns address as nameserver
this way you don't have to worry about connectivity between containers within one pod, or between pods in the cluster.
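A quick way to exercise cluster dns (a sketch; dnstest is just a throwaway pod name, and busybox is used because it ships with nslookup):
kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup kubernetes.default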
replicasets:
set of pods that all have the same label
replicas-example.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: nginx
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
    - { key: tier, operator: In, values: [frontend] }
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: image-name
        ports:
        - containerPort: 80
kubectl create -f ./replicas-example.yaml
kubectl describe rs/frontend
kubectl scale rs/frontend --replicas=4
kubectl scale rs/frontend --replicas=1
-- these statements are declarative;
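Since a replicaset is just a set of pods sharing a label, you can also list them by that label; a sketch using the labels from replicas-example.yaml:
kubectl get rs frontend
kubectl get pods -l tier=frontend -o wide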
services:
put all pods behind a proxy and load balance across them
kube-proxy
takes all pods that match a selector label and puts them behind a proxy
kubectl create -f ./replicas-example.yaml
kubectl get pods
kubectl describe pod <pod-name>
./service-example.yaml:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: nginx   ## this selector was defined when creating the replicaset;
                 ## the service uses this label to identify the pods it needs to grab
                 ## and puts them behind a proxy exposed on port 32768.
  ports:
  - protocol: TCP
    port: 32768
    targetPort: 80
kubectl create -f ./service-example.yaml
kubectl describe service my-service
-- gives you details about the service like its name, namespace, labels, selector, ip, endpoint ips etc.
the endpoint ips are simply the ips of the pods in the cluster matching the selector provided while creating the service.
service -> replicaset -> pods -> containers and any shared volumes that make up one use case.
you just need the service's name or ip plus port 32768 to reach whatever sits behind it (see the curl sketch below).
it hides the individual pods; you can scale the service up/down by scaling the replica set!!
if a node dies and takes a pod with it, kubernetes - being declarative - will create a new pod on another node for you to match the replica set definition.
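A sketch of reaching the service (the cluster ip below is a placeholder; run the get command first to find the real one):
kubectl get service my-service                            # note the CLUSTER-IP column
curl http://<cluster-ip>:32768                            # from a master or worker node
curl http://my-service.default.svc.cluster.local:32768    # from inside any pod, via cluster dns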
deployments:
deployments manage replica sets
./deployexample.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: <image-name>:tag
        ports:
        - containerPort: 80
kubectl create -f ./deployexample.yaml
kubectl create -f ./service-example.yaml
kubectl describe service my-service
kubectl set image deployment.v1.apps/example-deployment nginx=<image-name>:v2
-- updates the deployment's image
kubectl get deployment
kubectl describe deployment
this gives you the ability to update the pods behind a service without taking the service down -- it stays available throughout the rollout.
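To watch or roll back such an update, kubectl's rollout subcommands apply (shown against the example-deployment above):
kubectl rollout status deployment/example-deployment    # watch the rolling update finish
kubectl rollout history deployment/example-deployment   # list revisions
kubectl rollout undo deployment/example-deployment      # roll back to the previous revision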
##kubernetes essentials is the next stop