https://github.com/kelseyhightower/kubernetes-the-hard-way
intro:
using 6 total cloud servers:
2 controllers
2 workers
1 kube api load balancer and 1 remote kubectl workstation to
connect to the cluster.
controllers
- etcd
- kube-apiserver
- nginx (healthz endpoint exposed on port 80)
- kube-controller-manager
- kube-scheduler
workers
- containerd
- kubelet - basically the kubernetes client/agent on each worker node.
- kube-proxy - deals with networking between the worker nodes.
the remote kubectl workstation hits the kube api load balancer to connect
to and issue commands to the cluster.
client tools:
In order to proceed with Kubernetes the Hard Way, there are some client tools that you need to install on your local workstation.
These include cfssl and kubectl. This lesson introduces these tools and guides you through the process of installing them.
After completing this lesson, you should have cfssl and kubectl installed correctly on your workstation.
You can find more information on how to install these tools, as well as instructions for OS X/Linux, here: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/02-client-tools.md
Commands used in the demo to install the client tools in a Linux environment:
cfssl:
wget -q --show-progress --https-only --timestamping \
https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
cfssl version
If you want to work on an i386 machine, use these commands to install cfssl instead:
wget -q --show-progress --https-only --timestamping \
https://pkg.cfssl.org/R1.2/cfssl_linux-386 \
https://pkg.cfssl.org/R1.2/cfssljson_linux-386
chmod +x cfssl_linux-386 cfssljson_linux-386
sudo mv cfssl_linux-386 /usr/local/bin/cfssl
sudo mv cfssljson_linux-386 /usr/local/bin/cfssljson
cfssl version
kubectl:
wget https://storage.googleapis.com/kubernetes-release/release/v1.16.3/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client **use --client flag to ensure kubectl client binary version is returned.
why ca and tls
cert used to confirm identity.
ca confirms that a certificate is valid.
client certs - provide authentication for various clients: admin, kube-controller-manager, kube-proxy, kube-scheduler and the kubelet on each worker node
kubernetes api server certificate - tls server cert for the kubernetes api
service account key pair - kube uses a key pair to sign service account tokens, so we need to provide a cert for that purpose.
provision ca:
In order to generate the certificates needed by Kubernetes, you must first provision a certificate authority. This lesson will guide you through the process of provisioning a new certificate authority for your Kubernetes cluster.
After completing this lesson, you should have a certificate authority, which consists of two files: ca-key.pem and ca.pem.
Here are the commands used in the demo:
cd ~/
mkdir kthw
cd kthw/
UPDATE: cfssljson and cfssl will need to be installed. To install, complete the following commands:
sudo curl -s -L -o /bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
sudo curl -s -L -o /bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
sudo curl -s -L -o /bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
sudo chmod +x /bin/cfssl*
Use this command to generate the certificate authority. Include the opening and closing curly braces to run this entire block as a single command.
{
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
cat > ca-csr.json << EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
$ ls -ltr
total 20
-rw-rw-r-- 1 cloud_user cloud_user 230 Dec 5 15:20 ca-config.json
-rw-rw-r-- 1 cloud_user cloud_user 211 Dec 5 15:24 ca-csr.json
-rw-rw-r-- 1 cloud_user cloud_user 1367 Dec 5 15:25 ca.pem**
-rw------- 1 cloud_user cloud_user 1675 Dec 5 15:25 ca-key.pem**
-rw-r--r-- 1 cloud_user cloud_user 1005 Dec 5 15:25 ca.csr
ca-key.pem is the ca's private key
ca.pem is the ca's public certificate
ca.pem will need to be copied to multiple locations so that those components have the public ca cert and can verify the certificates presented by entities authenticating to them.
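**optional check (not one of the course commands): you can inspect the ca cert with openssl; since it is self-signed, the subject and issuer should both show the names from ca-csr.json (CN=Kubernetes, O=Kubernetes, OU=CA):
openssl x509 -in ca.pem -noout -subject -issuer -dates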
generate client certificates:
Now that you have provisioned a certificate authority for the Kubernetes cluster, you are ready to begin generating certificates.
The first set of certificates you will need to generate consists of the client certificates used by various Kubernetes components.
In this lesson, we will generate the following client certificates: admin (for admin access), kubelet (the kubernetes client on the worker nodes; one cert for each worker node), kube-controller-manager (on the controller nodes), kube-proxy (on the worker nodes), and kube-scheduler (on the controller nodes). After completing this lesson, you will have the client certificate files which you will need later to set up the cluster.
Here are the commands used in the demo. The command blocks surrounded by curly braces can be entered as a single command:
cd ~/kthw
Admin Client certificate:
{
cat > admin-csr.json << EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:masters",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
*this is why the admin user can access the cluster as admin: the O (organization) field in the csr is system:masters, which the default cluster-admin clusterrolebinding is bound to:
$ kubectl get clusterrolebindings cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2020-01-01T00:04:59Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "112"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: 5984b62e-2c2a-11ea-832a-06bd4e919624
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
}
Kubelet Client certificates. Be sure to enter your actual cloud server values for all four of the variables at the top:
WORKER0_HOST=<Public hostname of your first worker node cloud server>
WORKER0_IP=<Private IP of your first worker node cloud server>
WORKER1_HOST=<Public hostname of your second worker node cloud server>
WORKER1_IP=<Private IP of your second worker node cloud server>
{
cat > ${WORKER0_HOST}-csr.json << EOF
{
"CN": "system:node:${WORKER0_HOST}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${WORKER0_IP},${WORKER0_HOST} \
-profile=kubernetes \
${WORKER0_HOST}-csr.json | cfssljson -bare ${WORKER0_HOST}
cat > ${WORKER1_HOST}-csr.json << EOF
{
"CN": "system:node:${WORKER1_HOST}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${WORKER1_IP},${WORKER1_HOST} \
-profile=kubernetes \
${WORKER1_HOST}-csr.json | cfssljson -bare ${WORKER1_HOST}
}
**so running the cfssl gencert command creates *.csr, *.pem and *-key.pem files from the *-csr.json input.
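**optional check (not one of the course commands): verify a client cert chains back to the ca and carries the expected subject, e.g. for the admin cert:
openssl verify -CAfile ca.pem admin.pem
openssl x509 -in admin.pem -noout -subject
**the subject should show CN=admin and O=system:masters, which is what maps this cert to the cluster-admin binding above.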
Controller Manager Client certificate:
{
cat > kube-controller-manager-csr.json << EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
Kube Proxy Client certificate:
{
cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:node-proxier",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
}
Kube Scheduler Client Certificate:
{
cat > kube-scheduler-csr.json << EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}
**the certs for the worker nodes will be used by kubelet on the individual worker nodes.
**these are all client side: kubelet on the worker nodes, kube-controller-manager, kube-proxy and kube-scheduler.
**as the kubelet is not installed on the controller nodes, they will not show up in kubectl get nodes
at the end. to have them show up as master nodes in the output, i'm going to install and configure kubelet on the controller nodes as well.
so here, i'm creating certs for kubelet on the controller nodes too, and will set up whatever config is needed on the controller nodes
so as to have kubelet set up and working correctly on them as well.
generating kubernetes api server cert:
*10.32.0.1 - the first ip in the service cluster ip range (10.32.0.0/24); pods in the cluster reach the kubernetes api service at this address, so it must be included in the cert.
We have generated all of the client certificates our Kubernetes cluster will need, but we also need a server certificate for the Kubernetes API.
In this lesson, we will generate one, signed with all of the hostnames and IPs that may be used later in order to access the Kubernetes API.
After completing this lesson, you will have a Kubernetes API server certificate in the form of two files called kubernetes-key.pem and kubernetes.pem.
Here are the commands used in the demo. Be sure to replace all the placeholder values in CERT_HOSTNAME with their real values from your cloud servers:
cd ~/kthw
CERT_HOSTNAME=10.32.0.1,<controller node 1 Private IP>,<controller node 1 hostname>,<controller node 2 Private IP>,<controller node 2 hostname>,<API load balancer Private IP>,<API load balancer hostname>,127.0.0.1,localhost,kubernetes.default
{
cat > kubernetes-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${CERT_HOSTNAME} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
}
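**optional check (not one of the course commands): confirm every hostname/ip from CERT_HOSTNAME made it into the cert's subject alternative names:
openssl x509 -in kubernetes.pem -noout -text | grep -A 1 "Subject Alternative Name"
**expect to see 10.32.0.1, both controller ips/hostnames, the load balancer, 127.0.0.1, localhost and kubernetes.default.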
generating the service account key pair:
*kube needs a key pair to sign tokens created for service accounts.
Kubernetes provides the ability for service accounts to authenticate using tokens. It uses a key-pair to provide signatures for those tokens.
In this lesson, we will generate a certificate that will be used as that key-pair. After completing this lesson, you will have a certificate ready to be used as a service account key-pair in the form of two files: service-account-key.pem and service-account.pem.
Here are the commands used in the demo:
cd ~/kthw
{
cat > service-account-csr.json << EOF
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
}
distribute the cert files:
Now that all of the necessary certificates have been generated, we need to move the files onto the appropriate servers.
In this lesson, we will copy the necessary certificate files to each of our cloud servers. After completing this lesson, your controller and worker nodes should each have the certificate files which they need.
Here are the commands used in the demo. Be sure to replace the placeholders with the actual values from your cloud servers.
Move certificate files to the worker nodes:
scp ca.pem <worker 1 hostname>-key.pem <worker 1 hostname>.pem user@<worker 1 public IP>:~/
scp ca.pem <worker 2 hostname>-key.pem <worker 2 hostname>.pem user@<worker 2 public IP>:~/
Move certificate files to the controller nodes:
scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem user@<controller 1 public IP>:~/
scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem user@<controller 2 public IP>:~/
**the admin, kube-scheduler, kube-controller-manager and kube-proxy certs haven't been moved anywhere; they will be used locally to generate kubeconfigs.
kubeconfigs ??
a kubernetes configuration file is a file that stores information about clusters, users, namespaces and authentication mechanisms.
contains configuration data needed to connect to and interact with one or more kubernetes clusters.
it's what kubectl config view shows, or the ~/.kube/config file.
generating kubeconfigs for the cluster:
generated using the kubectl command.
we are generating kubeconfigs for the individual services so they can reach the api server through the load balancer.
The next step in building a Kubernetes cluster the hard way is to generate kubeconfigs which will be used by the various services that will make up the cluster. In this lesson, we will generate these kubeconfigs.
After completing this lesson, you should have a set of kubeconfigs which you will need later in order to configure the Kubernetes cluster.
Here are the commands used in the demo. Be sure to replace the placeholders with actual values from your cloud servers.
Create an environment variable to store the address of the Kubernetes API, and set it to the private IP of your load balancer cloud server:
KUBERNETES_ADDRESS=<load balancer private ip>
Generate a kubelet kubeconfig for each worker node:
for instance in <worker 1 hostname> <worker 2 hostname>; do
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_ADDRESS}:6443 \
--kubeconfig=${instance}.kubeconfig
kubectl config set-credentials system:node:${instance} \
--client-certificate=${instance}.pem \
--client-key=${instance}-key.pem \
--embed-certs=true \
--kubeconfig=${instance}.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:${instance} \
--kubeconfig=${instance}.kubeconfig
kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
Generate a kube-proxy kubeconfig:
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
**i'm generating kubelet kubeconfig files for the controller nodes too here (see the earlier note about running kubelet on the controllers).
Generate a kube-controller-manager kubeconfig:
**i was originally using the api load balancer ip for --server here; that's why i had to set up the load balancer nginx backend before i could see the components working !! use 127.0.0.1 as below.
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
Generate a kube-scheduler kubeconfig:
**same note as above: use 127.0.0.1 for --server, not the api load balancer ip.
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
Generate an admin kubeconfig:
**i was originally using the api load balancer ip for --server here too; that's why i had to set up the load balancer nginx backend before i could see the components working !!
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
}
all kubeconfig files generated:
$ ls -ltr *kubeconfig*
-rw------- 1 cloud_user cloud_user 6572 Dec 5 16:32 chaitanyah3683c.mylabserver.com.kubeconfig *worker node 1
-rw------- 1 cloud_user cloud_user 6560 Dec 5 16:41 chaitanyah3685c.mylabserver.com.kubeconfig *worker node 2
-rw------- 1 cloud_user cloud_user 6372 Dec 5 16:52 kube-proxy.kubeconfig
-rw------- 1 cloud_user cloud_user 6446 Dec 5 16:55 kube-controller-manager.kubeconfig
-rw------- 1 cloud_user cloud_user 6400 Dec 5 16:57 kube-scheduler.kubeconfig
-rw------- 1 cloud_user cloud_user 6320 Dec 5 16:59 admin.kubeconfig
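**optional check (not one of the course commands): inspect any of the generated kubeconfigs to confirm the cluster, user and context entries, e.g.:
kubectl config view --kubeconfig=admin.kubeconfig
kubectl config get-contexts --kubeconfig=admin.kubeconfig
**since --embed-certs=true was used, the cert data is embedded in the file and masked in the view output.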
distributing kubeconfig files:
Now that we have generated the kubeconfig files that we will need in order to configure our Kubernetes cluster, we need to make sure that each cloud server has a copy of the kubeconfig files that it will need.
In this lesson, we will distribute the kubeconfig files to each of the worker and controller nodes so that they will be in place for future lessons. After completing this lesson, each of your worker and controller nodes should have a copy of the kubeconfig files it needs.
Here are the commands used in the demo. Be sure to replace the placeholders with the actual values from your cloud servers.
Move kubeconfig files to the worker nodes:
scp <worker 1 hostname>.kubeconfig kube-proxy.kubeconfig user@<worker 1 public IP>:~/
scp <worker 2 hostname>.kubeconfig kube-proxy.kubeconfig user@<worker 2 public IP>:~/
Move kubeconfig files to the controller nodes:
scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig user@<controller 1 public IP>:~/
scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig user@<controller 2 public IP>:~/
data encryption configuration in kubernetes:
One important security practice is to ensure that sensitive data is never stored in plain text. Kubernetes offers the ability to encrypt sensitive data when it is stored.
However, in order to use this feature it is necessary to provide Kubernetes with a data encryption config containing an encryption key.
can encrypt sensitive data at rest
secrets are encrypted so that they are never stored on disk in plain text
we will generate an encryption key and put it into a configuration file, then copy the file to the kubernetes controller servers.
In order to make use of Kubernetes' ability to encrypt sensitive data at rest, you need to provide Kubernetes with an encryption key using a data encryption config file.
This lesson walks you through the process of creating an encryption key and storing it in the necessary file,
as well as showing how to copy that file to your Kubernetes controllers.
After completing this lesson, you should have a valid Kubernetes data encryption config file, and there should be a copy of that file on each of your Kubernetes controller servers.
Here are the commands used in the demo.
Generate the Kubernetes data encryption config file containing the encryption key:
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat > encryption-config.yaml << EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
Copy the file to both controller servers:
scp encryption-config.yaml user@<controller 1 public ip>:~/
scp encryption-config.yaml user@<controller 2 public ip>:~/
$ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cloud_user@chaitanyah3686c:~/kube-har-way$ echo $ENCRYPTION_KEY
X5VxUUhQdJHBP0dhV862jxyvErVfCz/B1OxPZhkIV6U=
this reads 32 random bytes from /dev/urandom and base64-encodes them into a string.
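**for later (the upstream kthw repo does a similar check in its smoke test): once the full cluster is up, you can confirm secrets really are encrypted in etcd. create a secret, then read its raw key from etcd on a controller; the stored value should start with k8s:enc:aescbc:v1:key1 instead of plain text:
kubectl create secret generic kubernetes-the-hard-way --from-literal="mykey=mydata"
sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem \
/registry/secrets/default/kubernetes-the-hard-way | hexdump -C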
etcd??
distributed key value store used to store data across a cluster of machines.
stores data across a distributed cluster of machines and makes sure the data is synchronized across all of them.
stores data about the state of the cluster and allows controllers to get synced data about the cluster.
not needed on the worker nodes; only needed on the controller nodes.
creating etcd cluster:
Before you can stand up controllers for a Kubernetes cluster, you must first build an etcd cluster across your Kubernetes control nodes. This lesson provides a demonstration of how to set up an etcd cluster in preparation for bootstrapping Kubernetes.
After completing this lesson, you should have a working etcd cluster that consists of your Kubernetes control nodes.
Here are the commands used in the demo (note that these have to be run on both controller servers, with a few differences between them):
wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/v3.3.5/etcd-v3.3.5-linux-amd64.tar.gz"
for kubernetes v1.17.0:
wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/v3.3.5/etcd-v3.3.5-linux-amd64.tar.gz"
tar -xvf etcd-v3.3.5-linux-amd64.tar.gz
sudo mv etcd-v3.3.5-linux-amd64/etcd* /usr/local/bin/
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
Set up the following environment variables. Be sure you replace all of the <placeholder values> with their corresponding real values:
ETCD_NAME=<cloud server hostname>
INTERNAL_IP=$(curl http://169.254.169.254/latest/meta-data/local-ipv4) **this curl is simply a call to the aws ec2 instance metadata endpoint that returns the instance's private ip.
INITIAL_CLUSTER=<controller 1 hostname>=https://<controller 1 private ip>:2380,<controller 2 hostname>=https://<controller 2 private ip>:2380
Create the systemd unit file for etcd using this command. Note that this command uses the environment variables that were set earlier:
cat << EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster ${INITIAL_CLUSTER} \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Start and enable the etcd service:
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
You can verify that the etcd service started up successfully like so:
sudo systemctl status etcd
Use this command to verify that etcd is working correctly. The output should list your two etcd nodes:
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
**RUN THE SAME STEPS ON THE OTHER CONTROLLER TOO..
$ echo $INITIAL_CLUSTER '||' $INTERNAL_IP '||' $ETCD_NAME
chaitanyah3681c.mylabserver.com=https://172.31.46.223:2380,chaitanyah3682c.mylabserver.com=https://172.31.42.134:2380 || 172.31.42.134 || chaitanyah3682c.mylabserver.com
cloud_user@chaitanyah3682c:~$
* you want to run systemctl daemon-reload whenever a systemd unit file changes.
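**e.g. the sequence to use after editing /etc/systemd/system/etcd.service:
sudo systemctl daemon-reload
sudo systemctl restart etcd
sudo systemctl status etcd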
$ sudo ETCDCTL_API=3 etcdctl member list --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
3a9f545605563f4, started, chaitanyah3681c.mylabserver.com, https://172.31.46.223:2380, https://172.31.46.223:2379
80c3fb66249e63b, started, chaitanyah3682c.mylabserver.com, https://172.31.42.134:2380, https://172.31.42.134:2379
cloud_user@chaitanyah3681c:~$
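**optional (not one of the course commands): besides member list, etcdctl can also check endpoint health directly on each controller:
sudo ETCDCTL_API=3 etcdctl endpoint health \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem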
kubernetes control plane ??
is a set of services that control the kubernetes cluster.
the control plane makes global decisions about the cluster and detects and responds to cluster events
control plane components:
kube-apiserver
serves the kubernetes api. allows for any interaction with the cluster
the api is the interface to the control plane and in turn to the cluster
etcd
cluster datastore
kube-controller-manager
1 service that has a series of controllers that provide a wide range of functionality
kube-scheduler
schedules pods on available worker nodes
finding a node to run a pod on.
cloud-controller-manager
handles interaction with underlying cloud providers
provides integration points with underlying cloud services
control plane overview:
controller1 and controller2 each run:
- etcd
- kube-apiserver
- kube-controller-manager
- kube-scheduler
kube api load balancer (load balancing to the kube-apiserver endpoints on controller 1 and 2)
installing kubernetes control plane binaries:
The first step in bootstrapping a new Kubernetes control plane is to install the necessary binaries on the controller servers. We will walk through the process of downloading and installing the binaries on both Kubernetes controllers.
This will prepare your environment for the lessons that follow, in which we will configure these binaries to run as systemd services.
You can install the control plane binaries on each control node like this:
sudo mkdir -p /etc/kubernetes/config
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl"
for v1.17.0:
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl"
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
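**quick check that the binaries are in place and on the PATH:
kube-apiserver --version
kube-controller-manager --version
kube-scheduler --version
kubectl version --client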
setting up kubernetes api server:
The Kubernetes API server provides the primary interface for the Kubernetes control plane and the cluster as a whole. When you interact with Kubernetes, you are nearly always doing it through the Kubernetes API server.
This lesson will guide you through the process of configuring the kube-apiserver service on your two Kubernetes control nodes. After completing this lesson, you should have a systemd unit set up to run kube-apiserver as a service on each Kubernetes control node.
You can configure the Kubernetes API server like so:
sudo mkdir -p /var/lib/kubernetes/
sudo cp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
Set some environment variables that will be used to create the systemd unit file. Make sure you replace the placeholders with their actual values:
INTERNAL_IP=$(curl http://169.254.169.254/latest/meta-data/local-ipv4)
CONTROLLER0_IP=<private ip of controller 0>
CONTROLLER1_IP=<private ip of controller 1>
Generate the kube-apiserver unit file for systemd:
cat << EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--enable-swagger-ui=true \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://$CONTROLLER0_IP:2379,https://$CONTROLLER1_IP:2379 \\
--event-ttl=1h \\
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2 \\
--kubelet-preferred-address-types=InternalIP,InternalDNS,Hostname,ExternalIP,ExternalDNS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
**here, when setting up for 1.16.0, i noticed that the Initializers admission controller was failing as it is no longer supported in newer versions.
more details here:
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
also, there should not be spaces in the comma-separated value for --enable-admission-plugins.
removed the spaces between the admission controllers, did a daemon-reload, and the apiserver was functioning.
used journalctl -xe to debug the issue.
--kubelet-preferred-address-types=InternalIP,InternalDNS,Hostname,ExternalIP,ExternalDNS **this is the only flag added in this course on top of the original kthw; added to ensure apiserver-to-kubelet communication works in all cases.
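**for debugging a single unit like this, the per-service logs are usually easier to read than plain journalctl -xe:
sudo journalctl -u kube-apiserver --no-pager -n 50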
setting up kubernetes controller manager:
Now that we have set up kube-apiserver, we are ready to configure kube-controller-manager.
This lesson walks you through the process of configuring a systemd service for the Kubernetes Controller Manager.
After completing this lesson, you should have the kubeconfig and systemd unit file set up and ready to run the kube-controller-manager service on both of your control nodes.
You can configure the Kubernetes Controller Manager like so:
sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/
Generate the kube-controller-manager systemd unit file:
cat << EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
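**optional: the course enables and starts all three control plane services together after the scheduler is set up, but you can bring this one up and sanity check it now if you want:
sudo systemctl daemon-reload
sudo systemctl enable kube-controller-manager
sudo systemctl start kube-controller-manager
sudo systemctl status kube-controller-manager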
setting up kubernetes scheduler:
Now we are ready to set up the Kubernetes scheduler. This lesson will walk you through the process of configuring the kube-scheduler
systemd service. Since this is the last of the three control plane services that need to be set up in this section,
this lesson also guides you through enabling and starting all three services on both control nodes.
Finally, this lesson shows you how to verify that your Kubernetes controllers are healthy and working so far. After completing this lesson,
you will have a basic, working Kubernetes control plane distributed across your two control nodes.
You can configure the Kubernetes Scheduler like this.
Copy kube-scheduler.kubeconfig into the proper location:
sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/
Generate the kube-scheduler yaml config file.
** note: for later versions (since 1.13) the componentconfig/v1alpha1 api has been deprecated. use kubescheduler.config.k8s.io/v1alpha1 instead.
cat << EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
Create the kube-scheduler systemd unit file:
cat << EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
**with 1.16.3; had to modify the kube-scheduler systemd file like so:
cat << EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=kube scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--authentication-kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \
--authorization-kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \
--kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--v=2
Restart=on-failure
RestartSec=5