######################################################################
## Tomas Nevar <[email protected]>
## Study notes for EX280 OpenShift Administration exam (RHEL 7)
######################################################################
## Exam objectives:
* OpenShift Container Platform general configuration and management.
- Understand and use the command line and web console.
- Create and delete projects.
- Import, export, and configure Kubernetes resources.
- Configure persistent registry storage.
- Examine resources and cluster status.
- View logs.
- Troubleshoot common problems.
* Docker image management.
- Understand and work with image registries.
- List images.
- Load images from archive files.
- Use image tags.
- Pull and push images.
* User and policy management.
- Create and delete users.
- Modify user passwords.
- Modify user and group permissions.
* Application creation and management.
- Provision persistent application storage.
- Deploy applications using Source-to-Image (S2I).
- Use Git to configure applications.
- Edit and import application templates.
- Assemble an application from existing components.
- Deploy multi-container applications.
- Create containerized services.
- Create and edit external routes.
- Secure routes using TLS certificates.
* Monitoring and Tuning.
- Install and configure metrics.
- Limit resource usage.
- Scale applications to meet increased demand.
- Control pod placement across cluster nodes.
#---------------------------------------------------------------------
## *** Minishift Installation ***
$ wget https://github.com/minishift/minishift/releases/download/v1.34.1/minishift-1.34.1-linux-amd64.tgz
$ tar xf minishift-1.34.1-linux-amd64.tgz
$ export MINISHIFT_HOME=/mnt/images/minishift
$ minishift config set vm-driver virtualbox
$ minishift config set disk-size 32G
$ minishift config set memory 4096
$ minishift config set cpu 2
$ minishift config set insecure-registry 192.168.0.0/16
$ minishift config set openshift-version v3.11.0
$ minishift config view
$ minishift start
$ oc login -u system:admin
$ minishift -h
Minishift is a command-line tool that provisions and manages single-node OpenShift clusters optimized for development workflows.
Usage:
minishift [command]
Available Commands:
addons Manages Minishift add-ons.
completion Outputs minishift shell completion for the given shell (bash or zsh)
config Modifies Minishift configuration properties.
console Opens or displays the OpenShift Web Console URL.
delete Deletes the Minishift VM.
docker-env Sets Docker environment variables.
help Help about any command
hostfolder Manages host folders for the Minishift VM.
image Exports and imports container images.
ip Gets the IP address of the running cluster.
logs Gets the logs of the running OpenShift cluster.
oc-env Sets the path of the 'oc' binary.
openshift Interacts with your local OpenShift cluster.
profile Manages Minishift profiles.
services Manage Minishift services
setup Configures pre-requisites for Minishift on the host machine
ssh Log in to or run a command on a Minishift VM with SSH.
start Starts a local OpenShift cluster.
status Gets the status of the local OpenShift cluster.
stop Stops the running local OpenShift cluster.
update Updates Minishift to the latest version.
version Gets the version of Minishift.
Flags:
--alsologtostderr log to standard error as well as files
-h, --help help for minishift
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory (default "")
--logtostderr log to standard error instead of files
--profile string Profile name (default "minishift")
--show-libmachine-logs Show logs from libmachine.
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs. Level varies from 1 to 5 (default 1).
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
Use "minishift [command] --help" for more information about a command.
#---------------------------------------------------------------------
## *** OpenShift ***
## Installation:
$ sudo yum install atomic-openshift-utils
$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
## Post-installation:
$ oc login -u admin -p redhat --insecure-skip-tls-verify=true
$ oc get nodes --show-labels
$ oc get nodes -L region
$ ssh master.lab.example.com
$ sudo -i
$ oc adm policy add-cluster-role-to-user cluster-admin admin
## Manage an OpenShift Instance Using oc:
$ oc login -u admin -p redhat https://master.lab.example.com
$ oc get projects
$ oc project default
$ oc get nodes
$ oc describe node master.lab.example.com
$ oc get pods -o wide
$ oc describe pod docker-registry-1-klmn4
$ oc exec docker-registry-1-klmn4 -- uptime
$ oc rsh docker-registry-1-klmn4
$ oc status -v
$ oc get events
$ oc get all
$ oc export pod docker-registry-1-klmn4 -o json
$ oc export svc,dc docker-registry --as-template=docker-registry
$ oc export --help ;# see Example section
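A minimal sketch of round-tripping an export as a template (file name and target project are illustrative):
$ oc export svc,dc docker-registry --as-template=docker-registry > registry-template.yaml
$ oc create -f registry-template.yaml -n openshift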
## Deploy an app:
$ oc new-app --name=hello -i php:7.0 http://services.lab.example.com/php-helloworld
$ docker build -t node-hello:latest .
$ docker images
$ docker tag a9861ee36be4 registry.lab.example.com/node-hello:latest
$ docker push registry.lab.example.com/node-hello:latest
$ oc delete all -l app=<node-hello>
$ oc get all
$ oc describe pod <hello-1-deploy>
$ oc get events --sort-by='.metadata.creationTimestamp'
$ oc get dc <hello> -o yaml
$ ssh root@node1
$ vim /etc/sysconfig/docker
$ oc rollout latest hello
$ oc logs <hello-2-abcd>
$ oc expose service <node-hello> --hostname=hello.apps.lab.example.com
$ oc debug pod/<PODNAME>
Accessing the Internal Registry:
$ oc login -u developer
$ oc whoami -t
tdRquJSA8LW5HMLPvuNqb6vMnwcXW4S1FUpOd2msCd4
$ minishift openshift registry
172.30.1.1:5000
$ ssh docker@minishift
[docker@minishift ~]$ docker login -u developer -p tdRquJSA8LW5HMLPvuNqb6vMnwcXW4S1FUpOd2msCd4 172.30.1.1:5000
Login Succeeded
$ minishift openshift config view
<registry_server>/<user_name>/<image_name>:<tag>
$ oc get svc -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry ClusterIP 172.30.1.1 <none> 5000/TCP 18d
kubernetes ClusterIP 172.30.0.1 <none> 443/TCP 18d
router ClusterIP 172.30.203.145 <none> 80/TCP,443/TCP,1936/TCP 18d
$ docker tag localimage 172.30.43.173:5000/test/localimage
$ docker push 172.30.43.173:5000/test/localimage
$ oc new-app test/localimage --name=myapp
$ oc new-app \
--name=mysqldb \
-e MYSQL_USER=minitest \
-e MYSQL_PASSWORD=ropesaidcant \
-e MYSQL_ROOT_PASSWORD=ropesaidcant \
-e MYSQL_DATABASE=minitest \
openshift/mysql-55-centos7
$ oc describe pod mysqldb | grep -A 2 'Volumes'
Volumes:
mysqldb-volume-1:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
$ oc delete all -l app=mysqldb
$ oc login -u system:admin
$ oc get pv|head -n2
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mysqldb-volume 2Gi RWO Recycle Available 56m
$ oc login -u developer
$ oc set volume dc/mysqldb --add --overwrite --name=mysqldb-volume-1 -t pvc \
--claim-name=mysqldb-volume \
--claim-size=2Gi \
--claim-mode=RWO
$ oc describe pod mysqldb | \grep -E -A 2 'Volumes|ClaimName'
Volumes:
mysqldb-volume-1:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
--
Volumes:
deployer-token-wxpvp:
Type: Secret (a volume populated by a Secret)
--
Volumes:
mysqldb-volume-1:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysqldb-volume
ReadOnly: false
default-token-m44l4:
$ oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysqldb-volume Bound mysqldb-volume 2Gi RWO 33s
*** Labels
$ oc get nodes -L region
$ oc label node node2.lab.example.com region=apps --overwrite=true
$ oc get dc/hello -o yaml > dc.yml
spec:
  nodeSelector:
    region: apps
$ oc replace -f dc.yml
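The same nodeSelector can also be set without exporting the YAML; a hedged one-liner using oc patch (assumes dc/hello as above):
$ oc patch dc/hello -p '{"spec":{"template":{"spec":{"nodeSelector":{"region":"apps"}}}}}'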
$ oc adm manage-node --schedulable=false node2.lab.example.com
$ oc adm drain node2.lab.example.com --delete-local-data
Service account:
$ oc create serviceaccount phpmyadmin-account
$ oc adm policy add-scc-to-user anyuid -z phpmyadmin-account
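To run a deployment's pods under the new service account, a minimal sketch (dc/phpmyadmin is an assumed deployment config name):
$ oc set serviceaccount dc/phpmyadmin phpmyadmin-account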
Docker import local image:
$ docker load -i phpmyadmin-latest.tar
$ docker images
<none>              <none>              93d0d7db5ce2
$ docker tag 93d0d7db5ce2 docker-registry-default.apps.lab.example.com/schedule-is/phpmyadmin:4.7
$ docker images
$ TOKEN=$(oc whoami -t)
$ docker login -u developer -p ${TOKEN} docker-registry-default.apps.lab.example.com
# Certificate signed by unknown authority:
$ scp registry.crt root@master:/etc/origin/master/registry.crt
$ sudo cp registry.crt /etc/pki/ca-trust/source/anchors/docker-registry-default.apps.lab.example.com.crt
$ sudo update-ca-trust
$ sudo systemctl restart docker
<<RUN DOCKER LOGIN AGAIN>>
$ docker push docker-registry-default.apps.lab.example.com/schedule-is/phpmyadmin:4.7
$ oc expose service version --hostname=version.apps.lab.example.com
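The route above is plain HTTP; to secure a route with TLS, a hedged edge-termination sketch (certificate file names are illustrative):
$ oc create route edge version-https --service=version \
    --hostname=version-https.apps.lab.example.com \
    --cert=version.crt --key=version.key --ca-cert=ca.crt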
$ oc autoscale dc/myapp --min 1 --max 10 --cpu-percent=80
*** Installing the Metrics Subsystem
$ docker-registry-cli registry.lab.example.com search metrics-cassandra ssl
$ docker-registry-cli registry.lab.example.com search metrics-hawkular-metrics ssl
$ docker-registry-cli registry.lab.example.com search metrics-heapster ssl
$ docker-registry-cli registry.lab.example.com search ose-recycler ssl
$ ssh root@services cat /etc/exports.d/openshift-ansible.exports
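The searches above only confirm the metrics images exist in the registry; the subsystem itself is installed with the openshift-ansible metrics playbook. A hedged sketch (playbook path as shipped with OCP 3.9/3.11; the variables shown are illustrative):
$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml \
    -e openshift_metrics_install_metrics=True \
    -e openshift_metrics_hawkular_hostname=hawkular-metrics.apps.lab.example.com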
*** Managing and Monitoring OpenShift Container Platform
Resource Requests and Limits for Pods
A pod definition can include both resource requests and resource limits:
* Resource requests
Used for scheduling, and indicate that a pod is not able to run with less than the specified amount of compute resources. The scheduler tries to find a node with sufficient compute resources to satisfy the pod requests.
* Resource limits
Used to prevent a pod from using up all compute resources from a node. The node that runs a pod configures the Linux kernel cgroups feature to enforce the resource limits for the pod.
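Requests and limits can be applied to an existing deployment configuration from the CLI; a minimal sketch, assuming dc/hello from earlier (values are illustrative):
$ oc set resources dc/hello --requests=cpu=100m,memory=128Mi --limits=cpu=500m,memory=512Mi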
$ oc get limits
$ oc describe limits
$ oc get quota
$ oc describe quota
$ oc describe node localhost|grep -A5 Allocated
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 300m (15%) 0 (0%)
memory 612Mi (15%) 0 (0%)
$ oc login -u system:admin
$ oc project default
$ oc get dc
NAME REVISION DESIRED CURRENT TRIGGERED BY
docker-registry 1 1 1 config
router 1 1 1 config
$ oc get dc docker-registry -o yaml
*** Readiness/liveness
* Liveness Probe
A liveness probe determines whether or not an application running in a container is in a healthy state. If the liveness probe detects an unhealthy state, OpenShift kills the pod and tries to redeploy it. Developers can set a liveness probe by configuring the template.spec.containers.livenessProbe stanza of a pod configuration.
* Readiness Probe
A readiness probe determines whether or not a container is ready to serve requests. If the readiness probe returns a failed state, OpenShift removes the container's IP address from the endpoints of all services. Developers can use readiness probes to signal to OpenShift that even though a container is running, it should not receive any traffic from a proxy. Developers can set a readiness probe by configuring the template.spec.containers.readinessProbe stanza of a pod configuration.
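Both probes can be attached from the CLI; a hedged sketch assuming a deployment config named dc/probe serving the endpoints used below on port 8080:
$ oc set probe dc/probe --liveness --get-url=http://:8080/health --initial-delay-seconds=5
$ oc set probe dc/probe --readiness --get-url=http://:8080/ready --initial-delay-seconds=5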
$ curl http://probe.apps.lab.example.com/health
OK
$ curl http://probe.apps.lab.example.com/ready
READY
$ oc get events --sort-by='.metadata.creationTimestamp' | grep "probe failed"
DOCUMENTATION
Cluster Administration, chapter 6.2.2 Disabling Self-provisioning
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.9/html/cluster_administration/admin-guide-managing-projects#disabling-self-provisioning
$ oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated system:authenticated:oauth
Cluster Administration, chapter 10.3. Managing role bindings
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.9/html/cluster_administration/admin-guide-manage-rbac#managing-role-bindings
$ oc policy add-role-to-user edit developer
Cluster Administration, chapter 17 Setting Quotas
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.9/html/cluster_administration/admin-guide-quota
$ oc create quota <name> --hard=pods=1
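A slightly fuller sketch with standard quota keys (values are illustrative):
$ oc create quota project-quota --hard=pods=10,requests.cpu=2,requests.memory=2Gi,limits.cpu=4,limits.memory=4Gi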
Installation and Configuration, chapter 4.2.18. Using Wildcard Routes (for a Subdomain)
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.9/html-single/installation_and_configuration/index#using-wildcard-routes
$ oc scale dc/router --replicas=0
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true
$ oc scale dc/router --replicas=1
$ oc get dc/router -o yaml|grep -i wildcard
- name: ROUTER_ALLOW_WILDCARD_ROUTES
Installation and Configuration, chapter 4.2.10. Adding a Node Selector to a Deployment Configuration
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.9/html-single/installation_and_configuration/index#adding-nodeselector-to-a-deployment
$ oc label node 10.254.254.28 "router=first"
$ oc edit dc <deploymentConfigName>
...
template:
  metadata:
    creationTimestamp: null
    labels:
      router: router1
  spec:
    nodeSelector:
      router: "first"
...
Installation and Configuration, chapter 4.2.13. Customizing the Default Routing Subdomain
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.9/html-single/installation_and_configuration/index#customizing-the-default-routing-subdomain
The default routing subdomain is defined in the /etc/origin/master/master-config.yaml file by default.
$ ssh master
$ find / -name master-config.yaml -type f -exec grep subdomain {} \;
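The value lives under the routingConfig stanza of master-config.yaml; a quick check (output shown is illustrative):
$ sudo grep -A 1 routingConfig /etc/origin/master/master-config.yaml
routingConfig:
  subdomain: apps.lab.example.com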
Installation and Configuration, 4.2.15. Using Wildcard Certificates
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.9/html-single/installation_and_configuration/index#using-wildcard-certificates
(https://www.arctiq.ca/our-blog/2016/11/5/changing-the-default-ssl-certificate-on-your-openshift-routers/)
$ oc export secrets router-certs -o yaml > backup.yaml
$ oc delete secret router-certs
$ oc secrets new router-certs tls.crt=$NEW_PEM tls.key=$NEW_PRIVATE_KEY --type='kubernetes.io/tls' --confirm
$ oc rollout latest dc/router
$ CA=/etc/origin/master
$ oc adm ca create-server-cert --signer-cert=$CA/ca.crt \
--signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt \
--hostnames='*.cloudapps.example.com' \
--cert=cloudapps.crt --key=cloudapps.key
$ cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem
$ oc adm router --default-cert=cloudapps.router.pem --service-account=router
$ CA=/var/lib/minishift/base/node
Installation and Configuration, chapter 24.2.2. Provisioning
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.9/html-single/installation_and_configuration/index#nfs-provisioning
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /tmp
    server: 172.17.0.2
  persistentVolumeReclaimPolicy: Recycle
Developer Guide, chapter 13.7. Importing Tag and Image Metadata
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.9/html-single/developer_guide/index#importing-tag-and-image-metadata
$ oc import-image <image_stream_name>[:<tag>] --from=<docker_image_repo> --confirm
$ oc login -u admin -p redhat
$ oc create -f mysql-pv.yaml
$ oc get pv
$ oc get events
$ oc rollout cancel dc/hello
$ oc edit quota <name>
$ oc logs -f bc/<name>
$ oc logs -f dc/<name>
$ oc delete all -l app=<name>
$ oc apply -n openshift -f nodejs-mysql-template.yaml
DOCKER
$ docker build -t todoapp/todoui .
$ docker tag todoapp/todoui:latest registry.lab.example.com/todoapp/todoui:latest
$ docker push registry.lab.example.com/todoapp/todoui:latest
$ oc whoami -c
$ oc delete is <name>
$ sudo yum install python-flask python-requests
$ git clone https://github.com/vivekjuneja/docker_registry_cli
$ cd docker_registry_cli
$ python ./browser.py 172.30.1.1:5000 list all
RECREATE REGISTRY FROM IMAGE
$ oc get dc
NAME REVISION DESIRED CURRENT TRIGGERED BY
docker-registry 1 1 1 config
router 1 1 1 config
$ oc describe dc/docker-registry|grep Image
Image: openshift/origin-docker-registry:v3.11.0
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry ClusterIP 172.30.1.1 <none> 5000/TCP 24d
kubernetes ClusterIP 172.30.0.1 <none> 443/TCP 24d
router ClusterIP 172.30.203.145 <none> 80/TCP,443/TCP,1936/TCP 24d
$ oc get serviceaccounts
NAME SECRETS AGE
builder 2 24d
default 2 24d
deployer 2 24d
pvinstaller 2 24d
registry 2 24d
router 2 24d
$ oc get clusterrolebindings|grep registry
registry-registry-role
$ oc delete all -l docker-registry
pod "docker-registry-1-5tp4r" deleted
replicationcontroller "docker-registry-1" deleted
service "docker-registry" deleted
deploymentconfig.apps.openshift.io "docker-registry" deleted
$ oc delete serviceaccounts/registry
$ oc delete clusterrolebindings/registry-registry-role
$ oc adm registry --images=openshift/origin-docker-registry:v3.11.0
$ minishift openshift registry
172.30.167.162:5000
$ oc describe dc/router|grep Image
Image: openshift/origin-haproxy-router:v3.11.0
$ oc get nodes --show-labels
$ oc label node localhost region=homelab
$ oc get nodes -L region