How to contribute to the CNCF Kubernetes project: https://github.com/tanjunchen/ParticipateCommunity
Kubernetes CKAD Practice Exercises
# Core Concepts (13%)
# Multi-Container Pods (10%)
# Pod Design (20%)
# State Persistence (8%)
# Configuration (18%)
# Observability (18%)
# Services & Networking (13%)
# Understand Kubernetes API primitives; create and configure basic Pods
1. List all namespaces in the cluster
kubectl get namespaces
kubectl get ns
2. List all Pods in all namespaces
kubectl get po --all-namespaces
3. List all Pods in a specific namespace
kubectl get pod -n kube-system
kubectl get pod -n <namespace>
4. List all Services in a specific namespace
kubectl get svc --all-namespaces
kubectl get svc -n <namespace>
kubectl get svc -n default
5. List all Pods, showing name and namespace, with a JSONPath expression
kubectl get pods -o=jsonpath="{.items[*]['metadata.name','metadata.namespace']}" --all-namespaces --sort-by=metadata.name
kubectl get pods -o=jsonpath="{.items[*]['metadata.name','metadata.namespace']}" --all-namespaces
6. Create an Nginx Pod in the default namespace and verify the Pod is running
kubectl run nginx --image=nginx
The command above prints:
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead
kubectl run nginx --image=nginx --restart=Never
This variant creates a Pod named exactly nginx, with no random suffix appended.
7. Create the same Nginx Pod from a yaml file
kubectl run nginx --image=nginx --restart=Never --dry-run -o yaml > nginx.yaml
nginx.yaml:
```
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```
8. Output the yaml of the Pod just created
kubectl get po nginx -o yaml
9. Output the yaml of the Pod just created, without cluster-specific information
kubectl get po nginx -o yaml --export
```
Flag --export has been deprecated, This flag is deprecated and will be removed in future.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 10.244.1.4/32
    cni.projectcalico.org/podIPs: 10.244.1.4/32
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
  selfLink: /api/v1/namespaces/default/pods/nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-6t2vf
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: k8s-node01
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-6t2vf
    secret:
      defaultMode: 420
      secretName: default-token-6t2vf
status:
  phase: Pending
  qosClass: BestEffort
```
10. Get the full details of the Pod just created
kubectl describe pod nginx
```
Name:         nginx
Namespace:    default
Priority:     0
Node:         k8s-node01/192.168.17.151
Start Time:   Sun, 05 Jan 2020 18:57:52 -0800
Labels:       run=nginx
Annotations:  cni.projectcalico.org/podIP: 10.244.1.4/32
              cni.projectcalico.org/podIPs: 10.244.1.4/32
Status:       Running
IP:           10.244.1.4
IPs:
  IP:  10.244.1.4
Containers:
  nginx:
    Container ID:   docker://3d9b8b7aba7c10b3f1fbdd470dee702bc5c0e70a46157674574b3b20bd629335
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 05 Jan 2020 18:57:57 -0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6t2vf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-6t2vf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6t2vf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                 Message
  ----    ------     ----  ----                 -------
  Normal  Scheduled  14m   default-scheduler    Successfully assigned default/nginx to k8s-node01
  Normal  Pulling    14m   kubelet, k8s-node01  Pulling image "nginx"
  Normal  Pulled     14m   kubelet, k8s-node01  Successfully pulled image "nginx"
  Normal  Created    14m   kubelet, k8s-node01  Created container nginx
  Normal  Started    14m   kubelet, k8s-node01  Started container nginx
```
11. Delete the Pod just created
kubectl delete pod nginx
kubectl delete -f <yaml file>
12. Force-delete the Pod just created
kubectl delete po nginx --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "nginx" force deleted
13. Create an Nginx Pod with version 1.17.4 and expose it on port 80
kubectl run nginx --image=nginx:1.17.4 --restart=Never --port=80
kubectl run nginx --image=nginx:1.17.4 --restart=Never --port=80 --dry-run -o yaml
```
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx:1.17.4
    name: nginx
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```
14. Change the image of the Pod just created to 1.15-alpine and verify the image has been updated
kubectl set image pod/nginx nginx=nginx:1.15-alpine
kubectl edit po nginx
kubectl get po nginx -w
15. For the Pod just updated, change the image version back to 1.17.1 and observe the change
kubectl set image pod/nginx nginx=nginx:1.17.1
kubectl edit po nginx
kubectl get po nginx -w
16. Check the image version without using the describe command
kubectl get po nginx -o=jsonpath='{.spec.containers[].image}{"\n"}'
17. Create an Nginx Pod and open a simple shell inside it
kubectl run nginx --image=nginx --restart=Never
kubectl exec -it nginx /bin/bash
18. Get the IP address of the Pod just created
kubectl get po nginx -o wide
19. Create a busybox Pod that runs the command ls at creation, and check the logs
kubectl run busybox --image=busybox --restart=Never -- ls
kubectl logs busybox
```
bin
dev
etc
home
proc
root
sys
tmp
usr
var
```
20. If the Pod crashed, check the Pod's previous logs
kubectl logs busybox -p
21. Create a busybox Pod with the command sleep 3600
kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c "sleep 3600"
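The same Pod can also be written declaratively. A minimal sketch of the manifest this command would generate (boilerplate fields such as dnsPolicy omitted):

```
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - sleep 3600
    image: busybox
    name: busybox
  restartPolicy: Never
```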
22. Check connectivity to the Nginx Pod from within the busybox Pod
kubectl get pod nginx -o wide
kubectl exec -it busybox -- wget -O- <IP address listed by the command above>
23. Create a busybox Pod that echoes the message "How are you", then delete it manually
kubectl run busybox --image=busybox --restart=Never -it -- echo "How are you"
kubectl delete po busybox
24. Create an Nginx Pod and list it at different verbosity levels
kubectl run nginx --image=nginx --restart=Never --port=80
kubectl get po nginx --v=7
```
I0105 20:54:25.780863 118442 loader.go:375] Config loaded from file: /home/k8s-master/.kube/config
I0105 20:54:25.790538 118442 round_trippers.go:420] GET https://192.168.17.150:6443/api/v1/namespaces/default/pods/nginx
I0105 20:54:25.790575 118442 round_trippers.go:427] Request Headers:
I0105 20:54:25.790584 118442 round_trippers.go:431] Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json
I0105 20:54:25.790591 118442 round_trippers.go:431] User-Agent: kubectl/v1.16.3 (linux/amd64) kubernetes/b3cbbae
I0105 20:54:25.797321 118442 round_trippers.go:446] Response Status: 200 OK in 6 milliseconds
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 3 91m
```
25. List the Nginx Pod using the custom columns POD_NAME and POD_STATUS
kubectl get po nginx -o=custom-columns="POD_NAME:.metadata.name,POD_STATUS:.status.containerStatuses"
```
nginx [map[containerID:docker://9ff65fc8e8771aa9bc069735a7d9bffa915737cdc8da2f491e330c9c11082ebb image:nginx:latest imageID:docker-pullable://nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2 lastState:map[terminated:map[containerID:docker://ade08d0814b2073079bbad9ebf51b3a3fa2af2df4a411f99b255a78d3b669eeb exitCode:0 finishedAt:2020-01-06T03:34:31Z reason:Completed startedAt:2020-01-06T03:31:59Z]] name:nginx ready:true restartCount:3 started:true state:map[running:map[startedAt:2020-01-06T03:34:31Z]]]]
```
26. List all Pods sorted by name
kubectl get po --sort-by=.metadata.name
27. List all Pods sorted by creation time
kubectl get po --sort-by=.metadata.creationTimestamp
28. Create a Pod with three busybox containers running the commands "ls; sleep 3600;", "echo Hello World; sleep 3600;" and "echo this is the third container; sleep 3600" respectively, and observe its status
sudo vim multi-containers.yaml
```
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - bin/sh
    - -c
    - ls; sleep 3600
    image: busybox
    name: busybox1
    resources: {}
  - args:
    - bin/sh
    - -c
    - echo Hello World;sleep 3600
    image: busybox
    name: busybox2
    resources: {}
  - args:
    - bin/sh
    - -c
    - echo this is third containers;sleep 3600
    image: busybox
    name: busybox3
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```
29. Check the logs of each container just created
kubectl get po
kubectl logs busybox -c busybox1
kubectl logs busybox -c busybox2
kubectl logs busybox -c busybox3
30. Check the previous logs of the second container busybox2, if any
kubectl logs busybox -c busybox2 --previous
31. Run the command ls in the third container busybox3 of the Pod above
kubectl exec busybox -c busybox3 -- ls
32. Show the metrics of the Pod above, write them to file.log, and verify
kubectl top pod busybox --containers > file.log && cat file.log
33. Create a Pod with a main busybox container that runs "while true; do echo 'Hi I am from Main container' >> /var/log/index.html; sleep 5; done", plus a sidecar container running the Nginx image exposed on port 80. Use an emptyDir volume mounted at /var/log in the busybox container and at /usr/share/nginx/html in the nginx container. Verify that both containers are running.
kubectl run muilti-containers-pod --image=busybox --restart=Never --dry-run -o yaml > multi-containers-pod.yaml
```
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: muilti-containers-pod
  name: muilti-containers-pod
spec:
  volumes:
  - name: var-logs
    emptyDir: {}
  containers:
  - image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo 'Hi I am from Main container' >> /var/log/index.html; sleep 5; done"]
    name: main-containers
    resources: {}
    volumeMounts:
    - name: var-logs
      mountPath: /var/log
  - image: nginx
    name: sidercar-container
    resources: {}
    ports:
    - containerPort: 80
    volumeMounts:
    - name: var-logs
      mountPath: /usr/share/nginx/html
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```
kubectl create -f multi-containers-pod.yaml
34. Exec into both containers and verify that index.html exists, then query it from the sidecar container with curl localhost
kubectl exec -it muilti-containers-pod -c main-containers -- sh
cat /var/log/index.html
kubectl exec -it muilti-containers-pod -c sidercar-container -- sh
cat /usr/share/nginx/html/index.html
35. Get Pods together with their label information
kubectl get po --show-labels
36. Create 5 Nginx Pods, two labeled env=pro and three labeled env=dev
kubectl run nginx-dev1 --image=nginx --restart=Never --labels=env=dev
kubectl run nginx-dev2 --image=nginx --restart=Never --labels=env=dev
kubectl run nginx-dev3 --image=nginx --restart=Never --labels=env=dev
kubectl run nginx-pro1 --image=nginx --restart=Never --labels=env=pro
kubectl run nginx-pro2 --image=nginx --restart=Never --labels=env=pro
37. Verify that all Pods were created with the correct labels
kubectl get po --show-labels
```
nginx-dev1 1/1 Running 0 2m2s env=dev
nginx-dev2 1/1 Running 0 63s env=dev
nginx-dev3 1/1 Running 0 57s env=dev
nginx-pro1 1/1 Running 0 52s env=pro
nginx-pro2 1/1 Running 0 46s env=pro
```
38. Get Pods with the label env=dev
kubectl get pods -l env=dev
```
nginx-dev1 1/1 Running 0 4m55s
nginx-dev2 1/1 Running 0 3m56s
nginx-dev3 1/1 Running 0 3m50s
```
39. Get Pods with the label env=dev and show their labels
kubectl get pods -l env=dev --show-labels
40. Get Pods with the label env=pro
kubectl get pods -l env=pro
```
NAME READY STATUS RESTARTS AGE
nginx-pro1 1/1 Running 0 8m25s
nginx-pro2 1/1 Running 0 8m19s
```
41. Get Pods with the label env=pro and show their labels
kubectl get pods -l env=pro --show-labels
42. Get Pods that have the label key env
kubectl get po -L env
43. Get Pods with the label env=dev or env=pro
kubectl get po -l 'env in (dev,pro)'
44. Get Pods with the label env=dev or env=pro and show their labels
kubectl get po -l 'env in (dev,pro)' --show-labels
45. Change the label of one of the Pods to env=uat and list all Pods to verify
kubectl label pod/nginx-pro1 env=uat --overwrite
kubectl get pods --show-labels
46. Remove the labels from the Pods we just created and confirm all labels are removed
kubectl label pod nginx-dev{1..3} env-
kubectl label pod nginx-pro{1..2} env-
kubectl get pods --show-labels
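The `{1..3}` and `{1..2}` in these commands are shell brace expansion, not kubectl syntax: the shell expands them into separate Pod names before kubectl ever runs. A quick local check, no cluster required:

```
# The shell expands the braces first, so kubectl receives three pod names.
echo nginx-dev{1..3}
# nginx-dev1 nginx-dev2 nginx-dev3
```

Note that this is a bash feature; in a strictly POSIX shell the braces are passed through literally.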
47. Add the label app=nginx to all the Pods and verify
kubectl label pod nginx-dev{1..3} app=nginx
kubectl label pod nginx-pro{1..2} app=nginx
kubectl get pods --show-labels
48. Get all nodes with their labels (with minikube you will only get the master node)
kubectl get nodes --show-labels
```
NAME STATUS ROLES AGE VERSION LABELS
k8s-master Ready master 13d v1.16.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node01 Ready <none> 13d v1.16.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux
k8s-node02 Ready <none> 13d v1.16.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux
```
49. Label the node (minikube, if that is what you are using) with nodeName=nginxnode
kubectl label node minikube nodeName=nginxnode
50. Create a Pod labeled nginx=dev and have it deployed on this node
kubectl label node k8s-node01 nginx=dev
kubectl label node k8s-node02 nginx=pro
References
medium.com/bb-tutorials-and-thoughts/practice-enough-with-these-questions-for-the-ckad-exam
# Pod Design, State Persistence
52. Verify the Pod scheduled via the node selector
pod.yaml
```
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  nodeSelector:
    nginx: dev
status: {}
```
Building on step 50:
kubectl describe po nginx | grep Node-Selectors
```
Node-Selectors: nginx=dev
```
53. Verify whether the Pod nginx we just created carries the label nginx=dev
kubectl describe po nginx | grep Labels
```
Labels: run=nginx
```
54. Annotate the Pods nginx-dev.* and nginx-pro.* with name=webapp
kubectl annotate po nginx-dev{1..3} name=webapp
kubectl annotate po nginx-pro{1..2} name=webapp
55. Verify the Pods were annotated correctly
kubectl describe po nginx-dev{1..3} | grep -i annotations
kubectl describe po nginx-pro{1..2} | grep -i annotations
56. Remove the annotations on the Pods and verify
kubectl annotate po nginx-dev{1..3} name-
kubectl annotate po nginx-pro{1..2} name-
kubectl describe po nginx-dev{1..3} | grep -i annotations
kubectl describe po nginx-pro{1..2} | grep -i annotations
57. Delete all the Pods we have created so far
kubectl delete pod --all
58. Create a Deployment named webapp with 5 replicas of the nginx image
kubectl create deployment webapp --image=nginx --dry-run -o yaml > webapp-deployment.yaml
Change replicas to 5 in webapp-deployment.yaml:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: webapp
  name: webapp
spec:
  replicas: 5
  selector:
    matchLabels:
      app: webapp
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: webapp
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
```
59. Get the Deployment we just created, with its labels
kubectl get deploy webapp --show-labels
```
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
webapp 5/5 5 5 3m59s app=webapp
```
60. Export the yaml of the Deployment
kubectl get deploy webapp -o yaml
```
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"webapp"},"name":"webapp","namespace":"default"},"spec":{"replicas":5,"selector":{"matchLabels":{"app":"webapp"}},"strategy":{},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"webapp"}},"spec":{"containers":[{"image":"nginx","name":"nginx","resources":{}}]}}},"status":{}}
  creationTimestamp: "2020-01-08T05:52:11Z"
  generation: 1
  labels:
    app: webapp
  name: webapp
  namespace: default
  resourceVersion: "30357"
  selfLink: /apis/apps/v1/namespaces/default/deployments/webapp
  uid: defb62c1-1b1d-43db-ab43-5cfe90f67d68
spec:
  progressDeadlineSeconds: 600
  replicas: 5
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: webapp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: webapp
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 5
  conditions:
  - lastTransitionTime: "2020-01-08T05:52:26Z"
    lastUpdateTime: "2020-01-08T05:52:26Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2020-01-08T05:52:11Z"
    lastUpdateTime: "2020-01-08T05:52:29Z"
    message: ReplicaSet "webapp-58867d7bbb" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 5
  replicas: 5
  updatedReplicas: 5
```
61. Get the Pods of the Deployment
kubectl get deploy --show-labels
```
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
nginx-deployment 3/3 3 3 19h app=nginx
webapp 5/5 5 5 7m1s app=webapp
```
kubectl get po -l app=webapp
```
NAME READY STATUS RESTARTS AGE
webapp-58867d7bbb-4q5nn 1/1 Running 0 7m29s
webapp-58867d7bbb-9jxc7 1/1 Running 0 7m29s
webapp-58867d7bbb-hxfv6 1/1 Running 0 7m29s
webapp-58867d7bbb-qj2nk 1/1 Running 0 7m29s
webapp-58867d7bbb-wmms4 1/1 Running 0 7m29s
```
62. Scale the Deployment from 5 to 20 replicas and verify
kubectl scale deploy webapp --replicas=20
(Trying --replicas=1000 on an under-resourced machine will cause problems.)
kubectl get po -l app=webapp
63. Get the rollout status of the Deployment
kubectl rollout status deploy webapp
64. Get the ReplicaSets created by the Deployment
kubectl get rs -l app=webapp
65. Get the yaml of the Deployment's ReplicaSets and Pods
kubectl get rs -l app=webapp -o yaml
kubectl get po -l app=webapp -o yaml
66. Delete the Deployment just created and watch all its Pods being deleted
kubectl delete deploy webapp
kubectl get po -l app=webapp -w
67. Create a webapp Deployment with image nginx:1.17.1 and container port 80, and verify the image version
kubectl create deploy webapp --image=nginx:1.17.1 --dry-run -o yaml > webapp.yaml
```
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: webapp
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: webapp
    spec:
      containers:
      - image: nginx:1.17.1
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
status: {}
```
68. Update the Deployment with image version 1.17.4 and verify
kubectl set image deploy/webapp nginx=nginx:1.17.4
kubectl get deploy -o wide
```
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-deployment 3/3 3 3 20h nginx nginx:1.7.9 app=nginx
webapp 1/1 1 1 5m7s nginx nginx:1.17.4 app=webapp
```
kubectl describe deploy webapp | grep Image
```
Image: nginx:1.17.4
```
69. Check the rollout history and make sure everything is fine after the update
kubectl rollout history deploy webapp
```
deployment.apps/webapp
REVISION CHANGE-CAUSE
1 <none>
2 <none>
```
kubectl get deploy webapp --show-labels
kubectl get rs -l app=webapp
kubectl get po -l app=webapp
70. Undo the Deployment to the previous version 1.17.1 and verify the image is back on the old version
kubectl rollout undo deploy webapp
kubectl rollout history deploy webapp
```
deployment.apps/webapp
REVISION CHANGE-CAUSE
2 <none>
3 <none>
```
kubectl describe deploy webapp | grep Image
71. Update the Deployment with image version 1.16.1, verify the image, and check the rollout history
kubectl set image deploy/webapp nginx=nginx:1.16.1
kubectl rollout status deploy webapp
```
Waiting for deployment "webapp" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "webapp" rollout to finish: 1 old replicas are pending termination...
deployment "webapp" successfully rolled out
```
kubectl rollout history deploy webapp
```
deployment.apps/webapp
REVISION CHANGE-CAUSE
2 <none>
3 <none>
4 <none>
```
```
kubectl describe deploy webapp | grep Image
Image: nginx:1.16.1
```
72. Update the Deployment back to image 1.17.1 and confirm everything is fine
kubectl rollout undo deploy webapp --to-revision=3
kubectl rollout history deploy webapp
kubectl describe deploy webapp | grep Image
73. Update the Deployment with a wrong image version 1.100 and verify something is wrong
kubectl set image deploy/webapp nginx=nginx:1.10000
kubectl rollout history deploy webapp
kubectl get pods
kubectl describe po <pod name>
Warning Failed 2s (x2 over 52s) kubelet, k8s-node01 Error: ImagePullBackOff
74. Undo the Deployment to the previous version and confirm everything is fine
kubectl rollout undo deploy webapp
kubectl rollout status deploy webapp
kubectl get pods
75. Check the history of a specific revision of the Deployment
kubectl rollout history deploy webapp --revision=<revision number>
76. Pause the Deployment rollout
kubectl rollout pause deploy webapp
77. Update the Deployment to the latest image version and check the history
kubectl set image deploy/webapp nginx=nginx:latest
kubectl rollout history deploy webapp
78. Resume the Deployment rollout
kubectl rollout resume deploy webapp
79. Check the rollout history and make sure it is on the latest revision
kubectl rollout history deploy webapp
kubectl rollout history deploy webapp --revision=9
80. Apply autoscaling to the Deployment with a minimum of 10 and a maximum of 20 replicas at a target CPU utilization of 85%; verify the hpa is created and the replica count grows from 1 to 10
kubectl autoscale deploy webapp --min=10 --max=20 --cpu-percent=85
```
kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
webapp Deployment/webapp <unknown>/85% 10 20 0 9s
```
kubectl get po -l app=webapp
```
NAME READY STATUS RESTARTS AGE
webapp-7668577c8f-98ltc 1/1 Running 0 29s
webapp-7668577c8f-bjmhz 1/1 Running 0 29s
webapp-7668577c8f-cbl2k 1/1 Running 0 29s
webapp-7668577c8f-dnb5z 1/1 Running 0 2m54s
webapp-7668577c8f-dvfrt 1/1 Running 0 29s
webapp-7668577c8f-gcqgt 1/1 Running 0 29s
webapp-7668577c8f-ht8pg 1/1 Running 0 29s
webapp-7668577c8f-nhjzs 1/1 Running 0 29s
webapp-7668577c8f-nsmqd 1/1 Running 0 29s
webapp-7668577c8f-rjg58 1/1 Running 0 29s
```
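The same autoscaler can also be written declaratively. A minimal sketch of the equivalent manifest, assuming the autoscaling/v1 API of this cluster version (without metrics-server the TARGETS column stays `<unknown>/85%`, as in the output above):

```
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: webapp
spec:
  maxReplicas: 20
  minReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  targetCPUUtilizationPercentage: 85
```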
81. Clean up the cluster by deleting the Deployment and hpa just created
kubectl delete deploy webapp
kubectl delete hpa webapp
82. Create a Job with the node image and verify that a corresponding Pod is created
kubectl create job nodeversion --image=node -- node -v
kubectl get job -w
kubectl get pod
83. Get the logs of the Job just created
kubectl logs <pod name>
84. Output the yaml of a Job with the busybox image that echoes "Hello I am from job"
kubectl create job hello-job --image=busybox --dry-run -o yaml -- echo "Hello I am from job"
```
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: hello-job
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - command:
        - echo
        - Hello I am from job
        image: busybox
        name: hello-job
        resources: {}
      restartPolicy: Never
status: {}
```
85. Copy the yaml above into hello-job.yaml and create the Job
kubectl create job hello-job --image=busybox --dry-run -o yaml -- echo "Hello I am from job" > hello-job.yaml
kubectl apply -f hello-job.yaml
86. Verify the Job and its associated Pod were created, and check the logs
kubectl get job
kubectl get po
kubectl logs <pod name>
87. Delete the Job we just created
kubectl delete job hello-job
88. Create the same Job and make it run 10 times, one after another
kubectl create job hello-job --image=busybox --dry-run -o yaml -- echo "Hello I am from job" > 10-job.yaml
Add completions: 10 to 10-job.yaml:
```
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: hello-job
spec:
  completions: 10
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - command:
        - echo
        - Hello I am from Job
        image: busybox
        name: hello-job
        resources: {}
      restartPolicy: Never
status: {}
```
kubectl get job -w
kubectl get po
```
NAME COMPLETIONS DURATION AGE
hello-job 9/10 53s 53s
nodeversion 1/1 3m14s 18m
hello-job 10/10 59s 59s
```
89. Watch it run 10 times, confirm 10 Pods are created, and delete them once complete
kubectl delete job hello-job
90. Create the same Job and make it run 10 times in parallel
kubectl create job hello-job --image=busybox --dry-run -o yaml -- echo "Hello I am from job" > 10-parallelism-job.yaml
Add parallelism: 10 to 10-parallelism-job.yaml:
```
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: hello-job
spec:
  parallelism: 10
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - command:
        - echo
        - Hello I am from Job
        image: busybox
        name: hello-job
        resources: {}
      restartPolicy: Never
status: {}
```
91. Watch the 10 parallel runs, confirm 10 Pods are created, and delete them once complete
kubectl get job -w
kubectl get po
92. Create a CronJob with the busybox image that prints the date and a hello message from the Kubernetes cluster every minute
kubectl create cronjob date-job --image=busybox --schedule="*/1 * * * *" -- bin/sh -c "date; echo Hello from kubernetes cluster"
kubectl get cronjob
kubectl get po
kubectl logs <pod name>
93. Output the yaml of the CronJob above
kubectl get cj date-job -o yaml
```
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  creationTimestamp: "2020-01-08T08:50:52Z"
  name: date-job
  namespace: default
  resourceVersion: "50543"
  selfLink: /apis/batch/v1beta1/namespaces/default/cronjobs/date-job
  uid: 22d49b8d-58ec-468a-b589-d5f60f0030c0
spec:
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
      name: date-job
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - command:
            - bin/sh
            - -c
            - date; echo Hello from kubernetes cluster
            image: busybox
            imagePullPolicy: Always
            name: date-job
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: ClusterFirst
          restartPolicy: OnFailure
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
  schedule: '*/1 * * * *'
  successfulJobsHistoryLimit: 3
  suspend: false
status:
  lastScheduleTime: "2020-01-08T08:51:00Z"
```
94. Verify that the CronJob creates a separate Job and Pod for each minute it runs, and check the Pod logs
kubectl get job
kubectl get po
kubectl logs date-job-<jobid>-<pod>
95. Delete the CronJob and verify that all associated Jobs and Pods are deleted too
kubectl delete cj date-job
# verify pods and jobs
kubectl get po
kubectl get job
96. List the PersistentVolumes in the cluster
kubectl get pv
97. Create a PersistentVolume named task-pv-volume with storageClassName manual, 10Gi of storage, accessModes ReadWriteOnce, and hostPath /mnt/data
task-pv-volume.yaml
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: "/mnt/data"
```
kubectl create -f task-pv-volume.yaml
kubectl get pv
```
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Available manual 88s
```
98. Create a PersistentVolumeClaim requesting at least 3Gi of storage with access mode ReadWriteOnce, and confirm its status is Bound
kubectl create -f task-pv-claim.yaml
kubectl get pvc
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: manual
```
99. Delete the PersistentVolume and PersistentVolumeClaim we just created
kubectl delete pvc task-pv-claim
kubectl delete pv task-pv-volume
100. Create a Pod with the redis image, configured with a volume that lasts for the lifetime of the Pod
redis-storage.yaml
```
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-storage
      mountPath: /data/redis
  volumes:
  - name: redis-storage
    emptyDir: {}
```
101. Exec into the Pod above and create a file named file.txt containing the text "This is the file" in the path /data/redis; then, in another tab, exec into the same Pod again and verify the file is there at the same path
kubectl exec -it redis /bin/sh
cd /data/redis
echo "This is the file" > file.txt
102. Delete the Pod above, create it again from the same yaml file, and verify there is no file.txt in /data/redis
kubectl delete po redis
kubectl apply -f redis-storage.yaml
kubectl exec -it redis /bin/sh
cat /data/redis/file.txt
cat: /data/redis/file.txt: No such file or directory
103. Create a PersistentVolume named task-pv-volume with storageClassName manual, 10Gi of storage, accessModes ReadWriteOnce, and hostPath /mnt/data; then create a PersistentVolumeClaim requesting at least 3Gi of storage with access mode ReadWriteOnce and confirm its status is Bound
kubectl create -f task-pv-volume.yaml
kubectl create -f task-pv-claim.yaml
kubectl get pv
kubectl get pvc
The configurations are as follows:
task-pv-volume.yaml
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: "/mnt/data"
```
task-pv-claim.yaml
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: manual
```
104. Create an Nginx Pod with container port 80 and the PersistentVolumeClaim task-pv-claim mounted at the path "/usr/share/nginx/html"
task-pv-pod.yaml
```
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
```
kubectl create -f task-pv-pod.yaml
kubectl get pv
```
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
task-pv-volume   10Gi       RWO            Retain           Bound    default/task-pv-claim   manual         85m
```
kubectl get pvc
```
NAME            STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
task-pv-claim   Bound    task-pv-volume   10Gi       RWO            manual         10m
```
To be completed.
References
https://medium.com/bb-tutorials-and-thoughts/practice-enough-with-these-questions-for-the-ckad-exam-2f42d1228552