What is a resource object?
A resource object is an instance of a resource created on k8s: the result of instantiating one of the resource api interfaces exposed by the apiserver (think of each api as a resource template) by passing parameters to it through a yaml file or the command line. For example, to create a pod on k8s we interact with the apiserver and pass it the parameters for the pod; the apiserver uses those parameters to instantiate the pod's definition and stores it in etcd, the scheduler then schedules the pod, and the kubelet on the chosen node creates it. In short, a resource object is the result of instantiating an api interface on k8s.
The logical runtime environment of k8s
Note: as shown above, k8s logically aggregates the underlying resources (memory, cpu, storage, network, etc.) of multiple node nodes into one large resource pool that k8s schedules and orchestrates as a whole. Users simply create resources on k8s; the created resources are scheduled and orchestrated by k8s itself, so users need not care which node a given resource runs on, nor about the resource situation of individual nodes.
k8s design philosophy: layered architecture
k8s design philosophy: API design principles
1. All APIs should be declarative;
2. API objects are complementary and composable, i.e. "high cohesion, loose coupling";
3. High-level APIs are designed around operational intent;
4. Low-level APIs are designed around the control needs of the high-level APIs;
5. Avoid simple wrappers; there should be no internal hidden mechanisms that cannot be observed through the external API;
6. API operation complexity should be proportional to the number of objects;
7. API object state must not depend on the state of network connections;
8. Avoid making operational mechanisms depend on global state, because keeping global state synchronized in a distributed system is very hard;
Introduction to the kubernetes API
Note: the apis on k8s are divided into built-in apis and custom apis. Built-in apis are the api interfaces that ship with a deployed k8s cluster; custom apis, also called custom resources (CRD, Custom Resource Definition), are apis extended after deployment, for example by installing additional components.
How the apiserver organizes resources
Note: the apiserver organizes the different resources logically by category, group, and version, as shown in the figure above.
Overview of k8s built-in resource objects
Commands for operating on k8s resource objects
Required fields in a resource manifest
1. apiVersion - the version of the Kubernetes API used to create the object;
2. kind - the type of object to create;
3. metadata - data that uniquely identifies the object, including a name and an optional namespace (when omitted, the default namespace is used);
4. spec - the detailed specification of the resource object (labels, container names, images, port mappings, etc.), i.e. the state the user wants the resource to be in;
5. status - generated automatically by k8s after the object (e.g. a Pod) is created and maintained by k8s; users do not define it; it reflects the actual state of the resource;
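The four user-supplied fields above can be sketched in one minimal manifest (the name and image below are illustrative, not from the examples later in this article):

```yaml
apiVersion: v1          # 1. API version of the resource
kind: Pod               # 2. type of object to create
metadata:               # 3. identifying data
  name: example         #    unique within the namespace
  namespace: default    #    optional; defaults to "default"
spec:                   # 4. desired state
  containers:
  - name: example
    image: nginx:latest
# status: filled in and maintained by k8s after creation; never written by the user
```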
The Pod resource object
Note: a pod is the smallest unit of control in k8s; a pod can run one or more containers, and the containers in a pod are scheduled together, i.e. the pod is the minimum unit of scheduling. Pods are short-lived entities that do not heal themselves and are discarded after use, so pods are generally created and managed through a Controller. Pods created by a controller recover automatically: whenever a pod's state no longer satisfies the user's desired state, the controller restarts or recreates pods so that their state and number always match what the user defined.
Example: a standalone (unmanaged) pod manifest
apiVersion: v1
kind: Pod
metadata:
  name: "pod-demo"
  namespace: default
  labels:
    app: "pod-demo"
spec:
  containers:
  - name: pod-demo
    image: "harbor.ik8s.cc/baseimages/nginx:v1"
    ports:
    - containerPort: 80
      name: http
    volumeMounts:
    - name: localtime
      mountPath: /etc/localtime
  volumes:
  - name: localtime
    hostPath:
      path: /usr/share/zoneinfo/Asia/Shanghai
Apply the manifest
root@k8s-deploy:/yaml# kubectl get pods
NAME        READY   STATUS    RESTARTS        AGE
net-test1   1/1     Running   2 (4m35s ago)   7d7h
test        1/1     Running   4 (4m34s ago)   13d
test1       1/1     Running   4 (4m35s ago)   13d
test2       1/1     Running   4 (4m35s ago)   13d
root@k8s-deploy:/yaml# kubectl apply -f pod-demo.yaml
pod/pod-demo created
root@k8s-deploy:/yaml# kubectl get pods
NAME        READY   STATUS              RESTARTS        AGE
net-test1   1/1     Running             2 (4m47s ago)   7d7h
pod-demo    0/1     ContainerCreating   0               4s
test        1/1     Running             4 (4m46s ago)   13d
test1       1/1     Running             4 (4m47s ago)   13d
test2       1/1     Running             4 (4m47s ago)   13d
root@k8s-deploy:/yaml# kubectl get pods
NAME        READY   STATUS    RESTARTS        AGE
net-test1   1/1     Running   2 (4m57s ago)   7d7h
pod-demo    1/1     Running   0               14s
test        1/1     Running   4 (4m56s ago)   13d
test1       1/1     Running   4 (4m57s ago)   13d
test2       1/1     Running   4 (4m57s ago)   13d
root@k8s-deploy:/yaml#
Note: this pod merely runs on k8s; no controller watches it, so if it is deleted or fails it will not be recovered automatically.
The Job controller; for details see https://www.cnblogs.com/qiuhom-1874/p/14157306.html
Example job controller manifest
apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo
  namespace: default
  labels:
    app: job-demo
spec:
  template:
    metadata:
      name: job-demo
      labels:
        app: job-demo
    spec:
      containers:
      - name: job-demo-container
        image: harbor.ik8s.cc/baseimages/centos7:2023
        command: ["/bin/sh"]
        args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: cache-volume
        hostPath:
          path: /tmp/jobdata
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      restartPolicy: Never
Note: a Job resource must define restartPolicy (for a Job it can only be Never or OnFailure).
Apply the manifest
root@k8s-deploy:/yaml# kubectl get pods
NAME        READY   STATUS    RESTARTS      AGE
net-test1   1/1     Running   3 (48m ago)   7d10h
pod-demo    1/1     Running   1 (48m ago)   3h32m
test        1/1     Running   5 (48m ago)   14d
test1       1/1     Running   5 (48m ago)   14d
test2       1/1     Running   5 (48m ago)   14d
root@k8s-deploy:/yaml# kubectl apply -f job-demo.yaml
job.batch/job-demo created
root@k8s-deploy:/yaml# kubectl get pods -o wide
NAME             READY   STATUS      RESTARTS      AGE     IP               NODE           NOMINATED NODE   READINESS GATES
job-demo-z8gmb   0/1     Completed   0             26s     10.200.211.130   192.168.0.34   <none>           <none>
net-test1        1/1     Running     3 (49m ago)   7d10h   10.200.211.191   192.168.0.34   <none>           <none>
pod-demo         1/1     Running     1 (49m ago)   3h32m   10.200.155.138   192.168.0.36   <none>           <none>
test             1/1     Running     5 (49m ago)   14d     10.200.209.6     192.168.0.35   <none>           <none>
test1            1/1     Running     5 (49m ago)   14d     10.200.209.8     192.168.0.35   <none>           <none>
test2            1/1     Running     5 (49m ago)   14d     10.200.211.177   192.168.0.34   <none>           <none>
root@k8s-deploy:/yaml#
Verification: check whether the job's output data exists under /tmp/jobdata on 192.168.0.34.
root@k8s-deploy:/yaml# ssh 192.168.0.34 "ls /tmp/jobdata"
data.log
root@k8s-deploy:/yaml# ssh 192.168.0.34 "cat /tmp/jobdata/data.log"
data init job at 2023-05-06_23-31-32
root@k8s-deploy:/yaml#
Note: the /tmp/jobdata/ directory on the host where the job ran contains the data the job wrote, which shows the job we defined completed successfully.
Defining a job with multiple completions
apiVersion: batch/v1
kind: Job
metadata:
  name: job-multi-demo
  namespace: default
  labels:
    app: job-multi-demo
spec:
  completions: 5
  template:
    metadata:
      name: job-multi-demo
      labels:
        app: job-multi-demo
    spec:
      containers:
      - name: job-multi-demo-container
        image: harbor.ik8s.cc/baseimages/centos7:2023
        command: ["/bin/sh"]
        args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: cache-volume
        hostPath:
          path: /tmp/jobdata
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      restartPolicy: Never
Note: the completions field under spec specifies how many pods must run to completion for the job to finish.
Apply the manifest
root@k8s-deploy:/yaml# kubectl get pods
NAME             READY   STATUS      RESTARTS      AGE
job-demo-z8gmb   0/1     Completed   0             24m
net-test1        1/1     Running     3 (73m ago)   7d11h
pod-demo         1/1     Running     1 (73m ago)   3h56m
test             1/1     Running     5 (73m ago)   14d
test1            1/1     Running     5 (73m ago)   14d
test2            1/1     Running     5 (73m ago)   14d
root@k8s-deploy:/yaml# kubectl apply -f job-multi-demo.yaml
job.batch/job-multi-demo created
root@k8s-deploy:/yaml# kubectl get job
NAME             COMPLETIONS   DURATION   AGE
job-demo         1/1           5s         24m
job-multi-demo   1/5           10s        10s
root@k8s-deploy:/yaml# kubectl get pods -o wide
NAME                   READY   STATUS              RESTARTS      AGE     IP               NODE           NOMINATED NODE   READINESS GATES
job-demo-z8gmb         0/1     Completed           0             24m     10.200.211.130   192.168.0.34   <none>           <none>
job-multi-demo-5vp9w   0/1     Completed           0             12s     10.200.211.144   192.168.0.34   <none>           <none>
job-multi-demo-frstg   0/1     Completed           0             22s     10.200.211.186   192.168.0.34   <none>           <none>
job-multi-demo-gd44s   0/1     Completed           0             17s     10.200.211.184   192.168.0.34   <none>           <none>
job-multi-demo-kfm79   0/1     ContainerCreating   0             2s      <none>           192.168.0.34   <none>           <none>
job-multi-demo-nsmpg   0/1     Completed           0             7s      10.200.211.135   192.168.0.34   <none>           <none>
net-test1              1/1     Running             3 (73m ago)   7d11h   10.200.211.191   192.168.0.34   <none>           <none>
pod-demo               1/1     Running             1 (73m ago)   3h56m   10.200.155.138   192.168.0.36   <none>           <none>
test                   1/1     Running             5 (73m ago)   14d     10.200.209.6     192.168.0.35   <none>           <none>
test1                  1/1     Running             5 (73m ago)   14d     10.200.209.8     192.168.0.35   <none>           <none>
test2                  1/1     Running             5 (73m ago)   14d     10.200.211.177   192.168.0.34   <none>           <none>
root@k8s-deploy:/yaml# kubectl get pods -o wide
NAME                   READY   STATUS      RESTARTS      AGE     IP               NODE           NOMINATED NODE   READINESS GATES
job-demo-z8gmb         0/1     Completed   0             24m     10.200.211.130   192.168.0.34   <none>           <none>
job-multi-demo-5vp9w   0/1     Completed   0             33s     10.200.211.144   192.168.0.34   <none>           <none>
job-multi-demo-frstg   0/1     Completed   0             43s     10.200.211.186   192.168.0.34   <none>           <none>
job-multi-demo-gd44s   0/1     Completed   0             38s     10.200.211.184   192.168.0.34   <none>           <none>
job-multi-demo-kfm79   0/1     Completed   0             23s     10.200.211.140   192.168.0.34   <none>           <none>
job-multi-demo-nsmpg   0/1     Completed   0             28s     10.200.211.135   192.168.0.34   <none>           <none>
net-test1              1/1     Running     3 (73m ago)   7d11h   10.200.211.191   192.168.0.34   <none>           <none>
pod-demo               1/1     Running     1 (73m ago)   3h57m   10.200.155.138   192.168.0.36   <none>           <none>
test                   1/1     Running     5 (73m ago)   14d     10.200.209.6     192.168.0.35   <none>           <none>
test1                  1/1     Running     5 (73m ago)   14d     10.200.209.8     192.168.0.35   <none>           <none>
test2                  1/1     Running     5 (73m ago)   14d     10.200.211.177   192.168.0.34   <none>           <none>
root@k8s-deploy:/yaml#
Verification: check whether job data was produced under /tmp/jobdata/ on 192.168.0.34.
root@k8s-deploy:/yaml# ssh 192.168.0.34 "ls /tmp/jobdata"
data.log
root@k8s-deploy:/yaml# ssh 192.168.0.34 "cat /tmp/jobdata/data.log"
data init job at 2023-05-06_23-31-32
data init job at 2023-05-06_23-55-44
data init job at 2023-05-06_23-55-49
data init job at 2023-05-06_23-55-54
data init job at 2023-05-06_23-55-59
data init job at 2023-05-06_23-56-04
root@k8s-deploy:/yaml#
Defining the parallelism
apiVersion: batch/v1
kind: Job
metadata:
  name: job-multi-demo2
  namespace: default
  labels:
    app: job-multi-demo2
spec:
  completions: 6
  parallelism: 2
  template:
    metadata:
      name: job-multi-demo2
      labels:
        app: job-multi-demo2
    spec:
      containers:
      - name: job-multi-demo2-container
        image: harbor.ik8s.cc/baseimages/centos7:2023
        command: ["/bin/sh"]
        args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: cache-volume
        hostPath:
          path: /tmp/jobdata
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      restartPolicy: Never
Note: the parallelism field under spec sets the degree of parallelism, i.e. how many pods run at the same time. The manifest above runs 2 pods at a time, with 6 pods needed in total.
Apply the manifest
root@k8s-deploy:/yaml# kubectl get jobs
NAME             COMPLETIONS   DURATION   AGE
job-demo         1/1           5s         34m
job-multi-demo   5/5           25s        9m56s
root@k8s-deploy:/yaml# kubectl apply -f job-multi-demo2.yaml
job.batch/job-multi-demo2 created
root@k8s-deploy:/yaml# kubectl get jobs
NAME              COMPLETIONS   DURATION   AGE
job-demo          1/1           5s         34m
job-multi-demo    5/5           25s        10m
job-multi-demo2   0/6           2s         3s
root@k8s-deploy:/yaml# kubectl get pods
NAME                    READY   STATUS      RESTARTS      AGE
job-demo-z8gmb          0/1     Completed   0             34m
job-multi-demo-5vp9w    0/1     Completed   0             10m
job-multi-demo-frstg    0/1     Completed   0             10m
job-multi-demo-gd44s    0/1     Completed   0             10m
job-multi-demo-kfm79    0/1     Completed   0             9m59s
job-multi-demo-nsmpg    0/1     Completed   0             10m
job-multi-demo2-7ppxc   0/1     Completed   0             10s
job-multi-demo2-mxbtq   0/1     Completed   0             5s
job-multi-demo2-rhgh7   0/1     Completed   0             4s
job-multi-demo2-th6ff   0/1     Completed   0             11s
net-test1               1/1     Running     3 (83m ago)   7d11h
pod-demo                1/1     Running     1 (83m ago)   4h6m
test                    1/1     Running     5 (83m ago)   14d
test1                   1/1     Running     5 (83m ago)   14d
test2                   1/1     Running     5 (83m ago)   14d
root@k8s-deploy:/yaml# kubectl get pods
NAME                    READY   STATUS      RESTARTS      AGE
job-demo-z8gmb          0/1     Completed   0             34m
job-multi-demo-5vp9w    0/1     Completed   0             10m
job-multi-demo-frstg    0/1     Completed   0             10m
job-multi-demo-gd44s    0/1     Completed   0             10m
job-multi-demo-kfm79    0/1     Completed   0             10m
job-multi-demo-nsmpg    0/1     Completed   0             10m
job-multi-demo2-7ppxc   0/1     Completed   0             16s
job-multi-demo2-8bh22   0/1     Completed   0             6s
job-multi-demo2-dbjqw   0/1     Completed   0             6s
job-multi-demo2-mxbtq   0/1     Completed   0             11s
job-multi-demo2-rhgh7   0/1     Completed   0             10s
job-multi-demo2-th6ff   0/1     Completed   0             17s
net-test1               1/1     Running     3 (83m ago)   7d11h
pod-demo                1/1     Running     1 (83m ago)   4h6m
test                    1/1     Running     5 (83m ago)   14d
test1                   1/1     Running     5 (83m ago)   14d
test2                   1/1     Running     5 (83m ago)   14d
root@k8s-deploy:/yaml# kubectl get pods -o wide
NAME                    READY   STATUS      RESTARTS      AGE     IP               NODE           NOMINATED NODE   READINESS GATES
job-demo-z8gmb          0/1     Completed   0             35m     10.200.211.130   192.168.0.34   <none>           <none>
job-multi-demo-5vp9w    0/1     Completed   0             10m     10.200.211.144   192.168.0.34   <none>           <none>
job-multi-demo-frstg    0/1     Completed   0             11m     10.200.211.186   192.168.0.34   <none>           <none>
job-multi-demo-gd44s    0/1     Completed   0             11m     10.200.211.184   192.168.0.34   <none>           <none>
job-multi-demo-kfm79    0/1     Completed   0             10m     10.200.211.140   192.168.0.34   <none>           <none>
job-multi-demo-nsmpg    0/1     Completed   0             10m     10.200.211.135   192.168.0.34   <none>           <none>
job-multi-demo2-7ppxc   0/1     Completed   0             57s     10.200.211.145   192.168.0.34   <none>           <none>
job-multi-demo2-8bh22   0/1     Completed   0             47s     10.200.211.148   192.168.0.34   <none>           <none>
job-multi-demo2-dbjqw   0/1     Completed   0             47s     10.200.211.141   192.168.0.34   <none>           <none>
job-multi-demo2-mxbtq   0/1     Completed   0             52s     10.200.211.152   192.168.0.34   <none>           <none>
job-multi-demo2-rhgh7   0/1     Completed   0             51s     10.200.211.143   192.168.0.34   <none>           <none>
job-multi-demo2-th6ff   0/1     Completed   0             58s     10.200.211.136   192.168.0.34   <none>           <none>
net-test1               1/1     Running     3 (84m ago)   7d11h   10.200.211.191   192.168.0.34   <none>           <none>
pod-demo                1/1     Running     1 (84m ago)   4h7m    10.200.155.138   192.168.0.36   <none>           <none>
test                    1/1     Running     5 (84m ago)   14d     10.200.209.6     192.168.0.35   <none>           <none>
test1                   1/1     Running     5 (84m ago)   14d     10.200.209.8     192.168.0.35   <none>           <none>
test2                   1/1     Running     5 (84m ago)   14d     10.200.211.177   192.168.0.34   <none>           <none>
root@k8s-deploy:/yaml#
Verify the job data
Note: the timestamps appended later come almost entirely in pairs, which shows two pods executed the job's task simultaneously.
The CronJob controller; for details see https://www.cnblogs.com/qiuhom-1874/p/14157306.html
Example: defining a cronjob
apiVersion: batch/v1
kind: CronJob
metadata:
  name: job-cronjob
  namespace: default
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      parallelism: 2
      template:
        spec:
          containers:
          - name: job-cronjob-container
            image: harbor.ik8s.cc/baseimages/centos7:2023
            command: ["/bin/sh"]
            args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/cronjob-data.log"]
            volumeMounts:
            - mountPath: /cache
              name: cache-volume
            - name: localtime
              mountPath: /etc/localtime
          volumes:
          - name: cache-volume
            hostPath:
              path: /tmp/jobdata
          - name: localtime
            hostPath:
              path: /usr/share/zoneinfo/Asia/Shanghai
          restartPolicy: OnFailure
Apply the manifest
root@k8s-deploy:/yaml# kubectl apply -f cronjob-demo.yaml
cronjob.batch/job-cronjob created
root@k8s-deploy:/yaml# kubectl get cronjob
NAME          SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
job-cronjob   */1 * * * *   False     0        <none>          6s
root@k8s-deploy:/yaml# kubectl get pods
NAME                         READY   STATUS      RESTARTS       AGE
job-cronjob-28056516-njddz   0/1     Completed   0              12s
job-cronjob-28056516-wgbns   0/1     Completed   0              12s
job-demo-z8gmb               0/1     Completed   0              64m
job-multi-demo-5vp9w         0/1     Completed   0              40m
job-multi-demo-frstg         0/1     Completed   0              40m
job-multi-demo-gd44s         0/1     Completed   0              40m
job-multi-demo-kfm79         0/1     Completed   0              40m
job-multi-demo-nsmpg         0/1     Completed   0              40m
job-multi-demo2-7ppxc        0/1     Completed   0              30m
job-multi-demo2-8bh22        0/1     Completed   0              30m
job-multi-demo2-dbjqw        0/1     Completed   0              30m
job-multi-demo2-mxbtq        0/1     Completed   0              30m
job-multi-demo2-rhgh7        0/1     Completed   0              30m
job-multi-demo2-th6ff        0/1     Completed   0              30m
net-test1                    1/1     Running     3 (113m ago)   7d11h
pod-demo                     1/1     Running     1 (113m ago)   4h36m
test                         1/1     Running     5 (113m ago)   14d
test1                        1/1     Running     5 (113m ago)   14d
test2                        1/1     Running     5 (113m ago)   14d
root@k8s-deploy:/yaml# kubectl get cronjob
NAME          SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
job-cronjob   */1 * * * *   False     0        12s             108s
root@k8s-deploy:/yaml# kubectl get pods
NAME                         READY   STATUS      RESTARTS       AGE
job-cronjob-28056516-njddz   0/1     Completed   0              77s
job-cronjob-28056516-wgbns   0/1     Completed   0              77s
job-cronjob-28056517-d6n9h   0/1     Completed   0              17s
job-cronjob-28056517-krsvb   0/1     Completed   0              17s
job-demo-z8gmb               0/1     Completed   0              65m
job-multi-demo-5vp9w         0/1     Completed   0              41m
job-multi-demo-frstg         0/1     Completed   0              41m
job-multi-demo-gd44s         0/1     Completed   0              41m
job-multi-demo-kfm79         0/1     Completed   0              41m
job-multi-demo-nsmpg         0/1     Completed   0              41m
job-multi-demo2-7ppxc        0/1     Completed   0              31m
job-multi-demo2-8bh22        0/1     Completed   0              31m
job-multi-demo2-dbjqw        0/1     Completed   0              31m
job-multi-demo2-mxbtq        0/1     Completed   0              31m
job-multi-demo2-rhgh7       0/1      Completed   0              31m
job-multi-demo2-th6ff        0/1     Completed   0              31m
net-test1                    1/1     Running     3 (114m ago)   7d11h
pod-demo                     1/1     Running     1 (114m ago)   4h38m
test                         1/1     Running     5 (114m ago)   14d
test1                        1/1     Running     5 (114m ago)   14d
test2                        1/1     Running     5 (114m ago)   14d
root@k8s-deploy:/yaml#
Note: a cronjob keeps the 3 most recent job runs in its history by default.
Verification: check the data written by the periodic task
Note: the timestamps show that every minute two pods run the task once.
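The retained history can be tuned with two optional fields on the cronjob spec. A sketch (the limit values are illustrative; the jobTemplate body is elided here and would be the same as in the manifest above):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: job-cronjob
  namespace: default
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3   # completed jobs to keep; default is 3
  failedJobsHistoryLimit: 1       # failed jobs to keep; default is 1
  jobTemplate: {}                 # job spec omitted for brevity
```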
RC/RS replica controllers
RC (Replication Controller): a replica controller that keeps the number of pod replicas equal to the number the user desires. It is the first-generation pod replica controller and supports only equality-based selectors (=, !=).
Example rc manifest
apiVersion: v1
kind: ReplicationController
metadata:
  name: ng-rc
spec:
  replicas: 2
  selector:
    app: ng-rc-80
  template:
    metadata:
      labels:
        app: ng-rc-80
    spec:
      containers:
      - name: pod-demo
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
Apply the manifest
root@k8s-deploy:/yaml# kubectl get pods
NAME    READY   STATUS    RESTARTS      AGE
test    1/1     Running   6 (11m ago)   16d
test1   1/1     Running   6 (11m ago)   16d
test2   1/1     Running   6 (11m ago)   16d
root@k8s-deploy:/yaml# kubectl apply -f rc-demo.yaml
replicationcontroller/ng-rc created
root@k8s-deploy:/yaml# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
ng-rc-l7xmp   1/1     Running   0             10s   10.200.211.136   192.168.0.34   <none>           <none>
ng-rc-wl5d6   1/1     Running   0             9s    10.200.155.185   192.168.0.36   <none>           <none>
test          1/1     Running   6 (11m ago)   16d   10.200.209.24    192.168.0.35   <none>           <none>
test1         1/1     Running   6 (11m ago)   16d   10.200.209.31    192.168.0.35   <none>           <none>
test2         1/1     Running   6 (11m ago)   16d   10.200.211.186   192.168.0.34   <none>           <none>
root@k8s-deploy:/yaml# kubectl get rc
NAME    DESIRED   CURRENT   READY   AGE
ng-rc   2         2         2       25s
root@k8s-deploy:/yaml#
Verification: change a pod's label and see whether a replacement pod is created.
root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS      AGE     LABELS
ng-rc-l7xmp   1/1     Running   0             2m32s   app=ng-rc-80
ng-rc-wl5d6   1/1     Running   0             2m31s   app=ng-rc-80
test          1/1     Running   6 (13m ago)   16d     run=test
test1         1/1     Running   6 (13m ago)   16d     run=test1
test2         1/1     Running   6 (13m ago)   16d     run=test2
root@k8s-deploy:/yaml# kubectl label pod/ng-rc-l7xmp app=nginx-demo --overwrite
pod/ng-rc-l7xmp labeled
root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME          READY   STATUS              RESTARTS      AGE     LABELS
ng-rc-l7xmp   1/1     Running             0             4m42s   app=nginx-demo
ng-rc-rxvd4   0/1     ContainerCreating   0             3s      app=ng-rc-80
ng-rc-wl5d6   1/1     Running             0             4m41s   app=ng-rc-80
test          1/1     Running             6 (15m ago)   16d     run=test
test1         1/1     Running             6 (15m ago)   16d     run=test1
test2         1/1     Running             6 (15m ago)   16d     run=test2
root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS      AGE     LABELS
ng-rc-l7xmp   1/1     Running   0             4m52s   app=nginx-demo
ng-rc-rxvd4   1/1     Running   0             13s     app=ng-rc-80
ng-rc-wl5d6   1/1     Running   0             4m51s   app=ng-rc-80
test          1/1     Running   6 (16m ago)   16d     run=test
test1         1/1     Running   6 (16m ago)   16d     run=test1
test2         1/1     Running   6 (16m ago)   16d     run=test2
root@k8s-deploy:/yaml# kubectl label pod/ng-rc-l7xmp app=ng-rc-80 --overwrite
pod/ng-rc-l7xmp labeled
root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS      AGE     LABELS
ng-rc-l7xmp   1/1     Running   0             5m27s   app=ng-rc-80
ng-rc-wl5d6   1/1     Running   0             5m26s   app=ng-rc-80
test          1/1     Running   6 (16m ago)   16d     run=test
test1         1/1     Running   6 (16m ago)   16d     run=test1
test2         1/1     Running   6 (16m ago)   16d     run=test2
root@k8s-deploy:/yaml#
Note: the rc controller uses its label selector to decide which pods it manages. If a pod's labels change, the rc controller creates or deletes pods so that the number of matching pods always stays equal to the number the user defined.
RS (ReplicaSet): a replica controller similar to rc; it too matches the pods under its control through a label selector, and whenever labels change or the number of pods drops below or rises above the user's expectation, it creates or deletes pods to keep the count equal to the desired number. The only difference from rc is that besides exact matching with = and !=, rs also supports set-based matching with in and notin. It is the second-generation pod replica controller on k8s.
Example rs manifest
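The set-based matching that rs adds can be written with matchExpressions in the selector; a sketch (the label values listed are illustrative):

```yaml
selector:
  matchExpressions:
  - key: app
    operator: In          # other operators: NotIn, Exists, DoesNotExist
    values:
    - rs-demo
    - rs-demo-canary
```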
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-demo
  labels:
    app: rs-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rs-demo
  template:
    metadata:
      labels:
        app: rs-demo
    spec:
      containers:
      - name: rs-demo
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        ports:
        - name: web
          containerPort: 80
          protocol: TCP
        env:
        - name: NGX_VERSION
          value: 1.16.1
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
Apply the manifest
Verification: change a pod's label and see what happens to the pods.
root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME            READY   STATUS    RESTARTS      AGE   LABELS
ng-rc-l7xmp     1/1     Running   0             18m   app=ng-rc-80
ng-rc-wl5d6     1/1     Running   0             18m   app=ng-rc-80
rs-demo-nzmqs   1/1     Running   0             71s   app=rs-demo
rs-demo-v2vb6   1/1     Running   0             71s   app=rs-demo
rs-demo-x27fv   1/1     Running   0             71s   app=rs-demo
test            1/1     Running   6 (29m ago)   16d   run=test
test1           1/1     Running   6 (29m ago)   16d   run=test1
test2           1/1     Running   6 (29m ago)   16d   run=test2
root@k8s-deploy:/yaml# kubectl label pod/rs-demo-nzmqs app=nginx --overwrite
pod/rs-demo-nzmqs labeled
root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME            READY   STATUS    RESTARTS      AGE    LABELS
ng-rc-l7xmp     1/1     Running   0             19m    app=ng-rc-80
ng-rc-wl5d6     1/1     Running   0             19m    app=ng-rc-80
rs-demo-bdfdd   1/1     Running   0             4s     app=rs-demo
rs-demo-nzmqs   1/1     Running   0             103s   app=nginx
rs-demo-v2vb6   1/1     Running   0             103s   app=rs-demo
rs-demo-x27fv   1/1     Running   0             103s   app=rs-demo
test            1/1     Running   6 (30m ago)   16d    run=test
test1           1/1     Running   6 (30m ago)   16d    run=test1
test2           1/1     Running   6 (30m ago)   16d    run=test2
root@k8s-deploy:/yaml# kubectl label pod/rs-demo-nzmqs app=rs-demo --overwrite
pod/rs-demo-nzmqs labeled
root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME            READY   STATUS    RESTARTS      AGE    LABELS
ng-rc-l7xmp     1/1     Running   0             19m    app=ng-rc-80
ng-rc-wl5d6     1/1     Running   0             19m    app=ng-rc-80
rs-demo-nzmqs   1/1     Running   0             119s   app=rs-demo
rs-demo-v2vb6   1/1     Running   0             119s   app=rs-demo
rs-demo-x27fv   1/1     Running   0             119s   app=rs-demo
test            1/1     Running   6 (30m ago)   16d    run=test
test1           1/1     Running   6 (30m ago)   16d    run=test1
test2           1/1     Running   6 (30m ago)   16d    run=test2
root@k8s-deploy:/yaml#
Note: after we change one pod's label to something else, the rs controller creates a new pod labeled app=rs-demo, because with the change the label selector now matches fewer pods than the user defined. When we change the label back to rs-demo, the selector matches more pods than desired, so the rs controller deletes a pod to bring the number of app=rs-demo pods back in line with the user's expectation.
The Deployment controller; for details see https://www.cnblogs.com/qiuhom-1874/p/14149042.html
The Deployment controller is the third-generation pod replica controller in k8s. It is more advanced than rs: on top of rs's functions it adds many higher-level features, most importantly rolling updates and rollbacks.
Example deploy manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
  labels:
    app: deploy-demo
spec:
  selector:
    matchLabels:
      app: deploy-demo
  replicas: 2
  template:
    metadata:
      labels:
        app: deploy-demo
    spec:
      containers:
      - name: deploy-demo
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
Apply the manifest
Note: the deploy controller manages the number of its pods indirectly, by creating an rs controller.
Update the pod version by changing the image tag
Apply the manifest
Update the pod version with a command
View the rs controllers created by successive versions
View the rollout history
Note: no version information appears in the history here because recording is off by default; to record it, pass the --record option by hand (e.g. append --record to the kubectl apply command), as shown below
View the details of a specific revision
Note: to view the details of a particular revision, add --revision= followed by that revision's number.
Roll back to the previous revision
Note: the kubectl rollout undo command rolls the deploy back to the previous revision.
Roll back to a specific revision number
Note: use the --to-revision option to specify a revision number and roll back to that revision.
Service resources; for details see https://www.cnblogs.com/qiuhom-1874/p/14161950.html
Traffic flow for a nodeport-type service
A nodeport-type service mainly solves access to pods from clients outside the k8s cluster: an external client accesses the exposed port on any node of the cluster, and the node forwards the traffic to the corresponding pod through its local iptables or ipvs rules, thereby letting external clients reach pods inside the cluster. To make access convenient, a load balancer is usually deployed outside the cluster: external clients access a port on the load balancer, which directs the traffic into the k8s cluster and thus to the pods.
Example ClusterIP-type svc
apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
  namespace: default
spec:
  selector:
    app: deploy-demo
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
Apply the manifest
Note: after the clusterip-type service is created, the svc gets a clusterip, and its backend endpoints are associated with pods through the label selector; traffic sent to the svc's clusterip is forwarded to the backend endpoint pods for a response. A clusterip-type svc is reachable only by clients inside the k8s cluster, not from outside, because the clusterip is an address on the k8s internal network.
Verification: access port 80 on 10.100.100.23 and see whether the backend nginx pod responds.
root@k8s-node01:~# curl 10.100.100.23
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@k8s-node01:~#
Example nodeport-type service
apiVersion: v1
kind: Service
metadata:
  name: ngx-nodeport-svc
  namespace: default
spec:
  selector:
    app: deploy-demo
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30012
Note: for a nodeport-type service, just change type to NodePort in a clusterip-type svc and specify the node port with nodePort under the ports field.
Apply the manifest
root@k8s-deploy:/yaml# kubectl apply -f nodeport-svc-demo.yaml
service/ngx-nodeport-svc created
root@k8s-deploy:/yaml# kubectl get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.100.0.1       <none>        443/TCP        16d
ngx-nodeport-svc   NodePort    10.100.209.225   <none>        80:30012/TCP   11s
root@k8s-deploy:/yaml# kubectl describe svc ngx-nodeport-svc
Name:                     ngx-nodeport-svc
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=deploy-demo
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.209.225
IPs:                      10.100.209.225
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  30012/TCP
Endpoints:                10.200.155.178:80,10.200.211.138:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
root@k8s-deploy:/yaml#
Verification: access port 30012 on any node of the k8s cluster and see whether the nginx pod can be reached.
root@k8s-deploy:/yaml# curl 192.168.0.34:30012
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@k8s-deploy:/yaml#
Note: a client outside k8s can reach the nginx pod through port 30012 on a k8s node; clients inside the cluster can of course still access it through the generated clusterip.
root@k8s-node01:~# curl 10.100.209.225:30012
curl: (7) Failed to connect to 10.100.209.225 port 30012 after 0 ms: Connection refused
root@k8s-node01:~# curl 127.0.0.1:30012
curl: (7) Failed to connect to 127.0.0.1 port 30012 after 0 ms: Connection refused
root@k8s-node01:~# curl 192.168.0.34:30012
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@k8s-node01:~#
Note: cluster-internal clients can only access port 80 on the clusterip, or port 30012 on a node's externally facing IP.
Volume resources; for details see https://www.cnblogs.com/qiuhom-1874/p/14180752.html
Mounting nfs in a pod
Prepare the data directory on the nfs server
 
root@harbor:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/data/k8sdata/kuboard *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
/pod-vol *(rw,no_root_squash)
root@harbor:~# mkdir -p /pod-vol
root@harbor:~# ls /pod-vol -d
/pod-vol
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#
Mount the nfs directory in a pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-nfs-80
  namespace: default
  labels:
    app: ngx-nfs-80
spec:
  selector:
    matchLabels:
      app: ngx-nfs-80
  replicas: 1
  template:
    metadata:
      labels:
        app: ngx-nfs-80
    spec:
      containers:
      - name: ngx-nfs-80
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: ngx-nfs-80
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: nfs-vol
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: nfs-vol
        nfs:
          server: 192.168.0.42
          path: /pod-vol
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: ngx-nfs-svc
  namespace: default
spec:
  selector:
    app: ngx-nfs-80
  type: NodePort
  ports:
  - name: ngx-nfs-svc
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30013
Apply the manifest
root@k8s-deploy:/yaml# kubectl apply -f nfs-vol.yaml
deployment.apps/ngx-nfs-80 created
service/ngx-nfs-svc created
root@k8s-deploy:/yaml# kubectl get pods
NAME                           READY   STATUS    RESTARTS      AGE
deploy-demo-6849bdf444-pvsc9   1/1     Running   1 (57m ago)   46h
deploy-demo-6849bdf444-sg8fz   1/1     Running   1 (57m ago)   46h
ng-rc-l7xmp                    1/1     Running   1 (57m ago)   47h
ng-rc-wl5d6                    1/1     Running   1 (57m ago)   47h
ngx-nfs-80-66c9697cf4-8pm9k    1/1     Running   0             7s
rs-demo-nzmqs                  1/1     Running   1 (57m ago)   47h
rs-demo-v2vb6                  1/1     Running   1 (57m ago)   47h
rs-demo-x27fv                  1/1     Running   1 (57m ago)   47h
test                           1/1     Running   7 (57m ago)   17d
test1                          1/1     Running   7 (57m ago)   17d
test2                          1/1     Running   7 (57m ago)   17d
root@k8s-deploy:/yaml# kubectl get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.100.0.1       <none>        443/TCP        18d
ngx-nfs-svc        NodePort    10.100.16.14     <none>        80:30013/TCP   15s
ngx-nodeport-svc   NodePort    10.100.209.225   <none>        80:30012/TCP   45h
root@k8s-deploy:/yaml#
Provide an index.html file under /pod-vol on the nfs server
root@harbor:~# echo "this page from nfs server.." >> /pod-vol/index.html
root@harbor:~# cat /pod-vol/index.html
this page from nfs server..
root@harbor:~#
Access the pod and see whether the index.html on the nfs server can be reached.
root@k8s-deploy:/yaml# curl 192.168.0.35:30013
this page from nfs server..
root@k8s-deploy:/yaml#
Note: the page returned by the pod is exactly the page we just created on the nfs server, which shows the pod mounted the nfs-exported directory correctly.
PV and PVC resources; for details see https://www.cnblogs.com/qiuhom-1874/p/14188621.html
Using static pv/pvc backed by nfs
Prepare the directory on the nfs server
root@harbor:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/data/k8sdata/kuboard *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
/pod-vol *(rw,no_root_squash)
/data/k8sdata/myserver/myappdata *(rw,no_root_squash)
root@harbor:~# mkdir -p /data/k8sdata/myserver/myappdata
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/myserver/myappdata".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/data/k8sdata/myserver/myappdata
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#
Create the pv
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-static-pv
  namespace: default
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /data/k8sdata/myserver/myappdata
    server: 192.168.0.42
Create a pvc bound to the pv
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-static-pvc
  namespace: default
spec:
  volumeName: myapp-static-pv
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Create a pod that uses the PVC
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-nfs-pvc-80
  namespace: default
  labels:
    app: ngx-pvc-80
spec:
  selector:
    matchLabels:
      app: ngx-pvc-80
  replicas: 1
  template:
    metadata:
      labels:
        app: ngx-pvc-80
    spec:
      containers:
      - name: ngx-pvc-80
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: ngx-pvc-80
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: data-pvc
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: data-pvc
        persistentVolumeClaim:
          claimName: myapp-static-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: ngx-pvc-svc
  namespace: default
spec:
  selector:
    app: ngx-pvc-80
  type: NodePort
  ports:
  - name: ngx-nfs-svc
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30014
Apply the manifests above
root@k8s-deploy:/yaml# kubectl apply -f nfs-static-pvc-demo.yaml
persistentvolume/myapp-static-pv created
persistentvolumeclaim/myapp-static-pvc created
deployment.apps/ngx-nfs-pvc-80 created
service/ngx-pvc-svc created
root@k8s-deploy:/yaml# kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
myapp-static-pv   2Gi        RWO            Retain           Bound    default/myapp-static-pvc                           4s
root@k8s-deploy:/yaml# kubectl get pvc
NAME               STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myapp-static-pvc   Pending   myapp-static-pv   0                                        7s
root@k8s-deploy:/yaml# kubectl get pods
NAME                            READY   STATUS    RESTARTS       AGE
deploy-demo-6849bdf444-pvsc9    1/1     Running   1 (151m ago)   47h
deploy-demo-6849bdf444-sg8fz    1/1     Running   1 (151m ago)   47h
ng-rc-l7xmp                     1/1     Running   1 (151m ago)   2d1h
ng-rc-wl5d6                     1/1     Running   1 (151m ago)   2d1h
ngx-nfs-pvc-80-f776bb6d-nwwwq   0/1     Pending   0              10s
rs-demo-nzmqs                   1/1     Running   1 (151m ago)   2d
rs-demo-v2vb6                   1/1     Running   1 (151m ago)   2d
rs-demo-x27fv                   1/1     Running   1 (151m ago)   2d
test                            1/1     Running   7 (151m ago)   18d
test1                           1/1     Running   7 (151m ago)   18d
test2                           1/1     Running   7 (151m ago)   18d
root@k8s-deploy:/yaml#
Create an index.html under /data/k8sdata/myserver/myappdata on the NFS server and check whether the page can be served:
root@harbor:~# echo "this page from nfs-server /data/k8sdata/myserver/myappdata/index.html" >> /data/k8sdata/myserver/myappdata/index.html
root@harbor:~# cat /data/k8sdata/myserver/myappdata/index.html
this page from nfs-server /data/k8sdata/myserver/myappdata/index.html
root@harbor:~#
Access the pod
root@harbor:~# curl 192.168.0.36:30014
this page from nfs-server /data/k8sdata/myserver/myappdata/index.html
root@harbor:~#
Dynamic PVC provisioning with NFS
Create the namespace, ServiceAccount, ClusterRole, ClusterRoleBinding, Role and RoleBinding
apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name; must match the deployment's env PROVISIONER_NAME
reclaimPolicy: Retain # PV reclaim policy; the default, Delete, removes the data on the NFS server as soon as the PV is deleted
mountOptions:
  #- vers=4.1    # some options behave abnormally under containerd
  #- noresvport  # tell the NFS client to use a new TCP source port when re-establishing the connection
  - noatime      # do not update the inode access timestamp on reads; improves performance under high concurrency
parameters:
  #mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true" # archive (keep) the data when the PVC is deleted; the default "false" discards it
Create the provisioner
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1
  strategy: # deployment strategy
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.0.42
            - name: NFS_PATH
              value: /data/volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.42
            path: /data/volumes
Create a PVC through the StorageClass
apiVersion: v1
kind: Namespace
metadata:
  name: myserver
---
# Test PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myserver-myapp-dynamic-pvc
  namespace: myserver
spec:
  storageClassName: managed-nfs-storage # name of the StorageClass to use
  accessModes:
    - ReadWriteMany # access mode
  resources:
    requests:
      storage: 500Mi # requested size
Create an app that uses the PVC
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0
          #imagePullPolicy: Always
          volumeMounts:
            - mountPath: "/usr/share/nginx/html/statics"
              name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-dynamic-pvc
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30015
  selector:
    app: myserver-myapp-frontend
Apply the manifests above
root@k8s-deploy:/yaml/myapp# kubectl apply -f .
namespace/nfs created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
storageclass.storage.k8s.io/managed-nfs-storage created
deployment.apps/nfs-client-provisioner created
namespace/myserver created
persistentvolumeclaim/myserver-myapp-dynamic-pvc created
deployment.apps/myserver-myapp-deployment-name created
service/myserver-myapp-service-name created
root@k8s-deploy:
Verify: were the sc, pv and pvc created? Is the pod running?
root@k8s-deploy:/yaml/myapp# kubectl get sc
NAME                  PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   k8s-sigs.io/nfs-subdir-external-provisioner   Retain          Immediate           false                  105s
root@k8s-deploy:/yaml/myapp# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS          REASON   AGE
pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c   500Mi      RWX            Retain           Bound    myserver/myserver-myapp-dynamic-pvc   managed-nfs-storage            107s
root@k8s-deploy:/yaml/myapp# kubectl get pvc -n myserver
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
myserver-myapp-dynamic-pvc   Bound    pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c   500Mi      RWX            managed-nfs-storage   117s
root@k8s-deploy:/yaml/myapp# kubectl get pods -n myserver
NAME                                              READY   STATUS    RESTARTS   AGE
myserver-myapp-deployment-name-65ff65446f-xpd5p   1/1     Running   0          2m8s
root@k8s-deploy:/yaml/myapp#
Note: the PV was created automatically by the StorageClass, and the PVC was bound to it automatically.
Verify: create an index.html under /data/volumes/ on the NFS server, access the pod's Service, and check whether the file can be served:
root@harbor:/data/volumes# ls
myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c
root@harbor:/data/volumes# cd myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c/
root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c# ls
root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c# echo "this page from nfs-server /data/volumes" >> index.html
root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c# cat index.html
this page from nfs-server /data/volumes
root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c#
Note: under /data/volumes on the NFS server, a directory named after the PVC's namespace, the PVC name and the PV name is generated automatically; this directory is created by the provisioner.
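The directory listed above follows the provisioner's default naming convention; a one-line sketch of it (assuming nfs-subdir-external-provisioner's default path pattern, namespace-pvcName-pvName):

```python
def provisioned_dir(namespace: str, pvc_name: str, pv_name: str) -> str:
    # Default subdirectory name used by nfs-subdir-external-provisioner:
    # <pvc namespace>-<pvc name>-<pv name>
    return f"{namespace}-{pvc_name}-{pv_name}"

print(provisioned_dir("myserver", "myserver-myapp-dynamic-pvc",
                      "pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c"))
# myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c
```

This matches the directory shown in the `ls` output on the NFS server.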
Access the pod
root@harbor:~# curl 192.168.0.36:30015/statics/index.html
this page from nfs-server /data/volumes
root@harbor:~#
Note: the file we just created is served, which confirms the pod has mounted the corresponding directory on the NFS server.
PV/PVC summary
A PV is an abstraction of backing network storage: it defines the storage as a cluster resource, so that one large pool can be carved into pieces for different workloads.
A PVC is a claim on a PV: a pod writes data through the PVC to the PV, and the PV persists it to the actual storage hardware.
PersistentVolume parameters
capacity: size of the PV; see kubectl explain PersistentVolume.spec.capacity
accessModes: access modes; see kubectl explain PersistentVolume.spec.accessModes
ReadWriteOnce (RWO) – the volume can be mounted read-write by a single node
ReadOnlyMany (ROX) – the volume can be mounted read-only by many nodes
ReadWriteMany (RWX) – the volume can be mounted read-write by many nodes
persistentVolumeReclaimPolicy: what happens to the volume once it is released:
Retain – the PV is kept as-is after release; an administrator must reclaim and delete it manually
Recycle – basic scrub: all data on the volume (including directories and hidden files) is deleted; only NFS and hostPath support it, and it is deprecated
Delete – the backing volume is deleted automatically
volumeMode: see kubectl explain PersistentVolume.spec.volumeMode; whether the volume is consumed as a raw block device or as a filesystem; defaults to Filesystem
mountOptions: list of additional mount options for finer-grained control
Official docs: Persistent Volumes | Kubernetes
PersistentVolumeClaim parameters
accessModes: PVC access modes; see kubectl explain PersistentVolumeClaim.spec.accessModes
ReadWriteOnce (RWO) – the claim can be mounted read-write by a single node
ReadOnlyMany (ROX) – the claim can be mounted read-only by many nodes
ReadWriteMany (RWX) – the claim can be mounted read-write by many nodes
resources: size of storage the PVC requests
selector: label selector used to choose the PV to bind
matchLabels – match PVs by label
matchExpressions – set-based match expressions (operators such as In, NotIn, Exists)
volumeName: name of a specific PV to bind
volumeMode: whether the claim is for a raw block device or a filesystem; defaults to Filesystem
Volume types
static: static volumes; the PV must be created manually before use, then a PVC is created, bound to it, and mounted into the pod; suited to scenarios where PVs and PVCs are relatively fixed.
dynamic: dynamic volumes; a StorageClass is created first, and when pods later request PVCs, PVs are provisioned dynamically through it; suited to stateful clusters such as a MySQL primary with replicas, a ZooKeeper ensemble, etc.
StorageClass official docs: Storage Classes | Kubernetes