1. Pod Controllers
1.1 Introduction
A Pod controller is an intermediate layer for managing Pods: it ensures that Pod resources match their desired state. When a Pod fails, the controller tries to restart it according to the restart policy; if restarting does not help, it recreates the Pod.
Among the Master components, the API Server only stores resources in etcd and notifies the relevant clients (kubelet, kube-scheduler, kube-proxy, kube-controller-manager, and so on) of changes. When kube-scheduler observes a Pod object in the unbound state, it selects a suitable worker node for it. However, one of the core functions of Kubernetes is to keep each resource object's current state (status) matching the user's desired state (spec), continuously "reconciling" the current state toward the desired state in order to manage containerized applications. That is the job of kube-controller-manager. kube-controller-manager is a single standalone daemon, but it contains many controller types with different functions, each responsible for its own kind of reconciliation task, as shown in Figure 5-1.
1.2 Types of Pod Controllers
- ReplicationController (RC): ensures that a specific number of Pod replicas are running at all times; if there are too many, RC kills some, and if there are too few, RC creates new ones.
- ReplicaSet (RS): creates the specified number of Pod replicas on the user's behalf and ensures the replica count matches the desired state; also supports scaling up and down.
- Deployment (important): works on top of ReplicaSet and is used to manage stateless applications; currently the best controller for that purpose. It supports rolling updates and rollbacks, and provides declarative configuration.
- DaemonSet: ensures that each node in the cluster runs exactly one copy of a specific Pod; typically used for system-level background tasks, such as ELK agents.
- Job: exits as soon as the task completes; no restart or rebuild is needed.
- CronJob: controls periodic tasks that do not need to run continuously in the background.
- StatefulSet: manages stateful applications.
This article mainly covers three Pod controller types, ReplicaSet, Deployment, and DaemonSet, and finishes with a look at StatefulSet.
2. ReplicaSet
2.1 Getting to Know ReplicaSet
(1) What is a ReplicaSet?
ReplicaSet is the next-generation replica controller, an upgraded version of ReplicationController (RC). The only difference between the two is selector support: ReplicaSet supports the set-based selector requirements described in the labels user guide, while ReplicationController supports only equality-based selector requirements.
ReplicaSet (RS for short) is one implementation of the Pod controller type. It ensures that the number of Pod replicas under its control exactly matches the desired count at all times. As shown in Figure 5-4, once a ReplicaSet starts, it looks for Pod objects in the cluster that match its label selector. If the number of active objects does not match the desired number, extras are deleted and missing ones are created from the Pod template; once the replica count matches the desired value, the controller enters the next reconciliation loop.
(2) How to use a ReplicaSet
Most kubectl commands that support ReplicationController also support ReplicaSets, with the exception of rolling-update; if you need rolling updates, use a Deployment instead.
Although ReplicaSets can be used on their own, they are mainly used by Deployments as the mechanism for creating, deleting, and updating Pods. When you use a Deployment, you do not have to worry about the ReplicaSets it creates, because the Deployment manages them for you.
(3) When to use a ReplicaSet?
A ReplicaSet ensures that a specified number of Pods are running. However, a Deployment is a higher-level concept that manages ReplicaSets and additionally provides updates and other features for Pods. We therefore recommend using Deployments to manage ReplicaSets, unless you need custom update orchestration.
This means you may never need to operate on ReplicaSet objects directly; manage them through a Deployment instead. This is covered in detail in the Deployment section below.
2.2 Key Fields in a ReplicaSet Resource Manifest
- apiVersion: apps/v1 (API version)
- kind: ReplicaSet (type)
- metadata (object metadata)
- spec (desired state)
  - minReadySeconds: minimum number of seconds a newly created Pod should be ready before it is considered available
  - replicas: replica count; defaults to 1
  - selector: label selector
  - template: Pod template (required), used when creating Pods to make up the replica count
    - metadata: metadata in the template
    - spec: desired state in the template
- status (current state)
2.3 Demonstration: Creating a Simple ReplicaSet
(1) Write the YAML file, then create and start it
Create a simple ReplicaSet that starts 2 Pods:

[root@master chapter5]# cat rs-example.yaml
apiVersion: apps/v1
kind: ReplicaSet                # resource type is ReplicaSet
metadata:
  name: myapp-rs                # ReplicaSet name
  namespace: prod               # namespace to create it in
spec:
  replicas: 2
  selector:
    matchLabels:                # the controller's label selector
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod          # the Pod label here must match the selector above
    spec:
      containers:
      - name: myapp             # container name
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80

(2) Create it and verify

[root@master chapter5]# kubectl apply -f rs-example.yaml      # create the ReplicaSet
replicaset.apps/myapp-rs created
[root@master chapter5]# kubectl get rs -n prod                # list ReplicaSets
NAME       DESIRED   CURRENT   READY   AGE
myapp-rs   2         2         2       21s                    # two Pods are running under myapp-rs
[root@master chapter5]# kubectl describe rs myapp-rs -n prod  # show details
Name:         myapp-rs
Namespace:    prod
Selector:     app=myapp-pod        # label selector
Labels:       <none>
Annotations:
Replicas:     2 current / 2 desired
Pods Status:  2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=myapp-pod           # Pod labels
  Containers:
   myapp:                          # container name
    Image:      ikubernetes/myapp:v1
    Port:       80/TCP
    Host Port:  0/TCP
(3) The Pod-generation principle: "remove extras, create replacements"
① If a Pod is deleted, a new one is immediately rebuilt to replace it

[root@master manifests]# kubectl delete pods myapp-zjc5l
pod "myapp-zjc5l" deleted
[root@master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-r4ss4   1/1     Running   0          33s
myapp-mdjvh   1/1     Running   0          10s
② If another Pod happens to match the RS's label selector, the RS will randomly kill one Pod carrying that label

--- start an arbitrary pod
[root@master manifests]# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
myapp-hxgbh   1/1     Running   0          7m    app=myapp,environment=qa,release=canary
myapp-mdjvh   1/1     Running   0          6m    app=myapp,environment=qa,release=canary
pod-test      1/1     Running   0          13s   app=myapp,tier=frontend
--- add the release=canary label to pod-test
[root@master manifests]# kubectl label pods pod-test release=canary
pod/pod-test labeled
--- one pod matching the selector is terminated at random
[root@master manifests]# kubectl get pods --show-labels
NAME          READY   STATUS        RESTARTS   AGE   LABELS
myapp-hxgbh   1/1     Running       0          8m    app=myapp,environment=qa,release=canary
myapp-mdjvh   1/1     Running       0          7m    app=myapp,environment=qa,release=canary
pod-test      0/1     Terminating   0          1m    app=myapp,release=canary,tier=frontend
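The "remove extras, create replacements" behavior shown above is a simple reconcile rule. As a minimal sketch (the counts are hypothetical), this is the comparison the controller performs on every loop:

```shell
# reconcile sketch: compare the desired replica count with the current one
desired=2    # from spec.replicas
current=3    # pods currently matching the label selector
if [ "$current" -gt "$desired" ]; then
  echo "scale down: delete $((current - desired)) pod(s)"
elif [ "$current" -lt "$desired" ]; then
  echo "scale up: create $((desired - current)) pod(s) from the template"
else
  echo "in sync: nothing to do"
fi
```

With current=3 and desired=2 the rule deletes one Pod, which is exactly what happened when pod-test picked up the matching label.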
2.4 ReplicaSet Dynamic Scale-Up and Scale-Down
Scaling the Pods up dynamically
1. To scale up, simply increase the replicas value:

[root@master chapter5]# cat rs-example.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
  namespace: prod
spec:
  replicas: 6          # to scale, only this value needs to change
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80

[root@master chapter5]# kubectl apply -f rs-example.yaml   # apply the new replica count
[root@master chapter5]# kubectl get pods -n prod           # there are now 6 pods
NAME             READY   STATUS    RESTARTS   AGE
myapp-rs-6gsdr   1/1     Running   0          17s
myapp-rs-bqxdx   1/1     Running   0          12m
myapp-rs-hscwt   1/1     Running   0          12m
myapp-rs-kkz4x   1/1     Running   0          17s
myapp-rs-kkzq7   1/1     Running   0          17s
myapp-rs-ng2fb   1/1     Running   0          17s
pod-demo         2/2     Running   2          21h
[root@master chapter5]# kubectl get pods -n prod -o wide   # show pod details
NAME             READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
myapp-rs-6gsdr   1/1     Running   0          30s   10.224.2.18   node2   <none>           <none>
myapp-rs-bqxdx   1/1     Running   0          12m   10.224.1.17   node1   <none>           <none>
myapp-rs-hscwt   1/1     Running   0          12m   10.224.1.16   node1   <none>           <none>
myapp-rs-kkz4x   1/1     Running   0          30s   10.224.2.19   node2   <none>           <none>
myapp-rs-kkzq7   1/1     Running   0          30s   10.224.1.18   node1   <none>           <none>
myapp-rs-ng2fb   1/1     Running   0          30s   10.224.1.19   node1   <none>           <none>
pod-demo         2/2     Running   2          21h   10.224.1.14   node1   <none>           <none>
2. Scaling the Pods down with a command
Use the scale subcommand to shrink the replica count; the same command can also scale up, simply by setting a larger replicas value.

[root@master chapter5]# kubectl scale --replicas=3 rs myapp-rs -n prod   # change the replica count to 3
replicaset.apps/myapp-rs scaled
[root@master chapter5]# kubectl get pods -n prod                         # only three pods remain
NAME             READY   STATUS    RESTARTS   AGE
myapp-rs-bqxdx   1/1     Running   0          17m
myapp-rs-hscwt   1/1     Running   0          17m
myapp-rs-ng2fb   1/1     Running   0          5m39s
pod-demo         2/2     Running   2          22h
2.5 ReplicaSet In-Place Version Upgrade
1. Modify the ReplicaSet's YAML file to upgrade the version:

[root@master chapter5]# cat rs-example.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
  namespace: prod
spec:
  replicas: 1          # reduce to 1 replica first, to make testing easier
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2   # change the image in the template to v2
        ports:
        - name: http
          containerPort: 80

2. However, applying the change alone does not upgrade the running Pods.
The Pods must be deleted; the replacements that are then created automatically will run the new version.
This allows a rough form of canary release: delete one Pod, and a Pod running the upgraded version starts automatically in its place.

[root@master chapter5]# kubectl apply -f rs-example.yaml
[root@master chapter5]# kubectl get pods -n prod          # check the current pods
NAME             READY   STATUS    RESTARTS   AGE
myapp-rs-hscwt   1/1     Running   0          45m
pod-demo         2/2     Running   2          22h
--- accessing the not-yet-deleted pod still returns v1
[root@master manifests]# curl 10.244.2.15
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master chapter5]# kubectl delete pod -n prod myapp-rs-hscwt   # delete the v1 pod; its replacement runs the new version
pod "myapp-rs-hscwt" deleted
[root@master chapter5]# kubectl get pods -n prod -o wide            # the new container's IP is 10.224.2.22
NAME             READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
myapp-rs-ltqjt   1/1     Running   0          5m53s   10.224.2.22   node2   <none>           <none>
pod-demo         2/2     Running   2          22h     10.224.1.14   node1   <none>           <none>
[root@master chapter5]# kubectl get rs -n prod -o wide              # the ReplicaSet now runs image v2
NAME       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
myapp-rs   1         1         1       52m   myapp        ikubernetes/myapp:v2   app=myapp-pod
[root@master chapter5]# curl 10.224.2.22                            # the page now serves the v2 code
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

Clean up before the next section:
[root@master manifests]# kubectl delete all --all -n prod
3. Deployment
3.1 Deployment Overview
(1) Introduction
A Deployment provides a declarative way to define Pods and ReplicaSets, replacing the older ReplicationController as a convenient way to manage applications.
You only describe the target state you want in the Deployment, and the Deployment controller changes the actual state of the Pods and ReplicaSets to that target state. You can define a brand-new Deployment to create a ReplicaSet, or delete an existing Deployment and create a new one to replace it.
Note: you should not manually manage the ReplicaSets created by a Deployment; doing so oversteps the Deployment controller's responsibilities!
(2) Typical use cases
- Use a Deployment to create a ReplicaSet. The ReplicaSet creates the Pods in the background. Check the rollout status to see whether it succeeded or failed.
- Declare a new state for the Pods by updating the Deployment's PodTemplateSpec. This creates a new ReplicaSet, and the Deployment moves Pods from the old ReplicaSet to the new one at a controlled rate.
- Rolling upgrades and rollbacks: if the current state is unstable, roll back to an earlier Deployment revision. Each rollback updates the Deployment's revision.
- Scaling up and down: scale the Deployment up to handle more load.
- Pausing and resuming: pause the Deployment to apply multiple fixes to the PodTemplateSpec, then resume to roll them out together.
- Use the Deployment's status to determine whether a rollout is stuck.
- Clean up old ReplicaSets that are no longer needed.
3.2 Key Fields in a Deployment Resource Manifest
- apiVersion: apps/v1 (API version)
- kind: Deployment (type)
- metadata (object metadata)
- spec (desired state)
  - -------------- fields shared with ReplicaSet ---------------
  - minReadySeconds: minimum number of seconds a newly created Pod should be ready before it is considered available
  - replicas: replica count; defaults to 1
  - selector: label selector
  - template: Pod template (required)
    - metadata: metadata in the template
    - spec: desired state in the template
  - -------------- fields specific to Deployment ---------------
  - strategy: update strategy; how existing Pods are replaced with new ones
    - Recreate: kill all existing Pods, then create new ones
    - RollingUpdate: rolling update
      - maxSurge: maximum number of Pods that can be scheduled above the desired count, e.g. 5 or 10%
      - maxUnavailable: maximum number of Pods that can be unavailable during the update
  - revisionHistoryLimit: number of old ReplicaSets to retain to allow rollback; defaults to 10
  - paused: indicates that the Deployment is paused and should not be processed by the Deployment controller
  - progressDeadlineSeconds: maximum time the Deployment may take to make progress before it is considered failed
- status (current state)
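As a sketch, the Deployment-specific fields above slot into a manifest like this (the values are illustrative, not recommendations):

```yaml
spec:
  replicas: 3
  minReadySeconds: 5
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # up to 25% extra pods may exist during an update
      maxUnavailable: 25%  # up to 25% of pods may be unavailable during an update
```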
3.3 Demonstration: Creating a Simple Deployment
(1) Write a simple Deployment manifest that starts 2 Pods; the same file is updated in the following sections

[root@master manifests]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80

(2) Create it and verify

[root@master manifests]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy created
[root@master manifests]# kubectl get pods   # the pod names carry the Deployment name plus the ReplicaSet's template hash
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-bm8zc   1/1     Running   0          1m
myapp-deploy-69b47bc96d-pjr5v   1/1     Running   0          1m
3.4 Deployment Dynamic Scale-Up and Scale-Down
There are two ways to do this.
(1) Method 1: edit the YAML file directly, changing the replica count to 3

[root@master manifests]# vim deploy-demo.yaml
... ...
spec:
  replicas: 3
... ...
[root@master manifests]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy configured
Verify: there are now 3 Pods

[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-bcdnq   1/1     Running   0          25s
myapp-deploy-69b47bc96d-bm8zc   1/1     Running   0          2m
myapp-deploy-69b47bc96d-pjr5v   1/1     Running   0          2m
(2) Method 2: scale by patching with the kubectl patch command
Unlike method 1, this does not modify the YAML file, which is convenient for everyday testing;
however, the inline JSON format is fiddly and easy to get wrong.

[root@master manifests]# kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'
deployment.extensions/myapp-deploy patched
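Because the inline JSON is easy to mistype, one way to guard against that (assuming python3 is available on the host) is to validate the payload before handing it to kubectl patch:

```shell
# sanity-check the patch payload before using it with kubectl patch
patch='{"spec":{"replicas":5}}'
if echo "$patch" | python3 -m json.tool > /dev/null 2>&1; then
  echo "patch is valid JSON"
else
  echo "patch is malformed" >&2
fi
```

A shell syntax error in the quoting (for example a lost closing brace) fails the check immediately, instead of producing a confusing error from the API server.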
Verify: there are now 5 Pods

[root@master ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-67f6f6b4dc-2756p   1/1     Running   0          26s
myapp-deploy-67f6f6b4dc-2lkwr   1/1     Running   0          26s
myapp-deploy-67f6f6b4dc-knttd   1/1     Running   0          21m
myapp-deploy-67f6f6b4dc-ms7t2   1/1     Running   0          21m
myapp-deploy-67f6f6b4dc-vl2th   1/1     Running   0          21m
3.5 Deployment In-Place Version Upgrade
(1) Edit the deploy-demo.yaml configuration file directly. When the image version is changed and the file is re-applied with apply, a new rollout starts; changing only the replicas count does not trigger a version update.

[root@master manifests]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-deploy
  namespace: prod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80

(2) Re-apply the manifest and watch the version upgrade in real time

[root@master manifests]# kubectl apply -f deploy-demo.yaml   # trigger the rollout
deployment.apps/ngx-deploy configured
[root@master manifests]# kubectl get pods -n prod -w         # follow the new pods being created

You can see this is a rolling upgrade: one old Pod is stopped, a new (upgraded) one is started, then the next old Pod is stopped, and so on.
(3) List all the Pods: two Pods are running, and accessing either of them now returns the v2 version

[root@master manifests]# kubectl get pods -n prod -o wide    # pod details; two pods are running
NAME                         READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
ngx-deploy-559ff5c66-7fbgg   1/1     Running   0          3m42s   10.224.2.24   node2   <none>           <none>
ngx-deploy-559ff5c66-q65bq   1/1     Running   0          3m44s   10.224.1.24   node1   <none>           <none>
[root@master manifests]# curl 10.224.2.24                    # the service now returns v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@master manifests]# curl 10.224.1.24
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
3.6 Changing a Deployment's Update Strategy
(1) Method 1: edit the YAML file

[root@master manifests]# vim deploy-demo.yaml
... ...
  strategy:
    rollingUpdate:
      maxSurge: 1          # update one extra pod at a time
      maxUnavailable: 0    # no pod may become unavailable
... ...

(2) Method 2: patch the update strategy

[root@master manifests]# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
deployment.extensions/myapp-deploy patched

(3) Verify: check the details

[root@master manifests]# kubectl describe deployment myapp-deploy
... ...
RollingUpdateStrategy:  0 max unavailable, 1 max surge
... ...
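The pod-count bounds during a rollout follow directly from replicas, maxSurge, and maxUnavailable. A quick sketch of the arithmetic for a 5-replica Deployment with the values just set:

```shell
# rolling-update bounds:
#   at most  replicas + maxSurge        pods exist at any moment
#   at least replicas - maxUnavailable  pods stay available
replicas=5
maxSurge=1
maxUnavailable=0
echo "at most $((replicas + maxSurge)) pods exist during the update"
echo "at least $((replicas - maxUnavailable)) pods stay available"
```

With maxSurge=1 and maxUnavailable=0, the Deployment must create each new pod first and wait for it to become ready before terminating an old one, which is why this combination gives a zero-downtime, one-at-a-time rollout.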
(4) Upgrade to v3
① Canary release: update one Pod first, then pause immediately; once the new version runs without problems, continue the rollout

[root@master manifests]# kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy
deployment.extensions/myapp-deploy image updated   # one pod is updated
deployment.extensions/myapp-deploy paused          # the rollout is paused

② Once the new version has proven stable, resume and finish the rollout

[root@master manifests]# kubectl rollout resume deployment myapp-deploy
deployment.extensions/myapp-deploy resumed

③ The whole process can be monitored throughout

[root@master ~]# kubectl rollout status deployment myapp-deploy   # print rollout progress
Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 2 out of 5 new replicas have been updated...
--- you can also watch the pods during the update
[root@master ~]# kubectl get pods -w

④ Verify: access any Pod's service; the version upgrade succeeded

[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-deploy-6bdcd6755d-2bnsl   1/1     Running   0          1m    10.244.1.77   node1
[root@master ~]# curl 10.244.1.77
Hello MyApp | Version: v3 | <a href="hostname.html">Pod Name</a>
3.7 Deployment Version Rollback
Increasing or decreasing the Pod count does not trigger a rolling update of the code; a rolling update only happens when the Pod template changes. Changing the image version changes the template, so it always triggers a version update.
(1) Commands
Show the rollout revision history:
# kubectl rollout history deployment deployment_name
Roll back with undo; --to-revision=N rolls back to revision N (the default, 0, rolls back to the previous revision):
# kubectl rollout undo deployment deployment_name --to-revision=N
Demonstration:
(1) View the history, then roll back. To return to the first revision, just append --to-revision=1:

[root@master manifests]# kubectl rollout history deployment/ngx-deploy -n prod   # show the revision history
deployment.apps/ngx-deploy
REVISION  CHANGE-CAUSE
3         <none>
5         <none>
6         kubectl apply --filename=deploy-demo.yaml --record=true
[root@master manifests]# kubectl rollout undo deployment/ngx-deploy -n prod      # roll back to the previous revision
deployment.apps/ngx-deploy rolled back

(2) Check the rollback state: the controller is now on the v2 ReplicaSet

[root@master manifests]# kubectl get rs -n prod -o wide
NAME                    DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
ngx-deploy-559ff5c66    2         2         2       51m   myapp        ikubernetes/myapp:v2   app=myapp,pod-template-hash=559ff5c66,release=canary
ngx-deploy-65fb6c8459   0         0         0       61m   myapp        ikubernetes/myapp:v1   app=myapp,pod-template-hash=65fb6c8459,release=canary
ngx-deploy-6b9865d969   0         0         0       14m   myapp        ikubernetes/myapp:v3   app=myapp,pod-template-hash=6b9865d969,release=canary
4. DaemonSet
4.1 DaemonSet Overview
(1) Introduction
A DaemonSet guarantees that one replica of a container runs on every node. It is commonly used to deploy cluster-wide logging, monitoring, or other system management applications.
(2) Typical applications include
- Log collection, e.g. fluentd, logstash
- Node monitoring, e.g. Prometheus Node Exporter, collectd, New Relic agent, Ganglia gmond
- System programs, e.g. kube-proxy, kube-dns, glusterd, ceph
4.2 Key Fields in a DaemonSet Resource Manifest
- apiVersion: apps/v1 (API version)
- kind: DaemonSet (type)
- metadata (object metadata)
- spec (desired state)
  - -------------- fields shared with ReplicaSet ---------------
  - minReadySeconds: minimum number of seconds a newly created Pod should be ready before it is considered available
  - selector: label selector
  - template: Pod template (required)
    - metadata: metadata in the template
    - spec: desired state in the template
  - -------------- fields specific to DaemonSet ---------------
  - revisionHistoryLimit: number of old revisions to retain to allow rollback; defaults to 10
  - updateStrategy: strategy for replacing existing DaemonSet Pods with new ones
- status (current state)
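For reference, a sketch of how updateStrategy might look inside a DaemonSet manifest (the values are illustrative; RollingUpdate is the default type):

```yaml
spec:
  revisionHistoryLimit: 10
  updateStrategy:
    type: RollingUpdate      # or OnDelete: pods update only when manually deleted
    rollingUpdate:
      maxUnavailable: 1      # replace at most one node's pod at a time
```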
4.3 Demonstration: Creating a Simple DaemonSet
(1) Write and create a simple DaemonSet whose Pods run filebeat in the background to collect logs

[root@master manifests]# vim ds-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
[root@master manifests]# kubectl apply -f ds-demo.yaml
daemonset.apps/filebeat-ds created
(2) Verify

[root@master ~]# kubectl get ds
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
filebeat-ds   2         2         2       2            2           <none>          6m
[root@master ~]# kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
filebeat-ds-r25hh   1/1     Running   0          4m
filebeat-ds-vvntb   1/1     Running   0          4m
[root@master ~]# kubectl exec -it filebeat-ds-r25hh -- /bin/sh
/ # ps aux
PID   USER     TIME   COMMAND
    1 root     0:00   /usr/local/bin/filebeat -e -c /etc/filebeat/filebeat.yml
4.4 DaemonSet Rolling Version Upgrade
(1) Use the kubectl set image command to update the Pods' image and upgrade the version

[root@master ~]# kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine
daemonset.extensions/filebeat-ds image updated
(2) Verify: the upgrade succeeded

[root@master ~]# kubectl get ds -o wide
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES                              SELECTOR
filebeat-ds   2         2         2       2            2           <none>          7m    filebeat     ikubernetes/filebeat:5.6.6-alpine   app=filebeat,release=stable
5. StatefulSet
5.1 Getting to Know StatefulSet
(1) Introduction
StatefulSet was designed to solve the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Its use cases include:
- Stable persistent storage: after a Pod is rescheduled it can still access the same persisted data, implemented with PVCs
- Stable network identity: after a Pod is rescheduled its PodName and HostName stay the same, implemented with a Headless Service (a Service without a Cluster IP)
- Ordered deployment and ordered scale-up: Pods are ordered, and deployment or scale-up proceeds in the defined order (from 0 to N-1; all preceding Pods must be Running and Ready before the next one starts), implemented with init containers
- Ordered scale-down and ordered deletion (from N-1 to 0)
- Ordered, automatic rolling updates
(2) Three required components
From the use cases above, a StatefulSet consists of the following parts:
- a Headless Service that defines the network identity (DNS domain)
- the StatefulSet controller that defines the actual application
- volumeClaimTemplates, storage-volume templates used to create PersistentVolumeClaims
The Headless Service generates resolvable DNS resource records for the Pod identifiers, the StatefulSet manages the Pods, and volumeClaimTemplates provide dedicated, fixed storage for the Pods through statically or dynamically provisioned PVs.
Since version 1.7, Kubernetes also supports user-defined update strategies. It remains compatible with the delete-then-update (OnDelete) strategy of earlier versions and adds a rolling-update (RollingUpdate) strategy. OnDelete means Pods are not updated automatically; only deleting a Pod triggers its recreation with the new template. RollingUpdate, the default, supports automatic rolling updates of Pods: the update order is the same as the termination order, starting from the Pod with the highest ordinal, terminating one Pod and completing its update before moving on to the next.
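The stable network identity means each Pod gets a predictable DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local, resolvable through the headless service. A small sketch of the names a 3-replica StatefulSet called myapp would get (assuming the default cluster domain cluster.local):

```shell
# print the stable DNS names a StatefulSet's pods receive
sts=myapp; service=myapp; namespace=default; replicas=3
for i in $(seq 0 $((replicas - 1))); do
  echo "${sts}-${i}.${service}.${namespace}.svc.cluster.local"
done
```

Clients inside the cluster can address a specific replica (for example the writable primary of a database) by this name even after the Pod is rescheduled to another node.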
5.2 Creating Pods with a StatefulSet
5.2.1 Install the nfs server and export the shared directories
1. Install the nfs service on every k8s cluster node and on the nfs server, and share the required directories
(1) On the nfs server, first create the directories backing the volumes

[root@nfs ~]# cd /data/volumes/
[root@nfs volumes]# mkdir v{1,2,3,4,5}
[root@nfs volumes]# ls
index.html  v1  v2  v3  v4  v5
[root@nfs volumes]# echo "<h1>NFS stor 01</h1>" > v1/index.html
[root@nfs volumes]# echo "<h1>NFS stor 02</h1>" > v2/index.html
[root@nfs volumes]# echo "<h1>NFS stor 03</h1>" > v3/index.html
[root@nfs volumes]# echo "<h1>NFS stor 04</h1>" > v4/index.html
[root@nfs volumes]# echo "<h1>NFS stor 05</h1>" > v5/index.html
(2) Edit the nfs exports configuration

[root@nfs volumes]# vim /etc/exports
/data/volumes/v1  192.168.130.0/24(rw,no_root_squash)
/data/volumes/v2  192.168.130.0/24(rw,no_root_squash)
/data/volumes/v3  192.168.130.0/24(rw,no_root_squash)
/data/volumes/v4  192.168.130.0/24(rw,no_root_squash)
/data/volumes/v5  192.168.130.0/24(rw,no_root_squash)
(3) Re-export and view the nfs configuration

[root@nfs volumes]# exportfs -arv
exporting 192.168.130.0/24:/data/volumes/v5
exporting 192.168.130.0/24:/data/volumes/v4
exporting 192.168.130.0/24:/data/volumes/v3
exporting 192.168.130.0/24:/data/volumes/v2
exporting 192.168.130.0/24:/data/volumes/v1
(4) Confirm the exports are in effect

[root@nfs volumes]# showmount -e
Export list for nfs:
/data/volumes/v5 192.168.130.0/24
/data/volumes/v4 192.168.130.0/24
/data/volumes/v3 192.168.130.0/24
/data/volumes/v2 192.168.130.0/24
/data/volumes/v1 192.168.130.0/24
(5) Start the nfs server
# systemctl start nfs
5.2.2 Prepare the PVs
Create 5 PVs backed by the nfs service. For details on PVs and PVCs, see: https://www.cnblogs.com/struggle-1216/p/13423505.html

[root@master volume]# vim pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce","ReadOnlyMany"]
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain   # reclaim policy is Retain
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce","ReadOnlyMany"]
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce","ReadOnlyMany"]
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce","ReadOnlyMany"]
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce","ReadOnlyMany"]
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 15Gi

[root@master volume]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   5Gi        RWO,RWX        Retain           Available                                   3s
pv002   5Gi        RWO            Retain           Available                                   3s
pv003   5Gi        RWO,RWX        Retain           Available                                   3s
pv004   10Gi       RWO,RWX        Retain           Available                                   3s
pv005   15Gi       RWO,RWX        Retain           Available                                   3s
5.2.3 Write the StatefulSet resource manifest and create it

[root@master pod_controller]# vim statefulset-demo.yaml
# Headless Service
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
# StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  # volumeClaimTemplates
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
[root@master pod_controller]# kubectl apply -f statefulset-demo.yaml
service/myapp created
statefulset.apps/myapp created
5.2.4 Query and verify the Pods

--- the headless service was created successfully
[root@master pod_controller]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   173d
myapp        ClusterIP   None         <none>        80/TCP    3s
--- the statefulset was created successfully
[root@master pod_controller]# kubectl get sts
NAME    DESIRED   CURRENT   AGE
myapp   3         3         6s
--- the PVCs have successfully bound to PVs
[root@master pod_controller]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           9s
myappdata-myapp-1   Bound    pv001    5Gi        RWO,RWX                       8s
myappdata-myapp-2   Bound    pv003    5Gi        RWO,RWX                       6s
--- three of the PVs are now bound
[root@master pod_controller]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                       STORAGECLASS   REASON   AGE
pv001   5Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-1                           21s
pv002   5Gi        RWO            Retain           Bound       default/myappdata-myapp-0                           21s
pv003   5Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-2                           21s
pv004   10Gi       RWO,RWX        Retain           Available                                                       21s
pv005   15Gi       RWO,RWX        Retain           Available                                                       21s
--- three pods have started
[root@master pod_controller]# kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE
myapp-0   1/1     Running   0          16s   10.244.1.127   node1
myapp-1   1/1     Running   0          15s   10.244.2.124   node2
myapp-2   1/1     Running   0          13s   10.244.1.128   node1
5.3 StatefulSet Dynamic Scale-Up and Scale-Down
You can scale either with the scale command or by patching with patch.
5.3.1 Scaling up
Scale from the original 3 Pods to 5:

--- ① with the scale command
[root@master ~]# kubectl scale sts myapp --replicas=5
statefulset.apps/myapp scaled
--- ② or by patching
[root@master ~]# kubectl patch sts myapp -p '{"spec":{"replicas":5}}'
statefulset.apps/myapp patched
[root@master pod_controller]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          11m
myapp-1   1/1     Running   0          11m
myapp-2   1/1     Running   0          11m
myapp-3   1/1     Running   0          9s
myapp-4   1/1     Running   0          7s
[root@master pod_controller]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           11m
myappdata-myapp-1   Bound    pv001    5Gi        RWO,RWX                       11m
myappdata-myapp-2   Bound    pv003    5Gi        RWO,RWX                       11m
myappdata-myapp-3   Bound    pv004    10Gi       RWO,RWX                       13s
myappdata-myapp-4   Bound    pv005    15Gi       RWO,RWX                       11s
[root@master pod_controller]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
pv001   5Gi        RWO,RWX        Retain           Bound    default/myappdata-myapp-1                           17m
pv002   5Gi        RWO            Retain           Bound    default/myappdata-myapp-0                           17m
pv003   5Gi        RWO,RWX        Retain           Bound    default/myappdata-myapp-2                           17m
pv004   10Gi       RWO,RWX        Retain           Bound    default/myappdata-myapp-3                           17m
pv005   15Gi       RWO,RWX        Retain           Bound    default/myappdata-myapp-4                           17m
5.3.2 Scaling down
Scale down from 5 Pods to 2:

--- ① with the scale command
[root@master ~]# kubectl scale sts myapp --replicas=2
statefulset.apps/myapp scaled
--- ② or by patching
[root@master ~]# kubectl patch sts myapp -p '{"spec":{"replicas":2}}'
statefulset.apps/myapp patched
[root@master pod_controller]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          15m
myapp-1   1/1     Running   0          15m
--- but the PVs and PVCs are not deleted, which is how the persisted data survives
[root@master pod_controller]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           15m
myappdata-myapp-1   Bound    pv001    5Gi        RWO,RWX                       15m
myappdata-myapp-2   Bound    pv003    5Gi        RWO,RWX                       15m
myappdata-myapp-3   Bound    pv004    10Gi       RWO,RWX                       4m
myappdata-myapp-4   Bound    pv005    15Gi       RWO,RWX                       4m
5.4 Version Upgrade
5.4.1 Upgrade configuration: partitioned updates with rollingUpdate.partition

[root@master ~]# kubectl explain sts.spec.updateStrategy.rollingUpdate.partition
KIND:     StatefulSet
VERSION:  apps/v1
FIELD:    partition <integer>
DESCRIPTION:
     Partition indicates the ordinal at which the StatefulSet should be
     partitioned. Default value is 0.

Explanation: with partition set to n, only Pods whose ordinal index is greater than or equal to n are updated; n refers to the Pod's ordinal, and the default of 0 updates all Pods.
The upgrade can be done either by editing the YAML manifest or by patching.
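The partition rule above can be sketched as: given replicas and partition, only ordinals >= partition are rolled to the new template (numbers match the 5-replica example that follows):

```shell
# which pods a partitioned rolling update touches
replicas=5
partition=4
for i in $(seq 0 $((replicas - 1))); do
  if [ "$i" -ge "$partition" ]; then
    echo "myapp-${i}: rolled to the new template"
  else
    echo "myapp-${i}: left on the old template"
  fi
done
```

With partition=4, only myapp-4 is updated; lowering partition step by step widens the rollout, and setting it to 0 finishes it.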
5.4.2 Performing a "Canary" Upgrade
(1) Upgrade one Pod first
First scale the StatefulSet back to 5 Pods.
① Patch partition to 4 so that only Pods with ordinal 4 and above are upgraded; if the new version has problems, roll back immediately, otherwise proceed with a full upgrade

[root@master ~]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
statefulset.apps/myapp patched
--- verify
[root@master ~]# kubectl describe sts myapp
Name:               myapp
Namespace:          default
... ...
Replicas:           5 desired | 5 total
Update Strategy:    RollingUpdate
  Partition:        4
... ...
A second way to achieve the same thing, more cumbersome because it edits the live object's configuration directly:

[root@master statefulset]# kubectl edit sts myapp
  updateStrategy:
    rollingUpdate:
      partition: 4        # change this value and save
② Upgrade, then check whether the Pod image is now the latest version, v2

[root@master ~]# kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
statefulset.apps/myapp image updated
--- the pod image in the template has been switched to v2
[root@master ~]# kubectl get sts -o wide
NAME    DESIRED   CURRENT   AGE   CONTAINERS   IMAGES
myapp   5         5         21h   myapp        ikubernetes/myapp:v2
③ Verify: the Pod with ordinal 4 is now on v2

[root@master statefulset]# kubectl get pods        # all five pods; with partition=4, only ordinal 4 (the fifth pod) is upgraded
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          2m16s
myapp-1   1/1     Running   0          2m19s
myapp-2   1/1     Running   0          2m32s
myapp-3   1/1     Running   0          2m39s
myapp-4   1/1     Running   0          4m21s

# the pod at ordinal 4 has been upgraded
[root@master ~]# kubectl get pods myapp-4 -o yaml | grep image
  - image: ikubernetes/myapp:v2
# the first four pods are still on v1
[root@master ~]# kubectl get pods myapp-3 -o yaml | grep image
  - image: ikubernetes/myapp:v1
[root@master ~]# kubectl get pods myapp-0 -o yaml | grep image
  - image: ikubernetes/myapp:v1
(2) Once the canary release above has proven successful, upgrade the remaining Pods by simply setting partition back to 0

--- just set partition to 0
[root@master ~]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
statefulset.apps/myapp patched
--- verify: all pods have been upgraded
[root@master ~]# kubectl get pods myapp-0 -o yaml | grep image
  - image: ikubernetes/myapp:v2
[root@master statefulset]# kubectl get pods myapp-1 -o yaml | grep image
  - image: ikubernetes/myapp:v2
[root@master ~]# kubectl get pods myapp-2 -o yaml | grep image
  - image: ikubernetes/myapp:v2
[root@master ~]# kubectl get pods myapp-3 -o yaml | grep image
  - image: ikubernetes/myapp:v2
[root@master ~]# kubectl get pods myapp-4 -o yaml | grep image
  - image: ikubernetes/myapp:v2