Kubernetes Deployment: Hands-On Notes

Deployment

Deployments are used to run stateless services. You generally do not manage the Pods or the ReplicaSet directly; the Deployment manages them for you.

Creating a Deployment

Deployment manifest

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx # label on the Deployment itself
spec:
  replicas: 3 # number of replicas
  selector:
    matchLabels:
      app: nginx # defines how the Deployment finds the Pods it manages
  template:
    metadata:
      labels:
        app: nginx # label on the Pods created from this template
    spec:
      containers:
      - name: nginx # container name
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Create it from the manifest

[root@master01 deployment]# ls
nginx-deployment.yaml
[root@master01 deployment]# kubectl apply -f nginx-deployment.yaml 
deployment.apps/nginx-deployment created

Check the creation status

[root@master01 deployment]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/3     3            2           30s
You have new mail in /var/spool/mail/root
[root@master01 deployment]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/3     3            2           33s
[root@master01 deployment]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/3     3            2           34s
[root@master01 deployment]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           35s

Check the Deployment rollout status

[root@master01 deployment]# kubectl rollout status deployment nginx-deployment
deployment "nginx-deployment" successfully rolled out

View the ReplicaSet (rs) created by the Deployment

[root@master01 deployment]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-66b6c48dd5   3         3         3       3m43s 

Note that the ReplicaSet name is always formatted as [Deployment name]-[hash], and the hash string matches the pod-template-hash label on the ReplicaSet.

[root@master01 deployment]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   1          2d20h
nginx-deployment-66b6c48dd5-d5snq   1/1     Running   0          4m59s
nginx-deployment-66b6c48dd5-rh5v9   1/1     Running   0          4m59s
nginx-deployment-66b6c48dd5-rqzrf   1/1     Running   0          4m59s

View the labels automatically generated on each Pod

[root@master01 deployment]# kubectl get pods --show-labels
NAME                                READY   STATUS    RESTARTS   AGE     LABELS
nginx                               1/1     Running   1          2d20h   app=nginx,role=frontend
nginx-deployment-66b6c48dd5-d5snq   1/1     Running   0          6m8s    app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-rh5v9   1/1     Running   0          6m8s    app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-rqzrf   1/1     Running   0          6m8s    app=nginx,pod-template-hash=66b6c48dd5

Do not change the pod-template-hash label.

Updating a Deployment

A rollout is triggered only when the Deployment's Pod template (i.e. .spec.template) changes, for example when the template's labels or container image are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.

Update methods

  1. kubectl set
  2. kubectl edit
  3. kubectl apply -f nginx-deployment.yaml

Use --record to record the update information

[root@master01 deployment]# kubectl set image deployment nginx-deployment nginx=nginx:1.16.1 --record
deployment.apps/nginx-deployment image updated
You have new mail in /var/spool/mail/root
[root@master01 deployment]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
3         <none>
4         <none>
5         kubectl set image deployment nginx-deployment nginx=nginx:1.16.1 --record=true

[root@master01 deployment]# kubectl set image deployment nginx-deployment nginx=nginx:1.16.1
deployment.apps/nginx-deployment image updated

[root@master01 ~]# kubectl rollout status deployment nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out
[root@master01 deployment]# kubectl edit deployment nginx-deployment 
Edit cancelled, no changes made.

[root@master01 deployment]# kubectl apply -f nginx-deployment.yaml 
deployment.apps/nginx-deployment configured

[root@master01 deployment]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-559d658b74   0         0         0       4m17s
nginx-deployment-5787596d54   3         3         2       43s
nginx-deployment-66b6c48dd5   1         1         1       18m

[root@master01 ~]# kubectl rollout status deployment nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out


[root@master01 deployment]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   1          2d20h
nginx-deployment-5787596d54-bxjd7   1/1     Running   0          74s
nginx-deployment-5787596d54-qf8mv   1/1     Running   0          55s
nginx-deployment-5787596d54-sc5n9   1/1     Running   0          90s

Characteristics

  1. A Deployment ensures that only a certain number of Pods are taken down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).
  2. A Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge); see the strategy sketch below.
  3. If you watch the Deployment above closely, you will see that it first creates a new Pod, then deletes old Pods, and then creates new ones. It does not kill old Pods until a sufficient number of new Pods have come up, and it does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that at least 3 Pods are available and that at most 4 Pods in total are available. With a Deployment of 4 replicas, the number of Pods would be between 3 and 5.
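
These defaults come from the rolling-update strategy fields of the Deployment spec. A minimal sketch, where the values shown are the defaults:

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # at most 25% of the desired Pods may be unavailable during an update
      maxSurge: 25%        # at most 25% extra Pods may be created above the desired count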

More information about the Deployment

[root@master01 deployment]# kubectl describe deployments
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Tue, 01 Nov 2022 21:55:08 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 4
Selector:               app=nginx
Replicas:                desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.15.2
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-5787596d54 (3/3 replicas created)
Events:
  Type    Reason             Age                    From                   Message
  ----    ------             ----                   ----                   -------
  Normal  ScalingReplicaSet  20m                    deployment-controller  Scaled up replica set nginx-deployment-66b6c48dd5 to 3
  Normal  ScalingReplicaSet  7m2s                   deployment-controller  Scaled up replica set nginx-deployment-559d658b74 to 1
  Normal  ScalingReplicaSet  6m43s                  deployment-controller  Scaled up replica set nginx-deployment-559d658b74 to 2
  Normal  ScalingReplicaSet  6m29s                  deployment-controller  Scaled down replica set nginx-deployment-66b6c48dd5 to 1
  Normal  ScalingReplicaSet  6m29s                  deployment-controller  Scaled up replica set nginx-deployment-559d658b74 to 3
  Normal  ScalingReplicaSet  4m6s                   deployment-controller  Scaled up replica set nginx-deployment-66b6c48dd5 to 1
  Normal  ScalingReplicaSet  4m4s                   deployment-controller  Scaled down replica set nginx-deployment-559d658b74 to 2
  Normal  ScalingReplicaSet  4m4s                   deployment-controller  Scaled up replica set nginx-deployment-66b6c48dd5 to 2
  Normal  ScalingReplicaSet  3m12s (x2 over 6m43s)  deployment-controller  Scaled down replica set nginx-deployment-66b6c48dd5 to 2
  Normal  ScalingReplicaSet  2m53s (x7 over 4m3s)   deployment-controller  (combined from similar events): Scaled up replica set nginx-deployment-5787596d54 to 3
  Normal  ScalingReplicaSet  2m41s (x2 over 6m15s)  deployment-controller  Scaled down replica set nginx-deployment-66b6c48dd5 to 0

Rollover

When a Deployment is updated, it creates a new ReplicaSet to bring up the new Pods; the new ReplicaSet is eventually scaled up to .spec.replicas replicas and all old ReplicaSets are scaled down to 0 replicas.
If you update the Deployment again while an update is already in progress, the in-progress update is stopped immediately and the new update starts rolling out right away.

For example, suppose you create a Deployment to bring up 5 replicas of nginx:1.14.2, but then update the Deployment to create 5 replicas of nginx:1.16.1 when only 3 replicas of nginx:1.14.2 have been created. In that case, the Deployment immediately starts killing the 3 nginx:1.14.2 Pods it has created and begins creating nginx:1.16.1 Pods. It does not wait for all 5 replicas of nginx:1.14.2 to be created before changing course.
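
A minimal sketch of such a rollover, reusing the Deployment above (commands only, output omitted):

# First update; do not wait for it to finish
kubectl set image deployment nginx-deployment nginx=nginx:1.16.1
# Second update issued while the first rollout is still in progress;
# the controller immediately starts rolling out this version instead
kubectl set image deployment nginx-deployment nginx=nginx:1.15.2
# Watch the ReplicaSets: the intermediate 1.16.1 ReplicaSet is scaled
# down again before it ever reaches the full replica count
kubectl get rs -w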

Rolling back a Deployment

When a Deployment is not stable (for example, it keeps crash looping), you may want to roll back. By default, all of a Deployment's rollout history is kept in the system so that you can roll back at any time.

history

[root@master01 deployment]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl set image deployment nginx-deployment nginx=nginx:1.16.1 --record=true
6         kubectl set image deployment nginx-deployment nginx=nginx:1.15.1 --record=true
7         kubectl set image deployment nginx-deployment nginx=nginx:1.15.2 --record=true

The CHANGE-CAUSE content is copied from the Deployment's kubernetes.io/change-cause annotation.

# Set the CHANGE-CAUSE message as follows
[root@master01 deployment]# kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="test annotate"
deployment.apps/nginx-deployment annotated
You have new mail in /var/spool/mail/root
[root@master01 deployment]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl set image deployment nginx-deployment nginx=nginx:1.16.1 --record=true
6         kubectl set image deployment nginx-deployment nginx=nginx:1.15.1 --record=true
7         test annotate

To view the details of a specific revision

[root@master01 deployment]# kubectl rollout history deployment nginx-deployment --revision=7
deployment.apps/nginx-deployment with revision #7
Pod Template:
  Labels:	app=nginx
	pod-template-hash=5787596d54
  Annotations:	kubernetes.io/change-cause: test annotate
  Containers:
   nginx:
    Image:	nginx:1.15.2
    Port:	80/TCP
    Host Port:	0/TCP
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>

Rolling back to a previous revision

# Undo the current rollout and roll back to a previous revision
[root@master01 deployment]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl set image deployment nginx-deployment nginx=nginx:1.16.1 --record=true
7         test annotate
8         kubectl set image deployment nginx-deployment nginx=nginx:1.15.1 --record=true

# Use --to-revision to roll back to a specific revision
[root@master01 deployment]# kubectl rollout undo deployment nginx-deployment --to-revision=7
deployment.apps/nginx-deployment rolled back
[root@master01 deployment]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl set image deployment nginx-deployment nginx=nginx:1.16.1 --record=true
8         kubectl set image deployment nginx-deployment nginx=nginx:1.15.1 --record=true
9         test annotate

# Check that the rollback succeeded
[root@master01 deployment]# kubectl get deployment nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           40m
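
To confirm which image the rollback landed on, a minimal sketch using a jsonpath query on the first container:

kubectl get deployment nginx-deployment -o jsonpath='{.spec.template.spec.containers[0].image}'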

Scaling a Deployment

[root@master01 deployment]# kubectl scale deployment nginx-deployment --replicas=4
deployment.apps/nginx-deployment scaled
You have new mail in /var/spool/mail/root
[root@master01 deployment]# kubectl get deployment nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   4/4     4            4           41m

[root@master01 deployment]# kubectl scale deployment nginx-deployment --replicas=1
deployment.apps/nginx-deployment scaled
[root@master01 deployment]# kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           43m


Setting up an autoscaler

Assuming horizontal Pod autoscaling is enabled in the cluster:

kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
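
The autoscale command above is roughly equivalent to creating a HorizontalPodAutoscaler object. A minimal sketch of the corresponding manifest (autoscaling/v1, with values matching the command):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:        # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 10
  maxReplicas: 15
  targetCPUUtilizationPercentage: 80   # scale when average CPU usage exceeds 80%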

Proportional scaling

A Deployment with the RollingUpdate strategy supports running multiple versions of an application at the same time. When the autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (in progress or paused), the Deployment controller balances the additional replicas across the existing active ReplicaSets (ReplicaSets that still have Pods) in order to mitigate risk. This is called proportional scaling.
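
A minimal sketch of triggering proportional scaling by hand, assuming a rollout is still in progress (commands only, output omitted):

# Start an image update but do not wait for it to finish
kubectl set image deployment nginx-deployment nginx=nginx:1.16.1
# Scale while the rollout is in progress; the extra replicas are spread
# across the old and new ReplicaSets in proportion to their current sizes
kubectl scale deployment nginx-deployment --replicas=10
kubectl get rs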

Pausing and resuming a Deployment rollout

  • kubectl rollout pause deployment nginx-deployment
  • kubectl rollout resume deployment nginx-deployment

[root@master01 deployment]# kubectl get rs -w
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-559d658b74   0         0         0       33m
nginx-deployment-5787596d54   5         5         5       29m
nginx-deployment-66b6c48dd5   0         0         0       47m
nginx-deployment-69cc985499   0         0         0       14m

[root@master01 ~]# kubectl rollout pause deployment nginx-deployment
deployment.apps/nginx-deployment paused


[root@master01 ~]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
3         <none>
8         kubectl set image deployment nginx-deployment nginx=nginx:1.15.1 --record=true
11        test annotate
12        test annotate


[root@master01 ~]# kubectl rollout resume deployment nginx-deployment
deployment.apps/nginx-deployment resumed

^C[root@master01 deployment]# kubectl get rs -w
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-559d658b74   0         0         0       38m
nginx-deployment-5787596d54   5         5         5       34m
nginx-deployment-66b6c48dd5   0         0         0       51m
nginx-deployment-69cc985499   0         0         0       19m
nginx-deployment-559d658b74   0         0         0       38m
nginx-deployment-559d658b74   2         0         0       39m
nginx-deployment-5787596d54   4         5         5       35m
nginx-deployment-559d658b74   3         0         0       39m
nginx-deployment-5787596d54   4         5         5       35m
nginx-deployment-5787596d54   4         4         4       35m
nginx-deployment-559d658b74   3         0         0       39m
nginx-deployment-559d658b74   3         2         0       39m
nginx-deployment-559d658b74   3         3         0       39m
nginx-deployment-559d658b74   3         3         1       39m
nginx-deployment-5787596d54   3         4         4       35m
nginx-deployment-559d658b74   4         3         1       39m
nginx-deployment-5787596d54   3         4         4       35m
nginx-deployment-5787596d54   3         3         3       35m
nginx-deployment-559d658b74   4         3         1       39m
nginx-deployment-559d658b74   4         4         1       39m
nginx-deployment-559d658b74   4         4         2       39m
nginx-deployment-5787596d54   2         3         3       35m
nginx-deployment-559d658b74   5         4         2       39m
nginx-deployment-5787596d54   2         3         3       35m
nginx-deployment-559d658b74   5         4         2       39m
nginx-deployment-5787596d54   2         2         2       35m
nginx-deployment-559d658b74   5         5         2       39m
nginx-deployment-559d658b74   5         5         3       39m
nginx-deployment-5787596d54   1         2         2       35m
nginx-deployment-5787596d54   1         2         2       35m
nginx-deployment-5787596d54   1         1         1       35m
nginx-deployment-559d658b74   5         5         4       39m
nginx-deployment-5787596d54   0         1         1       36m
nginx-deployment-5787596d54   0         1         1       36m
nginx-deployment-5787596d54   0         0         0       36m
nginx-deployment-559d658b74   5         5         5       39m

Deployment status

A Deployment goes through several states during its lifecycle.

Progressing

Kubernetes marks a Deployment as progressing while one of the following tasks is being performed:

  • The Deployment creates a new ReplicaSet
  • The Deployment is scaling up its newest ReplicaSet
  • The Deployment is scaling down its older ReplicaSet(s)
  • New Pods become ready or available (ready for at least MinReadySeconds)

Complete

  • All of the replicas associated with the Deployment have been updated to the latest version you specified, meaning any updates you requested have completed.
  • All of the replicas associated with the Deployment are available.
  • No old replicas for the Deployment are running.
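
You can check whether a rollout has completed by using kubectl rollout status; if the rollout completed successfully, the command exits with a zero status code. A minimal sketch:

kubectl rollout status deployment nginx-deployment
echo $?   # 0 indicates the rollout completed successfully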

Failed

Some possible factors that can cause this include:

  • Insufficient quota
  • Readiness probe failures
  • Image pull errors
  • Insufficient permissions
  • Limit ranges
  • Application runtime misconfiguration

One way to detect this condition is to specify a deadline parameter in the Deployment spec: .spec.progressDeadlineSeconds.
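
A minimal sketch of setting the deadline (600 seconds is the default, as also shown in the full manifest at the end of this post):

spec:
  progressDeadlineSeconds: 600  # report lack of progress after 600s without any progress

Once the deadline is exceeded, the Deployment controller sets the Progressing condition to status False with reason ProgressDeadlineExceeded; this shows up in kubectl describe deployment, and kubectl rollout status exits with a non-zero code.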

If you pause a Deployment rollout, Kubernetes no longer checks its progress against the specified deadline. You can safely pause a Deployment in the middle of a rollout and resume it later without triggering the deadline condition.

Operating on a failed Deployment

All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up or down, roll back to a previous revision, or even pause it if you need to apply several tweaks to the Deployment's Pod template (see the sketch below).
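
A minimal sketch of pausing to batch several Pod-template changes into a single rollout (the resource values are purely illustrative):

# Pause the rollout so the following changes do not each trigger an update
kubectl rollout pause deployment nginx-deployment
# Apply multiple tweaks to the Pod template
kubectl set image deployment nginx-deployment nginx=nginx:1.16.1
kubectl set resources deployment nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
# Resume; a single rollout now picks up all of the changes above
kubectl rollout resume deployment nginx-deployment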

Clean-up policy

Set the .spec.revisionHistoryLimit field in a Deployment to specify how many old ReplicaSets you want to retain for it. The rest are garbage-collected in the background. By default, this value is 10.
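
A minimal sketch, assuming you only want to keep the three most recent old ReplicaSets:

spec:
  revisionHistoryLimit: 3  # old ReplicaSets beyond the 3 most recent are garbage-collected;
                           # setting this to 0 removes all history and makes rollback impossible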

Canary Deployment

Reference link

Canary deployments distinguish multiple deployments of different versions or different configurations of the same component.
A common practice is to vary the image tag in the Pod template and keep the old and new versions of the application running at the same time. That way, the new version can receive live production traffic before it is fully rolled out (see the sketch below).
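
A minimal sketch of this pattern, assuming a Service that selects app: nginx so that both Deployments receive traffic; the track label and the image tags are illustrative:

# Stable release: the bulk of the replicas, running the current image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-stable
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      track: stable
  template:
    metadata:
      labels:
        app: nginx
        track: stable
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
---
# Canary: a single replica running the new image, receiving a share of live traffic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      track: canary
  template:
    metadata:
      labels:
        app: nginx
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1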

A complete Deployment manifest for reference

[root@master01 ~]# kubectl get deploy nginx-deployment -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "12"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx-deployment","namespace":"default"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx:1.15.2","name":"nginx","ports":[{"containerPort":80}]}]}}}}
    kubernetes.io/change-cause: test annotate
  creationTimestamp: "2022-11-01T13:55:08Z"
  generation: 18
  labels:
    app: nginx
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                .: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":80,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:protocol: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2022-11-01T14:11:53Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:kubernetes.io/change-cause: {}
    manager: kubectl
    operation: Update
    time: "2022-11-01T14:33:42Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:template:
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                f:image: {}
    manager: kubectl-set
    operation: Update
    time: "2022-11-01T14:47:28Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2022-11-01T14:48:32Z"
  name: nginx-deployment
  namespace: default
  resourceVersion: "76861"
  uid: daec152b-f201-4e04-8cab-f70e5f1c2b13
spec:
  progressDeadlineSeconds: 600
  replicas: 5
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.16.1
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 5
  conditions:
  - lastTransitionTime: "2022-11-01T14:40:56Z"
    lastUpdateTime: "2022-11-01T14:40:56Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-11-01T14:48:29Z"
    lastUpdateTime: "2022-11-01T14:48:32Z"
    message: ReplicaSet "nginx-deployment-559d658b74" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 18
  readyReplicas: 5
  replicas: 5
  updatedReplicas: 5

Reference link
