Hands-On: Scaling Out Kubernetes v1.24 Worker Nodes with KubeKey

Introduction

Key Points

  • Difficulty: Beginner
  • Scaling out Worker nodes with KubeKey
  • Basic configuration of the openEuler operating system
  • Basic Kubernetes commands

Server Configuration for This Walkthrough (the architecture is a 1:1 replica of a small production environment; the specs differ slightly)

Hostname      IP             CPU (cores)   Memory (GB)   System Disk (GB)   Data Disk (GB)   Purpose
ks-master-0   192.168.9.91   2             4             50                 100              KubeSphere/k8s-master
ks-master-1   192.168.9.92   2             4             50                 100              KubeSphere/k8s-master
ks-master-2   192.168.9.93   2             4             50                 100              KubeSphere/k8s-master
ks-worker-0   192.168.9.95   2             4             50                 100              k8s-worker/CI
ks-worker-1   192.168.9.96   2             4             50                 100              k8s-worker
ks-worker-2   192.168.9.97   2             4             50                 100              k8s-worker
storage-0     192.168.9.81   2             4             50                 100+             ElasticSearch/GlusterFS/Ceph/Longhorn/NFS
storage-1     192.168.9.82   2             4             50                 100+             ElasticSearch/GlusterFS/Ceph/Longhorn
storage-2     192.168.9.83   2             4             50                 100+             ElasticSearch/GlusterFS/Ceph/Longhorn
registry      192.168.9.80   2             4             50                 200              Sonatype Nexus 3
Total (10)                   20            40            500                1100+

Software Versions Used in This Walkthrough

  • OS: openEuler 22.03 LTS SP2 x86_64
  • KubeSphere: 3.3.2
  • Kubernetes: v1.24.12
  • Containerd: 1.6.4
  • KubeKey: v3.0.8

Overview

This article is an updated version of the earlier walkthrough on scaling out Kubernetes Worker nodes with KubeKey on openEuler 22.03 LTS SP2.

The reasons for the update and the changes are as follows:

  • In later hands-on work we found that Kubernetes v1.26 is too new to natively support GlusterFS as backend storage; the last version series to support it is v1.25.
  • KubeKey has been updated; the official v3.0.8 release supports more Kubernetes versions.
  • Taking all of this into account, we chose Kubernetes v1.24.12 and KubeKey v3.0.8 to update this series of documents.
  • The overall structure of the document has been adjusted slightly, but the changes are minor; only some details differ.

In the previous installment, we walked through using KubeKey, the automation tool developed by the KubeSphere team, to deploy a Kubernetes cluster with 3 Master nodes and 1 Worker node, together with KubeSphere.

In this installment, we simulate a real production environment and demonstrate how to use KubeKey to add Worker nodes to an existing Kubernetes cluster.

Basic Operating System Configuration

The basic OS configuration of the newly added Worker nodes should match the configuration applied to the Worker nodes during the initial deployment.

Configuration notes for the other nodes:

  • All nodes must update the /etc/hosts file, appending the hostname and IP mappings of the newly added Worker nodes to the existing content.
  • On the Master-0 node, send the SSH public key to the newly added Worker nodes.

Configuring the New Worker Nodes

This article uses only the Worker-1 node for the demonstration; configure the remaining new Worker nodes in the same way.

  • Set the hostname
hostnamectl hostname ks-worker-1
  • Set the server time zone

Set the server time zone to Asia/Shanghai.

timedatectl set-timezone Asia/Shanghai

Verify the server time zone; a correct configuration looks like the following.

[root@ks-worker-1 ~]# timedatectl
               Local time: Tue 2023-07-18 11:20:49 CST
           Universal time: Tue 2023-07-18 03:20:49 UTC
                 RTC time: Tue 2023-07-18 03:20:49
                Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
  • Configure time synchronization

Install chrony as the time synchronization software.

yum install chrony

Edit the configuration file /etc/chrony.conf and change the NTP server settings.

vi /etc/chrony.conf

# Remove all existing pool entries, e.g. the default:
pool pool.ntp.org iburst

# Add a China-based NTP server, or specify another commonly used time server
pool cn.pool.ntp.org iburst

# The manual edit above can also be done automatically with sed
sed -i 's/^pool pool.*/pool cn.pool.ntp.org iburst/g' /etc/chrony.conf

Restart the chrony service and enable it to start at boot.

systemctl restart chronyd && systemctl enable chronyd

Verify the chrony synchronization status.

# Run the check command
chronyc sourcestats -v

# Normal output looks like the following
[root@ks-worker-1 ~]# chronyc sourcestats -v
                             .- Number of sample points in measurement set.
                            /    .- Number of residual runs with same sign.
                           |    /    .- Length of measurement set (time).
                           |   |    /      .- Est. clock freq error (ppm).
                           |   |   |      /           .- Est. error in freq.
                           |   |   |     |           /         .- Est. offset.
                           |   |   |     |          |          |   On the -.
                           |   |   |     |          |          |   samples. \
                           |   |   |     |          |          |             |
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
==============================================================================
ntp6.flashdance.cx          4   3     7   -503.672     127218    +23ms    15ms
time.cloudflare.com         4   3     7   +312.311  34651.250    +11ms  4357us
ntp8.flashdance.cx          4   4     8   +262.274  10897.487    -15ms  1976us
tick.ntp.infomaniak.ch      4   4     7  -2812.902  31647.234    -34ms  4359us
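
As an extra optional check, chronyc tracking reports which NTP source the node is currently synchronized to and the current clock offset; the exact values will differ in your environment.

# Optional: show the currently selected NTP source and clock offset
chronyc tracking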
  • Configure the hosts file

Edit the /etc/hosts file and add the planned server IPs and hostnames to it.

192.168.9.91    ks-master-0
192.168.9.92    ks-master-1
192.168.9.93    ks-master-2
192.168.9.95    ks-worker-0
192.168.9.96    ks-worker-1
192.168.9.97    ks-worker-2
  • Configure DNS
echo "nameserver 114.114.114.114" > /etc/resolv.conf
  • Disable the system firewall
systemctl stop firewalld && systemctl disable firewalld
  • Disable SELinux

A minimal installation of openEuler 22.03 SP2 enables SELinux by default. To keep things simple, we disable SELinux on all nodes.

# Use sed to modify the config file so SELinux stays disabled permanently
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Disable SELinux temporarily; this step is optional, KubeKey will configure it automatically
setenforce 0
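
To confirm the change took effect, getenforce should now report Permissive (or Disabled after a reboot with the modified config file); this is just an optional sanity check.

# Optional: verify the current SELinux mode
getenforce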
  • Install system dependencies

On all nodes, log in as root and run the following commands to install the basic system dependency packages required by Kubernetes.

# Install the Kubernetes system dependency packages
yum install curl socat conntrack ebtables ipset ipvsadm

# Install other required packages. Oddly, openEuler does not install tar by default; later steps will fail without it.
yum install tar

Additional Configuration on All Existing Cluster Nodes

Note: This section is optional. If your deployment did not use hostnames and relies purely on IP addresses, you can skip it.

  • Configure the hosts file

Edit the /etc/hosts file and update it with the IP and hostname entries of the newly added Worker nodes.

192.168.9.91    ks-master-0
192.168.9.92    ks-master-1
192.168.9.93    ks-master-2
192.168.9.95    ks-worker-0
192.168.9.96    ks-worker-1
192.168.9.97    ks-worker-2
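
If you would rather not edit every existing node by hand, a minimal sketch like the one below appends only the two new Worker entries to /etc/hosts on the other existing nodes from master-0 (it assumes the passwordless SSH access set up during the initial deployment; adjust node names and IPs to your environment).

# Run on master-0: append the new Worker entries to /etc/hosts on the other existing nodes
for node in ks-master-1 ks-master-2 ks-worker-0; do
  ssh root@${node} "cat >> /etc/hosts <<'EOF'
192.168.9.96    ks-worker-1
192.168.9.97    ks-worker-2
EOF"
done

# Update master-0 itself the same way, locally
cat >> /etc/hosts <<'EOF'
192.168.9.96    ks-worker-1
192.168.9.97    ks-worker-2
EOF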

Additional Configuration on the Master-0 Node

This section is optional. If you use password-only authentication for remote server connections, you can skip it.

Run the following commands to send the SSH public key from the master-0 node to the other nodes. Type yes when prompted to accept the server's SSH fingerprint, and then enter the root user's password when prompted.

ssh-copy-id root@ks-worker-1
ssh-copy-id root@ks-worker-2

After adding and uploading the SSH public key, you can run the command below to verify that the root user can connect to all servers without password authentication.

[root@ks-master-0 ~]# ssh root@ks-worker-1
# Login output omitted

Scaling Out Worker Nodes with KubeKey

Next, we use KubeKey to join the newly added nodes to the Kubernetes cluster. Following the official documentation, the whole process is fairly simple and takes only two steps.

  • Modify the cluster configuration file used in the original KubeKey deployment
  • Run the command that adds the nodes

Modifying the Cluster Configuration File

Log in to the master-0 node via SSH, change to the original KubeKey directory, and modify the existing cluster configuration file. In this walkthrough the file is named kubesphere-v3.3.2.yaml; adjust this to your actual setup.

The main changes are:

  • spec.hosts section: add the information for the new Worker nodes.
  • spec.roleGroups.worker section: add the new Worker nodes.

The modified example looks like this:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ks-master-0, address: 192.168.9.91, internalAddress: 192.168.9.91, user: root, password: "P@88w0rd"}
  - {name: ks-master-1, address: 192.168.9.92, internalAddress: 192.168.9.92, user: root, privateKeyPath: "~/.ssh/id_ed25519"}
  - {name: ks-master-2, address: 192.168.9.93, internalAddress: 192.168.9.93, user: root, privateKeyPath: "~/.ssh/id_ed25519"}
  - {name: ks-worker-0, address: 192.168.9.95, internalAddress: 192.168.9.95, user: root, privateKeyPath: "~/.ssh/id_ed25519"}
  - {name: ks-worker-1, address: 192.168.9.96, internalAddress: 192.168.9.96, user: root, privateKeyPath: "~/.ssh/id_ed25519"}
  - {name: ks-worker-2, address: 192.168.9.97, internalAddress: 192.168.9.97, user: root, privateKeyPath: "~/.ssh/id_ed25519"}
  roleGroups:
    etcd:
    - ks-master-0
    - ks-master-1
    - ks-master-2
    control-plane:
    - ks-master-0
    - ks-master-1
    - ks-master-2
    worker:
    - ks-worker-0
    - ks-worker-1
    - ks-worker-2
 ....
# The rest of the file remains unchanged
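
Before running the add-nodes command, an optional quick check that both new Worker nodes actually appear in the modified file can save a failed run; grep should print matching lines from both the spec.hosts and spec.roleGroups.worker sections.

# Optional: confirm the new workers appear in the modified configuration file
grep -n 'ks-worker-[12]' kubesphere-v3.3.2.yaml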

Adding Nodes with KubeKey

Before adding the nodes, let's double-check the current cluster's node information.

[root@ks-master-0 kubekey]# kubectl get nodes -o wide
NAME          STATUS   ROLES           AGE    VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION                        CONTAINER-RUNTIME
ks-master-0   Ready    control-plane   130m   v1.24.12   192.168.9.91   <none>        openEuler 22.03 (LTS-SP2)   5.10.0-153.12.0.92.oe2203sp2.x86_64   containerd://1.6.4
ks-master-1   Ready    control-plane   130m   v1.24.12   192.168.9.92   <none>        openEuler 22.03 (LTS-SP2)   5.10.0-153.12.0.92.oe2203sp2.x86_64   containerd://1.6.4
ks-master-2   Ready    control-plane   130m   v1.24.12   192.168.9.93   <none>        openEuler 22.03 (LTS-SP2)   5.10.0-153.12.0.92.oe2203sp2.x86_64   containerd://1.6.4
ks-worker-0   Ready    worker          130m   v1.24.12   192.168.9.95   <none>        openEuler 22.03 (LTS-SP2)   5.10.0-153.12.0.92.oe2203sp2.x86_64   containerd://1.6.4

Next, run the following commands to join the new Worker nodes to the cluster using the modified configuration file.

export KKZONE=cn
./kk add nodes -f kubesphere-v3.3.2.yaml

Note: Be sure to run export KKZONE=cn first; otherwise the images will be pulled from Docker Hub.

After the command above runs, kk first checks the dependencies and other detailed requirements for deploying Kubernetes. Once the checks pass, you will be prompted to confirm the installation. Type yes and press ENTER to continue the deployment.

[root@ks-master-0 kubekey]# ./kk add nodes -f kubesphere-v3.3.2.yaml


 _   __      _          _   __
| | / /     | |        | | / /
| |/ / _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

13:35:51 CST [GreetingsModule] Greetings
13:35:51 CST message: [ks-worker-2]
Greetings, KubeKey!
13:35:52 CST message: [ks-master-2]
Greetings, KubeKey!
13:35:52 CST message: [ks-master-0]
Greetings, KubeKey!
13:35:52 CST message: [ks-master-1]
Greetings, KubeKey!
13:35:53 CST message: [ks-worker-0]
Greetings, KubeKey!
13:35:53 CST message: [ks-worker-1]
Greetings, KubeKey!
13:35:53 CST success: [ks-worker-2]
13:35:53 CST success: [ks-master-2]
13:35:53 CST success: [ks-master-0]
13:35:53 CST success: [ks-master-1]
13:35:53 CST success: [ks-worker-0]
13:35:53 CST success: [ks-worker-1]
13:35:53 CST [NodePreCheckModule] A pre-check on nodes
13:35:57 CST success: [ks-worker-1]
13:35:57 CST success: [ks-worker-2]
13:35:57 CST success: [ks-master-2]
13:35:57 CST success: [ks-master-1]
13:35:57 CST success: [ks-master-0]
13:35:57 CST success: [ks-worker-0]
13:35:57 CST [ConfirmModule] Display confirmation form
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name        | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| ks-master-0 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     |            |             |                  | CST 13:35:56 |
| ks-master-1 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     |            |             |                  | CST 13:35:56 |
| ks-master-2 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     |            |             |                  | CST 13:35:56 |
| ks-worker-0 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     |            |             |                  | CST 13:35:57 |
| ks-worker-1 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        |            |            |             |                  | CST 13:35:52 |
| ks-worker-2 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        |            |            |             |                  | CST 13:35:53 |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]:

The installation produces a lot of log output; to keep this article short it is not shown here.

Deployment takes roughly 15 minutes, depending on network speed and machine configuration; this run completed in about 7 minutes.

When the deployment finishes, you should see output similar to the following in the terminal.

...
13:41:39 CST [AutoRenewCertsModule] Generate k8s certs renew script
13:41:40 CST success: [ks-master-1]
13:41:40 CST success: [ks-master-0]
13:41:40 CST success: [ks-master-2]
13:41:40 CST [AutoRenewCertsModule] Generate k8s certs renew service
13:41:42 CST success: [ks-master-1]
13:41:42 CST success: [ks-master-2]
13:41:42 CST success: [ks-master-0]
13:41:42 CST [AutoRenewCertsModule] Generate k8s certs renew timer
13:41:43 CST success: [ks-master-1]
13:41:43 CST success: [ks-master-0]
13:41:43 CST success: [ks-master-2]
13:41:43 CST [AutoRenewCertsModule] Enable k8s certs renew service
13:41:44 CST success: [ks-master-0]
13:41:44 CST success: [ks-master-1]
13:41:44 CST success: [ks-master-2]
13:41:44 CST Pipeline[AddNodesPipeline] execute successfully

Verifying Cluster Status After Scaling Out

Verifying Cluster Status in the KubeSphere Console

Open a browser and visit the IP address of the master-0 node on port 30880 to reach the login page of the KubeSphere console.

Go to the cluster management view, click the "Nodes" menu on the left, and then click "Cluster Nodes" to view detailed information about the available nodes in the Kubernetes cluster.

Recall that in the previous installment the freshly deployed cluster had only one Worker node and the monitoring components under "System Components" were in an abnormal state. Now that new Worker nodes have joined, let's verify whether the monitoring components recovered automatically.

Click the "System Components" menu on the left to view the details of the installed components. Focus on the monitoring components: as the screenshot shows, all 10 monitoring components are now healthy.

Verifying Cluster Status with kubectl

  • View cluster node information

Run kubectl on the master-0 node to get the list of available nodes in the Kubernetes cluster.

kubectl get nodes -o wide

The output shows that the Kubernetes cluster now has six available nodes, along with each node's internal IP, role, Kubernetes version, container runtime and version, and OS type and kernel version.

[root@ks-master-0 kubekey]# kubectl get nodes -o wide
NAME          STATUS   ROLES           AGE    VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION                        CONTAINER-RUNTIME
ks-master-0   Ready    control-plane   149m   v1.24.12   192.168.9.91   <none>        openEuler 22.03 (LTS-SP2)   5.10.0-153.12.0.92.oe2203sp2.x86_64   containerd://1.6.4
ks-master-1   Ready    control-plane   148m   v1.24.12   192.168.9.92   <none>        openEuler 22.03 (LTS-SP2)   5.10.0-153.12.0.92.oe2203sp2.x86_64   containerd://1.6.4
ks-master-2   Ready    control-plane   148m   v1.24.12   192.168.9.93   <none>        openEuler 22.03 (LTS-SP2)   5.10.0-153.12.0.92.oe2203sp2.x86_64   containerd://1.6.4
ks-worker-0   Ready    worker          148m   v1.24.12   192.168.9.95   <none>        openEuler 22.03 (LTS-SP2)   5.10.0-153.12.0.92.oe2203sp2.x86_64   containerd://1.6.4
ks-worker-1   Ready    worker          11m    v1.24.12   192.168.9.96   <none>        openEuler 22.03 (LTS-SP2)   5.10.0-153.12.0.92.oe2203sp2.x86_64   containerd://1.6.4
ks-worker-2   Ready    worker          11m    v1.24.12   192.168.9.97   <none>        openEuler 22.03 (LTS-SP2)   5.10.0-153.12.0.92.oe2203sp2.x86_64   containerd://1.6.4
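
Because the ROLES column is derived from the node-role.kubernetes.io/* labels, you can also list only the Worker nodes with a label selector. This is an optional check and assumes the worker role label that the output above indicates is set.

# Optional: list only the worker nodes via the role label
kubectl get nodes -l node-role.kubernetes.io/worker -o wide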
  • View the Pod list

Run the following command to get the list of Pods running on the Kubernetes cluster, sorted by the NODE column to show how the workloads are distributed across nodes.

kubectl get pods -o wide -A | sort -k 8

The output shows that the 5 required base components are already running on each of the two new Worker nodes. In addition, prometheus-k8s-1, which failed to start on worker-0 in the previous installment, has now started successfully on worker-1.

[root@ks-master-0 kubekey]# kubectl get pods -o wide -A | sort -k 8
NAMESPACE                      NAME                                               READY   STATUS    RESTARTS      AGE    IP              NODE          NOMINATED NODE   READINESS GATES
kube-system                    kube-scheduler-ks-master-0                         1/1     Running   1 (43m ago)   149m   192.168.9.91    ks-master-0   <none>           <none>
kubesphere-monitoring-system   node-exporter-t9vrm                                2/2     Running   0             142m   192.168.9.91    ks-master-0   <none>           <none>
kube-system                    calico-node-kx4fz                                  1/1     Running   0             148m   192.168.9.91    ks-master-0   <none>           <none>
kube-system                    kube-apiserver-ks-master-0                         1/1     Running   0             149m   192.168.9.91    ks-master-0   <none>           <none>
kube-system                    kube-controller-manager-ks-master-0                1/1     Running   0             149m   192.168.9.91    ks-master-0   <none>           <none>
kube-system                    kube-proxy-sk4hz                                   1/1     Running   0             148m   192.168.9.91    ks-master-0   <none>           <none>
kube-system                    nodelocaldns-h4vmx                                 1/1     Running   0             149m   192.168.9.91    ks-master-0   <none>           <none>
kubesphere-monitoring-system   node-exporter-b57bp                                2/2     Running   0             142m   192.168.9.92    ks-master-1   <none>           <none>
kube-system                    calico-node-qx5qk                                  1/1     Running   0             148m   192.168.9.92    ks-master-1   <none>           <none>
kube-system                    coredns-f657fccfd-8lnd5                            1/1     Running   0             149m   10.233.103.2    ks-master-1   <none>           <none>
kube-system                    coredns-f657fccfd-vtlmx                            1/1     Running   0             149m   10.233.103.1    ks-master-1   <none>           <none>
kube-system                    kube-apiserver-ks-master-1                         1/1     Running   0             148m   192.168.9.92    ks-master-1   <none>           <none>
kube-system                    kube-controller-manager-ks-master-1                1/1     Running   0             148m   192.168.9.92    ks-master-1   <none>           <none>
kube-system                    kube-proxy-728cs                                   1/1     Running   0             148m   192.168.9.92    ks-master-1   <none>           <none>
kube-system                    kube-scheduler-ks-master-1                         1/1     Running   0             148m   192.168.9.92    ks-master-1   <none>           <none>
kube-system                    nodelocaldns-5594x                                 1/1     Running   0             148m   192.168.9.92    ks-master-1   <none>           <none>
kubesphere-monitoring-system   node-exporter-vm9cq                                2/2     Running   0             142m   192.168.9.93    ks-master-2   <none>           <none>
kube-system                    calico-node-rb2cf                                  1/1     Running   0             148m   192.168.9.93    ks-master-2   <none>           <none>
kube-system                    kube-apiserver-ks-master-2                         1/1     Running   0             148m   192.168.9.93    ks-master-2   <none>           <none>
kube-system                    kube-controller-manager-ks-master-2                1/1     Running   0             148m   192.168.9.93    ks-master-2   <none>           <none>
kube-system                    kube-proxy-ndc62                                   1/1     Running   0             148m   192.168.9.93    ks-master-2   <none>           <none>
kube-system                    kube-scheduler-ks-master-2                         1/1     Running   0             148m   192.168.9.93    ks-master-2   <none>           <none>
kube-system                    nodelocaldns-gnbg6                                 1/1     Running   0             148m   192.168.9.93    ks-master-2   <none>           <none>
kubesphere-controls-system     default-http-backend-587748d6b4-57zck              1/1     Running   0             144m   10.233.115.6    ks-worker-0   <none>           <none>
kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running   0             142m   10.233.115.9    ks-worker-0   <none>           <none>
kubesphere-monitoring-system   kube-state-metrics-5b8dc5c5c6-9ng42                3/3     Running   0             142m   10.233.115.8    ks-worker-0   <none>           <none>
kubesphere-monitoring-system   node-exporter-79c6m                                2/2     Running   0             142m   192.168.9.95    ks-worker-0   <none>           <none>
kubesphere-monitoring-system   prometheus-operator-66d997dccf-zfdf5               2/2     Running   0             142m   10.233.115.7    ks-worker-0   <none>           <none>
kubesphere-system              ks-console-7f88c4fd8d-b4wdr                        1/1     Running   0             144m   10.233.115.5    ks-worker-0   <none>           <none>
kubesphere-system              ks-installer-559fc4b544-pcdrn                      1/1     Running   0             148m   10.233.115.3    ks-worker-0   <none>           <none>
kube-system                    calico-kube-controllers-f9f9bbcc9-9x49n            1/1     Running   0             148m   10.233.115.2    ks-worker-0   <none>           <none>
kube-system                    calico-node-kvfbg                                  1/1     Running   0             148m   192.168.9.95    ks-worker-0   <none>           <none>
kube-system                    kube-proxy-qdmkb                                   1/1     Running   0             148m   192.168.9.95    ks-worker-0   <none>           <none>
kube-system                    nodelocaldns-d572z                                 1/1     Running   0             148m   192.168.9.95    ks-worker-0   <none>           <none>
kube-system                    snapshot-controller-0                              1/1     Running   0             146m   10.233.115.4    ks-worker-0   <none>           <none>
kubesphere-controls-system     kubectl-admin-5d588c455b-7bw75                     1/1     Running   0             139m   10.233.115.19   ks-worker-0   <none>           <none>
kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running   0             142m   10.233.115.10   ks-worker-0   <none>           <none>
kubesphere-monitoring-system   alertmanager-main-2                                2/2     Running   0             142m   10.233.115.11   ks-worker-0   <none>           <none>
kubesphere-monitoring-system   notification-manager-deployment-6f8c66ff88-mqmxx   2/2     Running   0             140m   10.233.115.16   ks-worker-0   <none>           <none>
kubesphere-monitoring-system   notification-manager-deployment-6f8c66ff88-pjm79   2/2     Running   0             140m   10.233.115.15   ks-worker-0   <none>           <none>
kubesphere-monitoring-system   notification-manager-operator-6455b45546-kgdpf     2/2     Running   0             141m   10.233.115.13   ks-worker-0   <none>           <none>
kubesphere-monitoring-system   prometheus-k8s-0                                   2/2     Running   0             142m   10.233.115.14   ks-worker-0   <none>           <none>
kubesphere-system              ks-apiserver-7ddfccbb94-kd7tg                      1/1     Running   0             144m   10.233.115.18   ks-worker-0   <none>           <none>
kubesphere-system              ks-controller-manager-6cd89786dc-4xnhq             1/1     Running   1 (43m ago)   144m   10.233.115.17   ks-worker-0   <none>           <none>
kube-system                    openebs-localpv-provisioner-7497b4c996-ngnv9       1/1     Running   1 (43m ago)   148m   10.233.115.1    ks-worker-0   <none>           <none>
kube-system                    haproxy-ks-worker-0                                1/1     Running   1 (10m ago)   148m   192.168.9.95    ks-worker-0   <none>           <none>

kubesphere-monitoring-system   node-exporter-2jntq                                2/2     Running   0             11m    192.168.9.96    ks-worker-1   <none>           <none>
kubesphere-monitoring-system   prometheus-k8s-1                                   2/2     Running   0             81m    10.233.120.2    ks-worker-1   <none>           <none>
kube-system                    calico-node-4bwx9                                  1/1     Running   0             11m    192.168.9.96    ks-worker-1   <none>           <none>
kube-system                    haproxy-ks-worker-1                                1/1     Running   0             11m    192.168.9.96    ks-worker-1   <none>           <none>
kube-system                    kube-proxy-tgn54                                   1/1     Running   0             11m    192.168.9.96    ks-worker-1   <none>           <none>
kube-system                    nodelocaldns-mmcpk                                 1/1     Running   0             11m    192.168.9.96    ks-worker-1   <none>           <none>

kubesphere-monitoring-system   node-exporter-hslhs                                2/2     Running   0             11m    192.168.9.97    ks-worker-2   <none>           <none>
kube-system                    calico-node-27jxb                                  1/1     Running   0             11m    192.168.9.97    ks-worker-2   <none>           <none>
kube-system                    haproxy-ks-worker-2                                1/1     Running   0             11m    192.168.9.97    ks-worker-2   <none>           <none>
kube-system                    kube-proxy-qjhq2                                   1/1     Running   0             11m    192.168.9.97    ks-worker-2   <none>           <none>
kube-system                    nodelocaldns-2ttp8                                 1/1     Running   0             11m    192.168.9.97    ks-worker-2   <none>           <none>
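
If you want to inspect a single new node without sorting the full list, a field selector does the job; the example below, using ks-worker-1 from this walkthrough, shows only the Pods scheduled on that node.

# Optional: show only the Pods scheduled on ks-worker-1
kubectl get pods -A -o wide --field-selector spec.nodeName=ks-worker-1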
  • View the image list

Run the following command to view the list of images already downloaded on the Worker nodes.

crictl images ls

Run it on the newly added Worker nodes; the output looks like the following:

# Worker-1
[root@ks-worker-1 ~]# crictl images ls
IMAGE                                                                      TAG                 IMAGE ID            SIZE
registry.cn-beijing.aliyuncs.com/kubesphereio/cni                          v3.23.2             a87d3f6f1b8fd       111MB
registry.cn-beijing.aliyuncs.com/kubesphereio/coredns                      1.8.6               a4ca41631cc7a       13.6MB
registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy                      2.3                 0ea9253dad7c0       38.5MB
registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache           1.15.12             5340ba194ec91       42.1MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers             v3.23.2             ec95788d0f725       56.4MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy                   v1.24.12            562ccc25ea629       39.6MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy              v0.11.0             29589495df8d9       19.2MB
registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils                  3.3.0               e88cfb3a763b9       26.9MB
registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter                v1.3.1              1dbe0e9319764       10.3MB
registry.cn-beijing.aliyuncs.com/kubesphereio/node                         v3.23.2             a3447b26d32c7       77.8MB
registry.cn-beijing.aliyuncs.com/kubesphereio/pause                        3.7                 221177c6082a8       311kB
registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol           v3.23.2             b21e2d7408a79       8.67MB
registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader   v0.55.1             7c63de88523a9       4.84MB
registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus                   v2.34.0             e3cf894a63f55       78.1MB

# Worker-2
[root@ks-worker-2 ~]# crictl images ls
IMAGE                                                              TAG                 IMAGE ID            SIZE
registry.cn-beijing.aliyuncs.com/kubesphereio/cni                  v3.23.2             a87d3f6f1b8fd       111MB
registry.cn-beijing.aliyuncs.com/kubesphereio/coredns              1.8.6               a4ca41631cc7a       13.6MB
registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy              2.3                 0ea9253dad7c0       38.5MB
registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache   1.15.12             5340ba194ec91       42.1MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers     v3.23.2             ec95788d0f725       56.4MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy           v1.24.12            562ccc25ea629       39.6MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy      v0.11.0             29589495df8d9       19.2MB
registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter        v1.3.1              1dbe0e9319764       10.3MB
registry.cn-beijing.aliyuncs.com/kubesphereio/node                 v3.23.2             a3447b26d32c7       77.8MB
registry.cn-beijing.aliyuncs.com/kubesphereio/pause                3.7                 221177c6082a8       311kB
registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol   v3.23.2             b21e2d7408a79       8.67MB

Note: The Worker-1 node starts with 14 images, and the Worker-2 node starts with 11.
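
To double-check the counts in the note above, counting the image IDs on each node is a quick optional verification.

# Optional: count the images downloaded on this Worker node
crictl images -q | wc -l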

At this point, we have completed all the tasks involved in adding 2 Worker nodes to an existing Kubernetes cluster that had three Master nodes and one Worker node.

Conclusion

This article walked through, in detail, how to use KubeKey to automatically add Worker nodes to an existing Kubernetes cluster.

Although the operations here were performed on openEuler 22.03 LTS SP2, the overall process applies equally to scaling out Kubernetes clusters deployed with KubeKey on other operating systems.
