Kubernetes Cluster in Practice (08): Upgrading the Cluster

ECK had only been deployed on the cluster for a short while and had not gone into production yet, and the trial period already turned up quite a few issues. With the production rollout coming up, I wanted to upgrade the k8s cluster beforehand. After reading some material I realized that, although my single master node is a powerful machine (a HUAWEI TaiShan 2280 V2 server with dual Kunpeng 920 CPUs, 48 cores per socket, so to make full use of it the master also runs traefik and kubernetes-dashboard; unfortunately nexus has no arm64 image), it is still a single point of failure. So, while the cluster carries essentially no load, this was the time to get the upgrade done.
The cluster upgrade consists of two parts: upgrading the Kubernetes orchestration engine and upgrading the Docker container runtime.

Upgrading Kubernetes

For the Kubernetes upgrade I strongly recommend following the official documentation; if you can read English without trouble you can skip the rest of this post, which is essentially my personal translated notes. My environment is upgraded offline, with the corresponding rpm packages and docker images prepared in advance; see the previous three chapters of this series for how that is done.
Official documentation: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
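
Since this is an offline upgrade, the rpm packages and images have to be staged from a machine with internet access beforehand. The details are covered in the earlier chapters; purely as a reminder of the idea, a rough sketch might look like this (the image shown is just one of several that are needed, and the package versions, registry and paths are illustrative):

# on a machine with internet access: download the rpms for the target version
yumdownloader --resolve --destdir=/tmp/k8s-rpms kubeadm-1.17.x-0 kubelet-1.17.x-0 kubectl-1.17.x-0

# list the control plane images needed for the target version, then pull and export them
kubeadm config images list --kubernetes-version v1.17.x
docker pull k8s.gcr.io/kube-apiserver:v1.17.x
docker save -o /tmp/kube-apiserver-v1.17.x.tar k8s.gcr.io/kube-apiserver:v1.17.x

# copy the rpms and image tarballs to the offline nodes, then load the images there
docker load -i /tmp/kube-apiserver-v1.17.x.tar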

Upgrading the Control Plane (Master) Nodes

Upgrade the first control plane node (master)

1. Upgrade kubeadm

# replace x in 1.17.x-0 with the latest patch version
sudo yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes
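
If you are not sure which patch versions your (offline) yum repository actually contains, you can list them first and then substitute the chosen version for x:

yum list --showduplicates kubeadm --disableexcludes=kubernetes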

Verify the kubeadm version:

sudo kubeadm version

2. Drain the node

# replace <cp-node-name> with the name of your control plane node
sudo kubectl drain <cp-node-name> --ignore-daemonsets

3. Check the upgrade plan

sudo kubeadm upgrade plan

Running this prints output similar to the following; the exact version numbers depend on your environment:

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.16.0
[upgrade/versions] kubeadm version: v1.17.0

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.16.0   v1.17.0

Upgrade to the latest version in the v1.16 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.16.0   v1.17.0
Controller Manager   v1.16.0   v1.17.0
Scheduler            v1.16.0   v1.17.0
Kube Proxy           v1.16.0   v1.17.0
CoreDNS              1.6.2     1.6.5
Etcd                 3.3.15    3.4.3-0

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.17.0

The command above only checks whether the cluster can be upgraded and which versions it can be upgraded to; it does not change anything yet.

4. Run the upgrade with the actual target version

# replace x with the patch version you picked for this upgrade
sudo kubeadm upgrade apply v1.17.x

At the end of the output you should see:

......
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

5. Manually upgrade the CNI provider plugin

For example flannel. Since the network plugin did not change in this upgrade, I skipped this step.
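
If the CNI plugin did need an upgrade, the usual approach for flannel is simply to re-apply the manifest that ships with the new flannel release; a minimal sketch, assuming the new kube-flannel.yml has already been copied to the offline node:

kubectl apply -f kube-flannel.yml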

6. Uncordon the node

# replace <cp-node-name> with the name of your control plane node
sudo kubectl uncordon <cp-node-name>

Upgrade the other control plane nodes (masters)

Simply run the upgrade command:

sudo kubeadm upgrade node

Upgrade kubelet and kubectl on every control plane node

# replace x in 1.17.x-0 with the latest patch version
sudo yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes

Reload the systemd configuration and restart kubelet:

sudo systemctl daemon-reload
sudo systemctl restart kubelet
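
After the restart it is worth confirming that kubelet came back up cleanly before moving on:

sudo systemctl status kubelet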

Upgrading the Worker Nodes

1. Upgrade kubeadm

# replace x in 1.17.x-0 with the latest patch version
sudo yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes

2. Drain the node

# replace <node-to-drain> with the name of your node you are draining
sudo kubectl drain <node-to-drain> --ignore-daemonsets

This command may produce output similar to the following:

node/ip-172-31-85-18 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-dj7d7, kube-system/weave-net-z65qx
node/ip-172-31-85-18 drained
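
If the drain gets stuck on pods that use emptyDir volumes or that are not managed by a controller, kubectl refuses to evict them by default. In that case extra flags are needed (note that data in emptyDir volumes is lost, so use with care); a sketch, assuming kubectl 1.17 where the flag is still called --delete-local-data:

sudo kubectl drain <node-to-drain> --ignore-daemonsets --delete-local-data --force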

3. Upgrade the kubelet configuration

sudo kubeadm upgrade node

4. Upgrade kubelet and kubectl

# replace x in 1.17.x-0 with the latest patch version
sudo yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes

Reload the systemd configuration and restart kubelet:

sudo systemctl daemon-reload
sudo systemctl restart kubelet

5. Uncordon the node

# replace <node-to-drain> with the name of your node 
sudo kubectl uncordon <node-to-drain>

Verify the upgrade

sudo kubectl get nodes
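
Besides the node list, it is worth confirming that every node reports the new kubelet version and that the control plane pods in kube-system are running the new image tags:

kubectl get nodes -o wide
kubectl version --short
kubectl -n kube-system get pods -o wide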