Notes from a Bumpy Cross-Version Upgrade to Kubernetes 1.20.4

Kubernetes 1.20.4 has been released. One cluster upgraded from 1.20.2 without incident, but another cluster was on a much older version and the cross-version upgrade failed. I ended up rebuilding it from scratch, and even the fresh install hit problems: neither kubeadm init nor kubeadm join would go through. It all works now; here are my notes from the process.

Certificate problems

The following error appeared:

(base) supermap@podc01:/etc$ sudo kubeadm join 10.1.1.202:6443 --token 4q3hdy.y7xjfjh0u1vqdx7k     --discovery-token-ca-cert-hash sha256:7eff3c734585308e0934c4af34a67edff0a98c5a3d9e99c24f1c5cdd09d3f519     --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight: 
One or more conditions for hosting a new control plane instance is not satisfied.

failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.


To see the stack trace of this error execute with --v=5 or higher

It turned out that the --upload-certs flag had been left off the kubeadm init command. After adding it, the error above went away.
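For reference, an init invocation with the flag included might look like the sketch below. The endpoint and pod CIDR are assumptions based on the log above and the flannel default, not values from the original command:

```shell
# Sketch: initialize the first control-plane node and upload the shared
# certificates into the cluster (kubeadm stores them encrypted in a
# Secret; the certificate key it prints is valid for two hours).
sudo kubeadm init \
  --control-plane-endpoint 10.1.1.202:6443 \
  --pod-network-cidr 10.244.0.0/16 \
  --upload-certs
```

Without --upload-certs, the CA material never reaches the cluster, so a later control-plane join has nothing to download, which matches the missing /etc/kubernetes/pki/ca.crt in the error above.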

Some people solve this by copying the certificates over manually, as below. I haven't tried it and am not convinced it works:

scp -rp /etc/kubernetes/pki/ca.* master02:/etc/kubernetes/pki
scp -rp /etc/kubernetes/pki/sa.* master02:/etc/kubernetes/pki
scp -rp /etc/kubernetes/pki/front-proxy-ca.* master02:/etc/kubernetes/pki
scp -rp /etc/kubernetes/pki/etcd/ca.* master02:/etc/kubernetes/pki/etcd
scp -rp /etc/kubernetes/admin.conf master02:/etc/kubernetes

Master node settings

Allow the master nodes to run regular workloads as well:

kubectl taint nodes --all node-role.kubernetes.io/master-

CoreDNS problems

The CoreDNS pods failed to start:

supermap@podc02:~$ kubectl get pod -n kube-system
NAME                             READY   STATUS              RESTARTS   AGE
coredns-74ff55c5b-dtwdz          0/1     ContainerCreating   0          32m
coredns-74ff55c5b-jns5b          0/1     ContainerCreating   0          32m
etcd-podc02                      1/1     Running             0          32m
kube-apiserver-podc02            1/1     Running             0          32m
kube-controller-manager-podc02   1/1     Running             0          32m
kube-proxy-45jxl                 1/1     Running             0          32m
kube-scheduler-podc02            1/1     Running             0          32m

⚠️ This turned out to be a network-plugin problem; reinstalling flannel fixed it.

Installing flannel

The flannel project has moved to the flannel-io organization. The old address and raw.githubxxxx are both unreachable, so downloads have to come from the new location.

wget https://github.com/flannel-io/flannel/releases/download/v0.13.0/flannel-v0.13.0-linux-amd64.tar.gz

The URL above works for now, though it may stop working again on a different network.
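Deploying flannel into the cluster is normally done from the manifest in the same repository. A sketch, where the tag/path is an assumption that should be checked against the repo if it 404s:

```shell
# Apply the flannel manifest from the new flannel-io location.
# The v0.13.0 tag and Documentation/ path are assumptions; verify
# against https://github.com/flannel-io/flannel before relying on them.
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.13.0/Documentation/kube-flannel.yml
```

Once the flannel DaemonSet pods are Running, the stuck CoreDNS pods from the section above should move from ContainerCreating to Running on their own.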

Access to github kept failing with the following error:

fatal: unable to access 'https://github.com/openthings/kubernetes-tools.git/': gnutls_handshake() failed: Error in the pull function.

Then, inexplicably, it started working again.

Some people suggest installing the following packages; it didn't help in my case:

supermap@pods01:~/openthings$ sudo apt-get -y install build-essential nghttp2 libnghttp2-dev libssl-dev

⚠️ More approaches are covered further below.

systemd compatibility

I was still on docker 19.03, which hadn't been upgraded in a long time, while Ubuntu and systemd had both kept moving forward.

kubeadm init kept failing; commenting out the systemd setting in /etc/docker/daemon.json made it succeed.

Strange, since Kubernetes actually recommends systemd over cgroupfs as the cgroup driver; the real requirement is that the kubelet and Docker use the same driver, so a mismatch between the two can break the init. I'm not sure exactly what went wrong here; next time I'll upgrade Docker and try again.
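For the record, the "systemd setting" in question is the exec-opts line in /etc/docker/daemon.json. A typical file looks like the fragment below (the log and storage options are common defaults, not requirements); with a Docker version that supports it, keeping this line and running the kubelet with the matching systemd driver is the recommended setup, rather than removing it:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
```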

The control-plane join command used afterwards:

sudo kubeadm join 10.1.1.201:6443 --token k4l26p.d99xrvu2higwz9ow     --discovery-token-ca-cert-hash sha256:eda3e649672134c93d11bdb741672b3add5073eb3f4da021274dc51f9278d5f1      --control-plane --certificate-key 0a3656c05b225b35724851d08a52ab5ba8c0b70ea64fd4beeb5d727225b63ce4

If the certificate key has expired (it is only valid for two hours), it can be regenerated with:

sudo kubeadm init phase upload-certs --upload-certs
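Note that the command above re-uploads the certificates and prints a fresh certificate key; it does not renew the bootstrap token. When the token itself has expired, the standard kubeadm subcommands below generate a new one together with a ready-made join command:

```shell
# Create a new bootstrap token and print the matching worker join command.
# For a control-plane join, append --control-plane and the
# --certificate-key printed by "kubeadm init phase upload-certs".
sudo kubeadm token create --print-join-command

# List existing tokens and their expiry times.
sudo kubeadm token list
```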

CNI problems

The following CNI errors appeared:

Mar 18 17:57:27 podc01 kubelet[312941]: E0318 17:57:27.777448  312941 kubelet.go:2184] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network pl>
Mar 18 17:57:32 podc01 kubelet[312941]: W0318 17:57:32.184598  312941 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d

I tried a lot of things without success; in the end I copied 10-flannel.conflist over from another machine:

sudo scp [email protected]:~/10-flannel.conflist /etc/cni/net.d/ 

The file contains just this:

{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
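A quick sanity check before restarting the kubelet is to confirm the copied file actually parses as JSON; a sketch (writing to /tmp here for illustration, the real file lives in /etc/cni/net.d):

```shell
# Write the conflist to a temp path and validate it with Python's
# built-in JSON parser; any syntax error aborts with a message.
cat > /tmp/10-flannel.conflist <<'EOF'
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    { "type": "flannel",
      "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap",
      "capabilities": { "portMappings": true } }
  ]
}
EOF
python3 -m json.tool /tmp/10-flannel.conflist > /dev/null && echo "conflist OK"
```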

GnuTLS errors

A method found online; I haven't tried it yet:

Found the reason for the problem: it was the gnutls package, which behaves strangely behind a proxy. openssl works fine even on a weak network, so the workaround is to compile git against openssl. To do this, run the following commands:

sudo apt-get update
sudo apt-get install build-essential fakeroot dpkg-dev
sudo apt-get build-dep git
mkdir ~/git-openssl
cd ~/git-openssl
apt-get source git
dpkg-source -x git_1.7.9.5-1.dsc
cd git-1.7.9.5

(Remember to replace 1.7.9.5 with the actual version of git in your system.)

Then, edit debian/control file (run the command: gksu gedit debian/control) and replace all instances of libcurl4-gnutls-dev with libcurl4-openssl-dev.

Then build the package (if it's failing on test, you can remove the line TEST=test from the file debian/rules):

sudo apt-get install libcurl4-openssl-dev
sudo dpkg-buildpackage -rfakeroot -b

Install new package:

i386: sudo dpkg -i ../git_1.7.9.5-1_i386.deb

x86_64: sudo dpkg -i ../git_1.7.9.5-1_amd64.deb

Github access failures

Find the hosts file on your system:

Windows: C:\Windows\System32\drivers\etc\hosts, or Linux: /etc/hosts

Add the following two IP address entries:

# GitHub Start 
140.82.114.4 github.com
199.232.69.194 github.global.ssl.fastly.net
# GitHub End

Save and exit.

Then run ipconfig /flushdns in a CMD window; after that Github can be reached.
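On the Linux side, the rough equivalent (assuming Ubuntu 20.04's default systemd-resolved) is:

```shell
# Flush the systemd-resolved DNS cache. Note that /etc/hosts itself is
# consulted directly on every lookup, so edits there take effect
# immediately; the flush only matters for cached DNS answers.
sudo systemd-resolve --flush-caches

# Confirm which address github.com now resolves to.
getent hosts github.com
```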

The current IP address information for github can be looked up at https://github.com.ipaddress.com/www.github.com.

Final cluster configuration

In the end the cluster came back up:

(base) supermap@podc01:~$ kubectl get node -owide
NAME     STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
podc01   Ready    control-plane,master   16h     v1.20.4   10.1.1.201    <none>        Ubuntu 20.10         5.8.0-45-generic   docker://20.10.5
podc02   Ready    control-plane,master   16h     v1.20.4   10.1.1.202    <none>        Ubuntu 20.04.2 LTS   5.4.0-67-generic   docker://19.3.8
podc04   Ready    control-plane,master   16h     v1.20.4   10.1.1.204    <none>        Ubuntu 20.04.2 LTS   5.4.0-67-generic   docker://19.3.8
pods01   Ready    control-plane,master   16h     v1.20.4   10.1.1.193    <none>        Ubuntu 20.04.2 LTS   5.4.0-67-generic   docker://19.3.8
pods02   Ready    control-plane,master   131m    v1.20.4   10.1.1.234    <none>        Ubuntu 20.04.2 LTS   5.4.0-67-generic   docker://19.3.8
pods03   Ready    control-plane,master   68m     v1.20.4   10.1.1.205    <none>        Ubuntu 20.04.2 LTS   5.4.0-67-generic   docker://19.3.8
pods04   Ready    control-plane,master   50m     v1.20.4   10.1.1.206    <none>        Ubuntu 20.04.2 LTS   5.4.0-67-generic   docker://19.3.8
pods05   Ready    control-plane,master   36m     v1.20.4   10.1.1.34     <none>        Ubuntu 20.04.2 LTS   5.4.0-66-generic   docker://19.3.8
pods06   Ready    control-plane,master   6m22s   v1.20.4   10.1.1.167    <none>        Ubuntu 20.04.2 LTS   5.4.0-66-generic   docker://19.3.8

Three nodes had other problems:

  • One of them updated normally after several reboots;
  • another, podc03, never came back despite repeated reboots and is probably dead;
  • a third hit a read-only filesystem and couldn't be updated, but was eventually repaired:
    • at boot, enter the menu and choose repair;
    • run fsck, then reboot.

Installing the Dashboard
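The usual way to install the Dashboard on a 1.20-era cluster was to apply its recommended manifest; a sketch, where the v2.2.0 tag is an assumption to be checked against the kubernetes/dashboard releases page:

```shell
# Apply the Dashboard's recommended manifest (tag is an assumption).
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

# Then reach it through the API server proxy at
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl proxy
```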

 
