Installing a K8s 1.16 Cluster with kubeadm -- Troubleshooting Notes

1. Cannot get node information after installation

Problem

[root@master k8s]# kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Solution

[root@master home]# mkdir -p $HOME/.kube
[root@master home]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master home]# chown $(id -u):$(id -g) $HOME/.kube/config

2. Error when a node runs kubeadm join

Problem

[root@node1 k8s]# kubeadm join 192.168.3.100:6443 --token safdsafsafsafd \
>     --discovery-token-ca-cert-hash sha256:safdsafsafsafdsafsafdsafdsafd
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-10250]: Port 10250 is in use

Solution

Run kubeadm reset on the node to clear the previous kubelet state (this frees port 10250), then run the join command again:

[root@node1 k8s]# kubeadm reset
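Before retrying the join, it can help to confirm which of the required ports are already bound. A hedged sketch, using a hypothetical `port_in_use` helper built on `ss` (iproute2):

```shell
# Check whether kubeadm's well-known ports are already bound locally.
# port_in_use returns success (0) if something is listening on the port.
port_in_use() {
    ss -tln "sport = :$1" | grep -q LISTEN
}

for p in 6443 10250 10251 10252; do
    if port_in_use "$p"; then
        echo "port $p is in use"
    else
        echo "port $p is free"
    fi
done
```

If a port is in use after `kubeadm reset`, `ss -tlnp` (as root) shows which process holds it.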

3. kubeadm init fails when re-initializing

Problem

[root@master k8s]# kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.1.0.0/16 --apiserver-advertise-address=192.168.3.100
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty

Solution

Delete the leftover /var/lib/etcd directory (note: this wipes the old etcd data):

[root@master k8s]# rm -rf /var/lib/etcd

4. Another initialization failure

Problem

[root@localhost ~]# kubeadm init --kubernetes-version=v1.16.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.16.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-10251]: Port 10251 is in use
        [ERROR Port-10252]: Port 10252 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
        [ERROR Port-10250]: Port 10250 is in use

Solution

Killing the listed processes did not help; running kubeadm reset cleared the stale manifests and ports and allowed init to proceed:

[root@localhost ~]# kubeadm reset
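One of the listed errors, bridge-nf-call-iptables not set to 1, is not cleared by kubeadm reset and needs a sysctl fix of its own. A minimal sketch, assuming the standard /etc/sysctl.d path (the CONF variable is parameterized here only so the snippet can be dry-run without root):

```shell
# Write a sysctl drop-in so bridged pod traffic passes through iptables.
CONF="${CONF:-/etc/sysctl.d/k8s.conf}"
cat > "$CONF" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# To apply for real (as root): modprobe br_netfilter && sysctl --system
```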

5. coredns stuck in Pending

Problem

The master and nodes are installed and the nodes have joined the master,
but on the master coredns stays in Pending:

[root@master ~]# kubectl get pods --all-namespaces


Solution

Checking the nodes shows that both master and node are in NotReady state:

[root@master ~]# kubectl get nodes

Check the kubelet logs:

[root@master ~]# journalctl -f -u kubelet.service
edVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/5accb47d-53bc-42d2-81c0-394bc9a2efee-cni") pod "kube-flannel-ds-amd64-g9ql2" (UID: "5accb47d-53bc-42d2-81c0-394bc9a2efee")
Dec 17 23:37:42 master kubelet[20121]: I1217 23:37:42.498480   20121 reconciler.go:154] Reconciler: start to sync state
Dec 17 23:37:43 master kubelet[20121]: W1217 23:37:43.504742   20121 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Dec 17 23:37:47 master kubelet[20121]: E1217 23:37:47.255729   20121 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 17 23:37:48 master kubelet[20121]: W1217 23:37:48.505560   20121 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

The errors show that the network plugin is not ready.
Run docker images | grep flannel to check whether the flannel image has been pulled; it has not:

[root@master ~]# docker images |grep flannel

Pull the flannel image again:

[root@master ~]# docker pull quay.io/coreos/flannel:v0.11.0-amd64

If the official image cannot be downloaded, pull it from an Aliyun mirror and retag it:

[root@master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.11.0-amd64
[root@master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
[root@master ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.11.0-amd64

Re-run the following command to start the flannel containers:

[root@master home]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml

6. coredns still Pending after installing flannel

Problem

Check the kubelet logs:

[root@master ~]# journalctl -f -u kubelet.service
edVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/5accb47d-53bc-42d2-81c0-394bc9a2efee-cni") pod "kube-flannel-ds-amd64-g9ql2" (UID: "5accb47d-53bc-42d2-81c0-394bc9a2efee")
Dec 17 23:37:42 master kubelet[20121]: I1217 23:37:42.498480   20121 reconciler.go:154] Reconciler: start to sync state
Dec 17 23:37:43 master kubelet[20121]: W1217 23:37:43.504742   20121 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Dec 17 23:37:47 master kubelet[20121]: E1217 23:37:47.255729   20121 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 17 23:37:48 master kubelet[20121]: W1217 23:37:48.505560   20121 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

Solution

On the master, run kubeadm reset and initialize again:

[root@master ~]# kubeadm reset
[root@master ~]# kubeadm init --kubernetes-version=v1.16.0 --pod-network-cidr=172.22.0.0/16 --apiserver-advertise-address=192.168.3.100

The nodes must also be reset, have kubelet restarted, and then rejoin:

[root@node1 home]# kubeadm reset
[root@node1 home]# systemctl daemon-reload && systemctl restart kubelet
[root@node1 home]# kubeadm join 192.168.3.100:6443 --token vmuuvn.q7q14t5135zm9xk0 \
    --discovery-token-ca-cert-hash sha256:c302e2c93d2fe86be7f817534828224469a19c5ccbbf9b246f3695127c3ea611

Problem solved.

7. Kubernetes and Docker version compatibility issue

Problem

[root@master ~]# journalctl -f -u kubelet.service
Dec 28 09:52:55 mlopsmaster kubelet[1842]: E1228 09:52:55.524231    1842 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup

Solution

Execute the following on all nodes:

[root@master kubelet.service.d]# pwd
/usr/lib/systemd/system/kubelet.service.d
[root@master kubelet.service.d]# ls
10-kubeadm.conf

Edit 10-kubeadm.conf:
Add: Environment="KUBELET_MY_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
Modify ExecStart: append $KUBELET_MY_ARGS at the end
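After these edits, the drop-in would look roughly like this (a sketch; the exact list of `$KUBELET_*` variables in the ExecStart line depends on the kubeadm package version, so keep your existing line and only append `$KUBELET_MY_ARGS`):

```ini
# /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
[Service]
Environment="KUBELET_MY_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_MY_ARGS
```

The empty `ExecStart=` line is systemd's convention for clearing an earlier ExecStart before redefining it.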
After saving, restart the kubelet service:

[root@master kubelet.service.d]# systemctl daemon-reload
[root@master kubelet.service.d]# systemctl restart kubelet

8. A node cannot view pod status

Problem

[root@node2 ~]# kubectl get pod -n kubeflow
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Solution

kubectl needs to run as kubernetes-admin.
Copy /etc/kubernetes/admin.conf from the master to the same path on the worker node, then set the environment variable:

[root@node2 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
Make it take effect immediately:
[root@node2 ~]# source ~/.bash_profile
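A small variant of the export step above that stays idempotent when re-run (the PROFILE variable is parameterized here only for illustration):

```shell
# Append the KUBECONFIG export only if it is not already present,
# so repeated runs don't accumulate duplicate lines in the profile.
PROFILE="${PROFILE:-$HOME/.bash_profile}"
LINE='export KUBECONFIG=/etc/kubernetes/admin.conf'
grep -qxF "$LINE" "$PROFILE" 2>/dev/null || echo "$LINE" >> "$PROFILE"
```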

9. coredns stuck in ContainerCreating

Problem

After deleting the cluster, the node names were changed and k8s was reinstalled; after the nodes joined,
the pod list shows coredns stuck in ContainerCreating:

[root@master k8s]# kubectl describe po coredns-5c98db65d4-crd5h -n kube-system
e = Unknown desc = failed to set up sandbox container "0888e2b293742a71b1dbd0e473fc7e4c2f697be372f9c27820c14d7bb94f4830" network for pod "coredns-5c98db65d4-crd5h": NetworkPlugin cni failed to set up pod "coredns-5c98db65d4-crd5h_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.1.0.1/24

Solution

This is left-over state from renaming the nodes. Run the following on all nodes:

[root@master k8s]# rm -rf /var/lib/cni/flannel/* && rm -rf /var/lib/cni/networks/cbr0/* && ip link delete cni0
[root@master k8s]# rm -rf /var/lib/cni/networks/cni0/*

10. Containers cannot be deleted when uninstalling K8s

Problem

(screenshot omitted)

Solution

The kubelet process is still running; kill it, then delete the containers:

[root@node1 ~]# kill -9 4523


11. init fails after uninstalling and reinstalling

Problem

(screenshot omitted)

Solution

After uninstalling, K8s-related processes were still running and kept the ports occupied.
Kill those processes, then run init again.

————————————————
Copyright notice: this is an original article by CSDN blogger 「小魚快跑」, licensed under the CC 4.0 BY-SA agreement. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/reachyu/article/details/105263983
