After kubeadm init fails

  Initializing the master node with kubeadm init failed, printing the following error output:

[root@master1 ~]# kubeadm init --config=/etc/kubeadm/init.default.yaml 
[init] Using Kubernetes version: v1.23.15
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
...part of the output omitted...
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[kubelet-check] Initial timeout of 40s passed.
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: nodes "master1.localk8s" not found
To see the stack trace of this error execute with --v=5 or higher
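
  The "nodes \"master1.localk8s\" not found" error usually means the kubelet never registered the node under the name kubeadm derived (by default, the machine hostname). Before resetting, a few sanity checks can help; a minimal sketch, assuming the expected node name is master1.localk8s:

# Confirm the hostname matches the expected node name
hostnamectl status
# Confirm the name actually resolves locally (the /etc/hosts entry)
getent hosts master1.localk8s
# Check whether the kubelet is running; its logs often show why registration failed
systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager | tail -n 20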

  What is certain is that master1.localk8s was correctly configured in the host's /etc/hosts file. Re-initialization only succeeded after executing the following commands one at a time:

[root@master1 ~]# kubeadm reset
[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl restart kubelet
[root@master1 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
[root@master1 ~]# kubeadm init --config=/etc/kubeadm/init.default.yaml
[init] Using Kubernetes version: v1.23.15
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10259]: Port 10259 is in use
        [ERROR Port-10257]: Port 10257 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
---- Still unsuccessful (ports in use; the files generated by the previous init were not deleted) ----
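
  Before rebooting, it is worth checking what still holds those ports and which leftover files remain; an illustrative inspection (ss is from iproute2):

# See which processes are listening on the kubeadm ports
ss -tlnp | grep -E ':(6443|10250|10257|10259|2379|2380)'
# Inspect the leftovers from the previous init
ls /etc/kubernetes/manifests
ls /var/lib/etcd

  In this case the ports were most likely held by the static-pod containers left over from the first attempt, which the kubelet kept running; hence the reboot plus manual cleanup below.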
---- Reboot, and delete the corresponding files ----
[root@master1 ~]# reboot -h now
---- wait for the reboot to complete... ----
[root@master1 ~]# rm -rf /etc/kubernetes /var/lib/etcd
[root@master1 ~]# kubeadm reset
[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl restart kubelet
[root@master1 ~]# kubeadm init --config=/etc/kubeadm/init.default.yaml
[init] Using Kubernetes version: v1.23.15
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
...part of the output omitted...
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.17.3:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:9ee892538504ef34f9fe053be22011b2fdac2ad4ab634ee95fdef8f497f86279
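
  With the control plane up, the remaining step is the pod network add-on the output asks for. As one illustrative option (Flannel; the manifest URL below is the upstream project's, not taken from this transcript):

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl get nodes   # the node moves from NotReady to Ready once the add-on is running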
