This article was first published on my personal blog: https://blog.smile13.com/articles/2019/01/14/1547445838441.html
Installing and configuring the node machines (perform the same steps on every node)
1. Pull the required images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
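The repeated pull/tag pairs above can be collapsed into a loop. A minimal sketch (a dry run that only echoes the commands; remove the `echo` to actually pull and retag, assuming the same mirror registry and image list):

```shell
# Mirror prefix and the target prefix kubeadm expects (same as above)
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
TARGET=k8s.gcr.io

# image:tag pairs from the pull/tag commands above
IMAGES="kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 kube-scheduler:v1.13.1 kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24 coredns:1.2.6"

for img in $IMAGES; do
  # echo makes this a dry run; drop it to execute for real
  echo docker pull "$MIRROR/$img"
  echo docker tag "$MIRROR/$img" "$TARGET/$img"
done
```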
2. Run the join command to add the node to the cluster
kubeadm join k8s-cluster.smile13.com:6443 --token mk3jfk.tducuowrll39qun8 --discovery-token-ca-cert-hash sha256:a66bb6ff4f065bfc7918c67832f56892071575af0c2039aff20a4fcf25244aaf
###If you have forgotten the token, or it has expired, generate a new one:
>1. On the master, run: kubeadm token create
>2. Get the `sha256` hash of the CA certificate: openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
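The two pieces above assemble back into a full join command. A minimal sketch using the example token and hash from this post (on v1.13 you can also run `kubeadm token create --print-join-command` on the master to get the same line directly):

```shell
# Example values copied from the join command above; on a real master they
# come from `kubeadm token create` and the openssl pipeline.
API_SERVER=k8s-cluster.smile13.com:6443
TOKEN=mk3jfk.tducuowrll39qun8
CA_HASH=a66bb6ff4f065bfc7918c67832f56892071575af0c2039aff20a4fcf25244aaf

# Print the command to run on the new node (dry run)
echo "kubeadm join $API_SERVER --token $TOKEN --discovery-token-ca-cert-hash sha256:$CA_HASH"
```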
3. Removing a node
##Run on the master:
kubectl drain k8s07 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s07
##On k8s07
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
##Run on the other nodes
kubectl delete node k8s07
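The removal steps above can be collected into a single script. A dry-run sketch (echo only) that takes the node name as a parameter, assuming the same flannel/cni setup as this cluster:

```shell
# Node to remove; k8s07 is the example used above
NODE=${1:-k8s07}

# On the master (echo makes this a dry run; remove it to execute):
echo kubectl drain "$NODE" --delete-local-data --force --ignore-daemonsets
echo kubectl delete node "$NODE"

# Then on the node itself:
echo kubeadm reset
echo ip link delete cni0
echo ip link delete flannel.1
echo rm -rf /var/lib/cni/
```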
4. Enabling ipvs mode for kube-proxy (run on any master)
##Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs"
[root@k8s01 ~]# kubectl edit cm kube-proxy -n kube-system
##Restart all kube-proxy pods
[root@k8s01 ~]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-8r9bq" deleted
pod "kube-proxy-9k2zn" deleted
pod "kube-proxy-bv2bf" deleted
pod "kube-proxy-rkwg8" deleted
pod "kube-proxy-sq4lt" deleted
pod "kube-proxy-tvhkx" deleted
pod "kube-proxy-x6v57" deleted
##Check the kube-proxy status
[root@k8s01 ~]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-5r7fp 1/1 Running 0 31s
kube-proxy-895rz 1/1 Running 0 23s
kube-proxy-ggkrw 1/1 Running 0 19s
kube-proxy-gszff 1/1 Running 0 35s
kube-proxy-jl552 1/1 Running 0 60s
kube-proxy-n72bp 1/1 Running 0 83s
kube-proxy-pr7f9 1/1 Running 0 72s
[root@k8s01 ~]# kubectl logs kube-proxy-5r7fp -n kube-system
I0119 15:44:25.451787 1 server_others.go:189] Using ipvs Proxier.
W0119 15:44:25.452270 1 proxier.go:365] IPVS scheduler not specified, use rr by default
I0119 15:44:25.452458 1 server_others.go:216] Tearing down inactive rules.
I0119 15:44:25.503432 1 server.go:464] Version: v1.13.1
I0119 15:44:25.511221 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0119 15:44:25.511419 1 config.go:202] Starting service config controller
I0119 15:44:25.511433 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0119 15:44:25.511457 1 config.go:102] Starting endpoints config controller
I0119 15:44:25.511518 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0119 15:44:25.611687 1 controller_utils.go:1034] Caches are synced for service config controller
I0119 15:44:25.611695 1 controller_utils.go:1034] Caches are synced for endpoints config controller
##The log line "Using ipvs Proxier" confirms that ipvs mode is enabled.
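A quick way to check the mode from a script is to grep the log for that line. A minimal sketch using a sample line from the output above (on a live cluster, pipe `kubectl logs <pod-name> -n kube-system` into the same grep):

```shell
# Sample log line taken from the kube-proxy output above
LOG_LINE='I0119 15:44:25.451787 1 server_others.go:189] Using ipvs Proxier.'

# grep -q succeeds (exit 0) only if the ipvs line is present
if echo "$LOG_LINE" | grep -q 'Using ipvs Proxier'; then
  echo 'ipvs mode active'
fi
```

If the ipvsadm package is installed on the node, `ipvsadm -Ln` also lists the virtual and real servers that kube-proxy programmed, which gives a second confirmation.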
Copyright notice: this is an original post by the author; please credit the source when reposting!