Notes on installing Kubernetes master and worker nodes with kubeadm on CentOS 7.6, and pitfalls encountered

Original post on my personal blog: http://www.lampnick.com/php/760

Goals of this article

Install Docker and configure a Docker proxy
Install kubeadm
Initialize the k8s master node with kubeadm
Install the weave-kube network plugin
Deploy a Kubernetes worker node
Deploy kubernetes-dashboard
Deploy the monitoring component prometheus-operator (https://github.com/coreos/prometheus-operator)

Environment

macOS host with VirtualBox
VM configuration:
CPU: 2 cores
Memory: 1 GB
Disk: 8 GB

Prerequisites

For pulling code through a proxy on Linux (to get around network restrictions), see: https://blog.liuguofeng.com/p/4010

1. Configure Shadowsocks to start on boot

[root@centos7vm ~]# vim /etc/systemd/system/shadowsocks.service, with the following content:

[Unit]
Description=Shadowsocks
[Service]
TimeoutStartSec=0
ExecStart=/usr/bin/sslocal -c /etc/shadowsocks.json 
[Install]
WantedBy=multi-user.target

Then run the following commands in the shell:
[root@centos7vm ~]#  systemctl enable shadowsocks.service
[root@centos7vm ~]#  systemctl start shadowsocks.service
[root@centos7vm ~]#  systemctl status shadowsocks.service
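Before moving on, it is worth verifying that the proxy chain works end to end. A minimal check, assuming the local HTTP proxy on port 8118 (the one configured for Docker in step 3 below) forwards traffic through Shadowsocks:

[root@centos7vm ~]# curl -x http://127.0.0.1:8118 -I https://k8s.gcr.io
A successful HTTP response here means blocked registries are reachable through the proxy.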

2. Disable SELinux and the firewall (otherwise you will run into permission problems)

[root@centos7vm ~]# sestatus
SELinux status: enabled
So disable SELinux (setenforce 0 did not work on my CentOS 7.6):
[root@centos7vm ~]# vim /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
A reboot is required for the change to take effect.
[root@centos7vm ~]# sestatus
SELinux status: disabled

[root@centos7vm ~]# systemctl stop firewalld
[root@centos7vm ~]# systemctl disable firewalld
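If you prefer a non-interactive edit, the same SELinux change can be scripted; this sed one-liner is equivalent to the manual edit above:

[root@centos7vm ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
[root@centos7vm ~]# reboot
After the reboot, confirm with sestatus as shown above.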

3. Install Docker and configure a Docker proxy

[root@centos7vm ~]# yum -y install docker
[root@centos7vm ~]# mkdir -p /etc/systemd/system/docker.service.d
[root@centos7vm ~]# vim /etc/systemd/system/docker.service.d/http-proxy.conf
Add:
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:8118" "NO_PROXY=localhost,172.16.0.0/16,127.0.0.1,10.244.0.0/16"
[root@centos7vm ~]# vim /etc/systemd/system/docker.service.d/https-proxy.conf
Add:
[Service]
Environment="HTTPS_PROXY=http://127.0.0.1:8118" "NO_PROXY=localhost,172.16.0.0/16,127.0.0.1,10.244.0.0/16"
Reload systemd and restart Docker:
[root@centos7vm ~]#  systemctl daemon-reload && systemctl restart docker
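To confirm Docker actually picked up the proxy drop-in files, inspect the service environment and try a test pull (both are standard systemd/docker commands):

[root@centos7vm ~]# systemctl show --property=Environment docker
[root@centos7vm ~]# docker pull busybox
The first command should echo the HTTP_PROXY/HTTPS_PROXY values configured above; a successful pull means the proxy works end to end.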

4. Install kubeadm

Configure the yum repo for kubeadm:
[root@centos7vm ~]# vim /etc/yum.repos.d/kubernetes.repo
Add:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
Install kubeadm:
[root@centos7vm ~]# yum install -y kubeadm
This step automatically pulls in kubectl, kubernetes-cni, and kubelet as dependencies.
Check the versions:
[root@centos7vm ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@centos7vm ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@centos7vm ~]# kubelet --version
Kubernetes v1.14.1
Enable kubelet to start on boot and start it:
[root@centos7vm ~]#  systemctl enable kubelet.service && systemctl start kubelet
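Optionally, you can pre-pull the control-plane images before initializing, which makes image problems surface early (kubeadm's own init output below also suggests this; both subcommands exist in kubeadm 1.14):

[root@centos7vm ~]# kubeadm config images list
[root@centos7vm ~]# kubeadm config images pull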

5. Initialize the k8s master node with kubeadm

[root@centos7vm ~]# kubeadm init
I0424 05:44:25.902645 4348 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0424 05:44:25.902789 4348 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [centos7vm kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.0.222]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [centos7vm localhost] and IPs [172.16.0.222 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [centos7vm localhost] and IPs [172.16.0.222 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.503495 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node centos7vm as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node centos7vm as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: smz672.4uxlpw056eykpqi3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.0.222:6443 --token smz672.4uxlpw056eykpqi3 \
 --discovery-token-ca-cert-hash sha256:8c0b46999ce4cb50f9add92c7c6b28b15fdfeec49c9b09e605e227e667bc0e6b 
The output above means initialization succeeded. The kubeadm join command at the end is what you use to add worker nodes to this master. You will need it shortly when deploying the worker nodes, so record it somewhere.

6. Run the configuration commands suggested by the kubeadm success message

These configuration commands are needed because a Kubernetes cluster requires authenticated (encrypted) access by default. They copy the security configuration file generated during deployment into the current user's .kube directory, where kubectl looks for its credentials by default when accessing the cluster.
Without them, you would have to tell kubectl where the security configuration file lives on every invocation via the KUBECONFIG environment variable.
[root@centos7vm ~]# mkdir -p $HOME/.kube
[root@centos7vm ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@centos7vm ~]# chown $(id -u):$(id -g) $HOME/.kube/config
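Alternatively, for a one-off root session you can skip the copy and point kubectl at the admin kubeconfig directly, which is exactly the KUBECONFIG approach mentioned above:

[root@centos7vm ~]# export KUBECONFIG=/etc/kubernetes/admin.conf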

[root@centos7vm ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
centos7vm NotReady master 28m v1.14.1
If you skip the commands above, kubectl get nodes fails with the following error:
[root@centos7vm ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

7. Debugging

As you can see in the get output, the centos7vm node's status is NotReady. Why is that?
The most important technique when debugging a Kubernetes cluster is kubectl describe, which shows a node object's details, status, and events. Let's try it:
[root@centos7vm ~]# kubectl describe node centos7vm
.....

Conditions:
 Type Status LastHeartbeatTime LastTransitionTime Reason Message
 ---- ------ ----------------- ------------------ ------ -------
 MemoryPressure False Wed, 24 Apr 2019 06:40:46 -0400 Wed, 24 Apr 2019 05:44:38 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available
 DiskPressure False Wed, 24 Apr 2019 06:40:46 -0400 Wed, 24 Apr 2019 05:44:38 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure
 PIDPressure False Wed, 24 Apr 2019 06:40:46 -0400 Wed, 24 Apr 2019 05:44:38 -0400 KubeletHasSufficientPID kubelet has sufficient PID available
 Ready False Wed, 24 Apr 2019 06:40:46 -0400 Wed, 24 Apr 2019 05:44:38 -0400 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
......
As you can see, the node is NotReady because we have not yet deployed any network plugin.
We can also use kubectl to check the status of the individual system Pods on this node. kube-system is the namespace Kubernetes reserves for system Pods (note that a Kubernetes Namespace is not a Linux namespace; it is merely the unit Kubernetes uses to partition workspaces):
[root@centos7vm ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-swp2s 0/1 Pending 0 65m
coredns-fb8b8dccf-wcftx 0/1 Pending 0 65m
etcd-centos7vm 1/1 Running 0 64m
kube-apiserver-centos7vm 1/1 Running 0 64m
kube-controller-manager-centos7vm 1/1 Running 0 64m
kube-proxy-xhlxf 1/1 Running 0 65m
kube-scheduler-centos7vm 1/1 Running 0 64m
As you can see, the network-dependent Pods, i.e. the two CoreDNS replicas, are stuck in Pending, meaning scheduling failed. That is expected: this master node's network is not ready yet.
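To watch these Pods change state live once the network plugin is installed in the next step, kubectl's standard watch flag is handy:

[root@centos7vm ~]# kubectl get pods -n kube-system -w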

8. Install the weave-kube network plugin

[root@centos7vm ~]# kubectl apply -f https://git.io/weave-kube-1.6
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
Wait a moment, then check the Pod status again:
[root@centos7vm ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-swp2s 1/1 Running 0 83m
coredns-fb8b8dccf-wcftx 1/1 Running 0 83m
etcd-centos7vm 1/1 Running 0 82m
kube-apiserver-centos7vm 1/1 Running 0 82m
kube-controller-manager-centos7vm 1/1 Running 0 82m
kube-proxy-xhlxf 1/1 Running 0 83m
kube-scheduler-centos7vm 1/1 Running 0 82m
weave-net-8s4cl 2/2 Running 0 2m5s
As you can see, all the system Pods started successfully, and the freshly deployed Weave network plugin created a Pod named weave-net-8s4cl under kube-system. Generally speaking, such Pods are the container network plugin's control component on each node.
Kubernetes supports container network plugins through a common interface called CNI, which is also the de facto standard for container networking today. Essentially every open-source container network project, such as Flannel, Calico, Canal, and Romana, can plug into Kubernetes via CNI, and they are all deployed in much the same way.
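One way to see CNI at work: once a network plugin is installed, it writes its configuration into /etc/cni/net.d on each node, the same directory the kubelet complains about while the node is still NotReady (see problem 2 below):

[root@centos7vm ~]# ls /etc/cni/net.d/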

At this point, the Kubernetes master node is fully deployed. If a single-node Kubernetes cluster is all you need, it is usable right now. Note, however, that by default the master node cannot run user Pods, so one extra small operation is needed, as shown below.
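The small operation in question is removing the NoSchedule taint that kubeadm put on the master during init (visible in the init output above). A sketch; the trailing minus removes the taint:

[root@centos7vm ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
Only do this if you really want user Pods scheduled onto the master, e.g. on a single-node cluster.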

9. Deploy a Kubernetes worker node

A Kubernetes worker node is almost identical to the master: both run a kubelet. The only difference is that during kubeadm init, once the kubelet is up, the master additionally runs the kube-apiserver, kube-scheduler, and kube-controller-manager system Pods.
Deploying a worker node is therefore the simplest part and takes only two steps. First, on every worker node, repeat all the steps for installing Docker and kubeadm above. Second, run the kubeadm join command generated when the master was deployed:
kubeadm join 172.16.0.222:6443 --token smz672.4uxlpw056eykpqi3 \
 --discovery-token-ca-cert-hash sha256:8c0b46999ce4cb50f9add92c7c6b28b15fdfeec49c9b09e605e227e667bc0e6b
If the worker is not joined within 24 hours of installing the master, the token expires and a new one must be generated:
kubeadm token create    # generate a new token
kubeadm token list      # list existing tokens
Then join with kubeadm join as above.
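A convenient shortcut supported by this kubeadm version prints a complete join command, including the CA cert hash, in one step:

[root@centos7vm ~]# kubeadm token create --print-join-command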

10. Deploy kubernetes-dashboard

[root@centos7vm manifests]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml
Edit the downloaded kubernetes-dashboard.yaml: change the RoleBinding to a ClusterRoleBinding, and change the kind and name in roleRef to use cluster-admin, the all-powerful ClusterRole (superuser privileges, with full access to kube-apiserver). Like this:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
[root@centos7vm manifests]# kubectl apply -f kubernetes-dashboard.yaml 
After the deployment finishes, check the Pod status:
[root@centos7vm ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-swp2s 1/1 Running 3 16h
coredns-fb8b8dccf-wcftx 1/1 Running 3 16h
etcd-centos7vm 1/1 Running 3 16h
kube-apiserver-centos7vm 1/1 Running 3 16h
kube-controller-manager-centos7vm 1/1 Running 4 16h
kube-proxy-hpz9k 1/1 Running 0 13h
kube-proxy-xhlxf 1/1 Running 3 16h
kube-scheduler-centos7vm 1/1 Running 3 16h
kubernetes-dashboard-5f7b999d65-c759c 1/1 Running 0 2m34s
weave-net-8s4cl 2/2 Running 9 15h
weave-net-d67vs 2/2 Running 1 13h

To create a dashboard user, see: https://github.com/kubernetes/dashboard/wiki/Creating-sample-user
Once the user is created, you must create a secure channel to the cluster to access the Dashboard from your local workstation. Run the following command:
[root@centos7vm manifests]# kubectl proxy --address=0.0.0.0 --disable-filter=true &
Then open the following URL in a browser:
http://172.16.0.222:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
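Logging in requires the bearer token of the dashboard user. Assuming you followed the linked guide and created the sample admin-user ServiceAccount in kube-system, this sketch extracts its token:

[root@centos7vm manifests]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $2}')
Copy the token: value from the output into the dashboard's token login field.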

11. Deploy the monitoring component prometheus-operator (https://github.com/coreos/prometheus-operator)

[root@centos7vm ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/bundle.yaml
Note: make sure to adapt the namespace in the ClusterRoleBinding if deploying in a namespace other than the default namespace.
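The bundle deploys the operator into the default namespace unless you adapted it; a quick sanity check (standard kubectl, resource name assumed from the upstream bundle):

[root@centos7vm ~]# kubectl get pods -n default | grep prometheus-operator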
Dashboard after the deployment completes:

[screenshot: kubernetes-deployment]

Problems encountered:

Problem 1: connection timeouts when docker pulls images. The fix is to configure a Docker proxy, as in the step 3 proxy setup above.
[root@centos7vm ~]# kubeadm init
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
 [WARNING HTTPProxy]: Connection to "https://172.16.0.222" uses proxy "http://127.0.0.1:8118". If that is not intended, adjust your proxy settings
 [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
 [WARNING Hostname]: hostname "centos7vm" could not be reached
 [WARNING Hostname]: hostname "centos7vm": lookup centos7vm on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-apiserver ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.203.82:443: i/o timeout
, error: exit status 1
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-controller-manager ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: i/o timeout
, error: exit status 1
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-scheduler ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: i/o timeout
, error: exit status 1
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-proxy ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.203.82:443: i/o timeout
, error: exit status 1
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Trying to pull repository k8s.gcr.io/pause ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
, error: exit status 1
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Trying to pull repository k8s.gcr.io/etcd ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
, error: exit status 1
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Trying to pull repository k8s.gcr.io/coredns ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Problem 2: kubeadm init reports that the kubelet is not started

[root@centos7vm ~]# kubeadm init
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
 [WARNING HTTPProxy]: Connection to "https://172.16.0.222" uses proxy "http://127.0.0.1:8118". If that is not intended, adjust your proxy settings
 [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
 [WARNING Hostname]: hostname "centos7vm" could not be reached
 [WARNING Hostname]: hostname "centos7vm": lookup centos7vm on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [centos7vm kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.0.222]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [centos7vm localhost] and IPs [172.16.0.222 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [centos7vm localhost] and IPs [172.16.0.222 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
 timed out waiting for the condition

This error is likely caused by:
 - The kubelet is not running
 - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
 - 'systemctl status kubelet'
 - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
 - 'docker ps -a | grep kube | grep -v pause'
 Once you have found the failing container, you can inspect its logs with:
 - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

So check the kubelet status via systemctl status kubelet:
[root@centos7vm ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
 Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
 Drop-In: /usr/lib/systemd/system/kubelet.service.d
 └─10-kubeadm.conf
 Active: active (running) since Wed 2019-04-24 01:03:57 EDT; 9min ago
 Docs: https://kubernetes.io/docs/
 Main PID: 13947 (kubelet)
 CGroup: /system.slice/kubelet.service
 └─13947 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --confi...

Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.528123 13947 kubelet.go:2244] node "centos7vm" not found
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.556646 13947 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed... refused
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.636400 13947 kubelet.go:2244] node "centos7vm" not found
Apr 24 01:13:23 centos7vm kubelet[13947]: I0424 01:13:23.636889 13947 kubelet_node_status.go:283] Setting node annotation to enable volum...h/detach
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.637776 13947 controller.go:115] failed to ensure node lease exists, will retry i... refused
Apr 24 01:13:23 centos7vm kubelet[13947]: I0424 01:13:23.639660 13947 kubelet_node_status.go:72] Attempting to register node centos7vm
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.699581 13947 kubelet_node_status.go:94] Unable to register node "centos7vm" with... refused
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.737079 13947 kubelet.go:2244] node "centos7vm" not found
Apr 24 01:13:23 centos7vm kubelet[13947]: W0424 01:13:23.799325 13947 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.800451 13947 kubelet.go:2170] Container runtime network not ready: NetworkReady=...tialized
Hint: Some lines were ellipsized, use -l to show in full.

Check the container states with docker ps -a | grep kube | grep -v pause:
[root@centos7vm ~]# docker ps -a | grep kube | grep -v pause
a7bd1c323bfa 2c4adeb21b4f "etcd --advertise-..." 3 minutes ago Exited (1) 2 minutes ago k8s_etcd_etcd-centos7vm_kube-system_0298d5694df46086cda3a73b7025fd1a_6
4ff843a990ac efb3887b411d "kube-controller-m..." 3 minutes ago Exited (1) 3 minutes ago k8s_kube-controller-manager_kube-controller-manager-centos7vm_kube-system_b9130a6f5c1174f73db1e98992b49b1c_6
0ae5b5dd5df4 cfaa4ad74c37 "kube-apiserver --..." 3 minutes ago Exited (1) 3 minutes ago k8s_kube-apiserver_kube-apiserver-centos7vm_kube-system_c125074e5c436480a1e85165a5af5b9a_6
484743a36cae 8931473d5bdb "kube-scheduler --..." 8 minutes ago Up 8 minutes k8s_kube-scheduler_kube-scheduler-centos7vm_kube-system_f44110a0ca540009109bfc32a7eb0baa_0
Inspect the logs with docker logs a7bd1c323bfa; it turns out access to /etc/kubernetes/pki is denied:
[root@centos7vm ~]# docker logs a7bd1c323bfa
2019-04-24 07:05:54.108645 I | etcdmain: etcd Version: 3.3.10
2019-04-24 07:05:54.108695 I | etcdmain: Git SHA: 27fc7e2
2019-04-24 07:05:54.108698 I | etcdmain: Go Version: go1.10.4
2019-04-24 07:05:54.108700 I | etcdmain: Go OS/Arch: linux/amd64
2019-04-24 07:05:54.108703 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-04-24 07:05:54.108747 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = 
2019-04-24 07:05:54.109323 C | etcdmain: open /etc/kubernetes/pki/etcd/peer.crt: permission denied

The cause turned out to be SELinux (an article I came across pointed me in this direction).
The solution is as follows:
[root@centos7vm ~]# sestatus
SELinux status: enabled
So disable SELinux (setenforce 0 did not work on my CentOS 7.6):
[root@centos7vm ~]# vim /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
A reboot is required for the change to take effect.
[root@centos7vm ~]# sestatus
SELinux status: disabled

[root@centos7vm ~]# kubeadm reset
[root@centos7vm ~]# kubeadm init
That resolved the problem.

When reposting, please credit: lampNick » Notes on installing Kubernetes master and worker nodes with kubeadm on CentOS 7.6, and pitfalls encountered
