Original article (in Chinese): https://www.kubernetes.org.cn/5551.html
kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and each release adjusts some of kubeadm's cluster-configuration practices, so experimenting with kubeadm is a good way to learn the Kubernetes project's latest best practices for cluster configuration.
In the recently released Kubernetes 1.15, kubeadm's HA cluster setup has reached beta, a sign that kubeadm is getting ever closer to being production-ready.
Kubernetes cluster components:
etcd - a highly available key/value store and service-discovery system
flannel - provides cross-host container networking
kube-apiserver - exposes the Kubernetes cluster API
kube-controller-manager - runs the controllers that keep the cluster in its desired state
kube-scheduler - schedules containers (Pods) onto Nodes
kubelet - starts containers on each Node according to the Pod specs it is given
kube-proxy - provides network proxying for Services
1. Environment
Hostname | IP address |
k8s-master | 192.168.1.21 |
k8s-node1 | 192.168.1.24 |
k8s-node2 | 192.168.1.25 |
k8s-node3 | 192.168.1.26 |
1.1 Operating system: CentOS 7.6
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
1.2 Kubernetes version: v1.15.0
kube-apiserver v1.15.0
kube-controller-manager v1.15.0
kube-proxy v1.15.0
kube-scheduler v1.15.0
etcd 3.3.10
pause 3.1
coredns 1.3.1
2. Preparation
2.1 System configuration
Before installing, do the following preparation on all four CentOS 7.6 hosts.
Upgrade the system:
# yum -y update
Configure /etc/hosts:
# cat /etc/hosts
127.0.0.1 localhost
192.168.1.21 k8s-master
192.168.1.24 k8s-node1
192.168.1.25 k8s-node2
192.168.1.26 k8s-node3
If a firewall is enabled on any host, open the ports required by the various Kubernetes components; see the "Check required ports" section of the official Installing kubeadm document. For simplicity, here we disable the firewall on every node:
# systemctl stop firewalld
# systemctl disable firewalld
Disable SELinux:
# setenforce 0
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Create /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Apply the changes:
# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/k8s.conf
2.2 Prerequisites for enabling ipvs in kube-proxy
Since ipvs is already part of the mainline kernel, enabling ipvs mode for kube-proxy only requires loading the following kernel modules:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
Run the following on every Kubernetes node:
# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
This creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the modules are loaded.
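The modules file above is one modprobe line per module, so it can also be generated from a list. A minimal sketch (writing to a temporary path instead of /etc/sysconfig/modules/ipvs.modules, and assuming the same five modules):

```shell
#!/bin/sh
# Generate an ipvs.modules-style script from a module list.
out=$(mktemp)                       # stand-in for /etc/sysconfig/modules/ipvs.modules
printf '#!/bin/bash\n' > "$out"
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  printf 'modprobe -- %s\n' "$m" >> "$out"
done
chmod 755 "$out"
cat "$out"
```

Extending the module list later (e.g. for other ipvs schedulers) then only means adding a name to the loop.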
Next, make sure the ipset package is installed on every node:
# yum -y install ipset
To make it easier to inspect ipvs proxy rules, it is also worth installing the management tool ipvsadm:
# yum -y install ipvsadm
If these prerequisites are not met, kube-proxy will fall back to iptables mode even if its configuration enables ipvs mode.
2.3 Installing Docker
Kubernetes has used the CRI (Container Runtime Interface) since 1.6. The default container runtime is still Docker, via the dockershim CRI implementation built into the kubelet.
Add the Docker yum repository:
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
List the available Docker versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
docker-ce.x86_64  3:18.09.7-3.el7          docker-ce-stable
docker-ce.x86_64  3:18.09.6-3.el7          docker-ce-stable
docker-ce.x86_64  3:18.09.5-3.el7          docker-ce-stable
docker-ce.x86_64  3:18.09.4-3.el7          docker-ce-stable
docker-ce.x86_64  3:18.09.3-3.el7          docker-ce-stable
docker-ce.x86_64  3:18.09.2-3.el7          docker-ce-stable
docker-ce.x86_64  3:18.09.1-3.el7          docker-ce-stable
docker-ce.x86_64  3:18.09.0-3.el7          docker-ce-stable
docker-ce.x86_64  18.06.3.ce-3.el7         docker-ce-stable
docker-ce.x86_64  18.06.2.ce-3.el7         docker-ce-stable
docker-ce.x86_64  18.06.1.ce-3.el7         docker-ce-stable
docker-ce.x86_64  18.06.0.ce-3.el7         docker-ce-stable
docker-ce.x86_64  18.03.1.ce-1.el7.centos  docker-ce-stable
docker-ce.x86_64  18.03.0.ce-1.el7.centos  docker-ce-stable
Kubernetes 1.15 currently supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09. Here we install Docker 18.09.7 on every node:
# yum makecache fast
# yum install -y --setopt=obsoletes=0 docker-ce
# systemctl start docker
# systemctl enable docker
To install a specific version of Docker instead:
# yum install -y --setopt=obsoletes=0 \
    docker-ce-18.09.7-3.el7
Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:
# iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0
2.4 Changing the Docker cgroup driver to systemd
According to the CRI installation documentation, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver makes nodes more stable under resource pressure, so here we change Docker's cgroup driver to systemd on every node.
Create or edit /etc/docker/daemon.json:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
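daemon.json must be valid JSON or Docker will refuse to start, so it is worth validating before restarting. A quick sketch using Python's stdlib json.tool (assuming python3 is available on the host):

```shell
#!/bin/sh
# Sanity-check the daemon.json content before restarting Docker.
cfg='{ "exec-opts": ["native.cgroupdriver=systemd"] }'
pretty=$(echo "$cfg" | python3 -m json.tool) || exit 1
echo "$pretty"
```

If the JSON is malformed, json.tool exits non-zero and prints the parse error, which is easier to debug than a failed `systemctl restart docker`.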
* If the machine reaches the Internet through a proxy, configure an HTTP proxy for Docker:
# mkdir /etc/systemd/system/docker.service.d
# vim /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.1.1:3128"
Restart Docker and verify:
# systemctl daemon-reload
# systemctl restart docker
# docker info | grep Cgroup
Cgroup Driver: systemd
# systemctl show docker --property Environment
3. Deploying Kubernetes with kubeadm
3.1 Installing kubeadm and kubelet
Master configuration
Install kubeadm and kubelet.
3.1.1 Configure the kubernetes.repo yum repository. The official repository is not reachable from inside China, so we use the Aliyun mirror instead:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need a proxy, or use the mirror:
# curl https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
# yum -y makecache fast
# yum install -y kubelet kubeadm kubectl
...
Installed:
  kubeadm.x86_64 0:1.15.0-0    kubectl.x86_64 0:1.15.0-0    kubelet.x86_64 0:1.15.0-0
Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-4.el7    cri-tools.x86_64 0:1.12.0-0    kubernetes-cni.x86_64 0:0.7.5-0
  libnetfilter_cthelper.x86_64 0:1.0.0-9.el7    libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7    libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
The install output shows that three additional dependencies were pulled in: cri-tools, kubernetes-cni and socat.
Since Kubernetes 1.14, the cni dependency has been upgraded to version 0.7.5
socat is a dependency of kubelet
cri-tools is the command-line tool for the CRI (Container Runtime Interface)
Running kubelet --help shows that most of kubelet's command-line flags are now DEPRECATED, for example:
......
--address 0.0.0.0   The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
......
Instead, the recommended approach is to pass a configuration file with --config and set these options there; see Set Kubelet parameters via a config file. Kubernetes does this to support Dynamic Kubelet Configuration; see Reconfigure a Node's Kubelet in a Live Cluster.
The kubelet configuration file must be in JSON or YAML format.
Since Kubernetes 1.8, the kubelet requires system swap to be disabled; with the default configuration the kubelet will not start otherwise. Disable swap as follows:
# swapoff -a
Edit /etc/fstab and comment out the swap mount line:
# UUID=2d1e946c-f45d-4516-86cf-946bde9bdcd8 swap swap defaults 0 0
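Commenting out the swap entry can be scripted instead of edited by hand. A minimal sketch using sed on a copy of fstab (the sample file content is illustrative; check the pattern against your real /etc/fstab before applying it there):

```shell
#!/bin/sh
# Comment out any uncommented fstab line whose filesystem type is swap.
fstab=$(mktemp)                      # stand-in for /etc/fstab
cat > "$fstab" <<'EOF'
UUID=2d1e946c-f45d-4516-86cf-946bde9bdcd8 swap swap defaults 0 0
/dev/sda1 / xfs defaults 0 0
EOF
sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/# \1/' "$fstab"
cat "$fstab"
```

Only the swap line gets the leading `# `; other mounts are untouched.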
Use free -m to confirm that swap is off. Also adjust the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:
vm.swappiness=0
Apply the change:
# sysctl -p /etc/sysctl.d/k8s.conf
3.2 Initializing the cluster with kubeadm init
Enable the kubelet service at boot:
systemctl enable kubelet.service
Configure the master node:
# mkdir working && cd working
Generate the default configuration file:
# kubeadm config print init-defaults > kubeadm.yaml
Edit the configuration file:
# vim kubeadm.yaml
# change imageRepository from k8s.gcr.io to:
imageRepository: registry.aliyuncs.com/google_containers
# set the Kubernetes version:
kubernetesVersion: v1.15.0
# set the master IP:
advertiseAddress: 192.168.1.21
# configure the cluster networks:
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
A cluster initialized with kubeadm's defaults places the node-role.kubernetes.io/master:NoSchedule taint on the master node, preventing the master from running regular workloads. Since this test environment has only two nodes, change this taint to node-role.kubernetes.io/master:PreferNoSchedule.
Before initializing the cluster, you can run kubeadm config images pull on each node to pre-pull the Docker images Kubernetes needs.
Next, initialize the cluster with kubeadm. Here node1 is chosen as the master node; run the following on node1:
# kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap
..........
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.21:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
Note: save this kubeadm join command; it is needed later to add nodes to the cluster:
kubeadm join 192.168.1.21:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
The output above records the complete initialization, and from it you can see the key steps a manual Kubernetes installation would require:
[kubelet-start] generates the kubelet configuration file /var/lib/kubelet/config.yaml
[certs] generates the various certificates
[kubeconfig] generates the related kubeconfig files
[control-plane] creates static pods for the apiserver, controller-manager and scheduler from the YAML files in /etc/kubernetes/manifests
[bootstraptoken] generates the bootstrap token; record it, as it is needed later when adding nodes with kubeadm join
The following commands configure kubectl access to the cluster for a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Finally, the output gives the command for joining nodes to the cluster:
kubeadm join 192.168.1.21:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
Check the cluster status and confirm that every component is healthy:
# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
If cluster initialization runs into problems, clean up with the following commands:
# kubeadm reset
# ifconfig cni0 down
# ip link delete cni0
# ifconfig flannel.1 down
# ip link delete flannel.1
# rm -rf /var/lib/cni/
3.3 Installing the Pod network
Next, install the flannel network add-on:
# mkdir -p ~/k8s/
# cd ~/k8s
# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Note that the flannel image referenced in kube-flannel.yml is version 0.11.0: quay.io/coreos/flannel:v0.11.0-amd64.
If a node has multiple network interfaces, then per flannel issue 39701 you currently need to use the --iface argument in kube-flannel.yml to specify the name of the host's internal interface, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
......
Use kubectl get pod --all-namespaces -o wide to ensure all Pods are in the Running state.
# kubectl get pod -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-dr8lf        1/1     Running   0          52m
coredns-5c98db65d4-lp8dg        1/1     Running   0          52m
etcd-node1                      1/1     Running   0          51m
kube-apiserver-node1            1/1     Running   0          51m
kube-controller-manager-node1   1/1     Running   0          51m
kube-flannel-ds-amd64-mm296     1/1     Running   0          44s
kube-proxy-kchkf                1/1     Running   0          52m
kube-scheduler-node1            1/1     Running   0          51m
3.4 Testing cluster DNS
# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-5cc7b478b6-r997p:/ ]$
Note: the curl pod may stay in the Pending state with the following error:
0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Fix: remove the master taint:
# kubectl taint nodes --all node-role.kubernetes.io/master-
Inside the container, run nslookup kubernetes.default to confirm that resolution works:
$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
Node configuration
Add the Docker yum repository and install Docker:
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# yum install -y --setopt=obsoletes=0 docker-ce
Install kubeadm and kubelet.
Configure the kubernetes.repo yum repository; as on the master, use the Aliyun mirror since the official repository is not reachable from inside China:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need a proxy, or use the mirror:
# curl https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
# yum -y makecache fast
# yum install -y kubelet kubeadm kubectl
...
Installed:
  kubeadm.x86_64 0:1.15.0-0    kubectl.x86_64 0:1.15.0-0    kubelet.x86_64 0:1.15.0-0
Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-4.el7    cri-tools.x86_64 0:1.12.0-0    kubernetes-cni.x86_64 0:0.7.5-0
  libnetfilter_cthelper.x86_64 0:1.0.0-9.el7    libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7    libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
Disable swap:
# swapoff -a
Edit /etc/fstab and comment out the swap mount line:
# UUID=2d1e946c-f45d-4516-86cf-946bde9bdcd8 swap swap defaults 0 0
Use free -m to confirm that swap is off. Then create /etc/sysctl.d/k8s.conf with the following content, including the swappiness adjustment:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
Apply the changes:
# sysctl -p /etc/sysctl.d/k8s.conf
Now add the worker node to the Kubernetes cluster. On node2, run:
# kubeadm join 192.168.1.21:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e \
    --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
node2 joined the cluster without trouble. On the master node, list the cluster's nodes:
# kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   57m   v1.15.0
node2   Ready    <none>   11s   v1.15.0
Removing a Node from the cluster
To remove node2 from the cluster, run the following commands.
On the master node:
# kubectl drain node2 --delete-local-data --force --ignore-daemonsets
# kubectl delete node node2
On node2:
# kubeadm reset
# ifconfig cni0 down
# ip link delete cni0
# ifconfig flannel.1 down
# ip link delete flannel.1
# rm -rf /var/lib/cni/
Join error:
# kubeadm join ......
error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s
Cause: the master node's token has expired.
Fix: generate a new token.
Regenerate the token on the master:
# kubeadm token create
424mp7.nkxx07p940mkl2nd
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
d88fb55cb1bd659023b11e61052b39bbfe99842b0636574a16c76df186fd5e0d
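The openssl pipeline above hashes the DER-encoded public key of the cluster's CA certificate, which is the value kubeadm expects after "sha256:". A self-contained sketch that generates a throwaway certificate and computes the same style of hash (the file paths and subject are illustrative):

```shell
#!/bin/sh
# Generate a throwaway self-signed cert, then compute the sha256 hash of its
# DER-encoded public key, as used for --discovery-token-ca-cert-hash.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

On a real cluster, point the `-in` argument at /etc/kubernetes/pki/ca.crt instead.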
Then re-run kubeadm join on the node:
kubeadm join 192.168.1.21:6443 --token 424mp7.nkxx07p940mkl2nd \
    --discovery-token-ca-cert-hash sha256:d88fb55cb1bd659023b11e61052b39bbfe99842b0636574a16c76df186fd5e0d
4. Enabling ipvs in kube-proxy
Edit the kube-proxy ConfigMap in kube-system and set mode: "ipvs" in config.conf:
# kubectl edit cm kube-proxy -n kube-system
    minSyncPeriod: 0s
    scheduler: ""
    syncPeriod: 30s
  kind: KubeProxyConfiguration
  metricsBindAddress: 127.0.0.1:10249
  mode: "ipvs"          # add this
  nodePortAddresses: null
mode was originally empty, which defaults to iptables mode; change it to ipvs.
scheduler is also empty by default, meaning the default load-balancing algorithm, round-robin.
Save and exit the editor.
Delete all the kube-proxy pods so that they restart with the new configuration. Either delete them one by one:
# kubectl delete pod xxx -n kube-system
or restart every kube-proxy pod at once:
# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
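The awk one-liner builds one kubectl delete command per matching line of the pod listing. Its mechanics can be seen offline by feeding it sample kubectl output and printing the commands instead of executing them (the pod names are made up for illustration):

```shell
#!/bin/sh
# Simulate "kubectl get pod" output and show the delete commands the awk
# pipeline would generate (print instead of system(), so nothing is executed).
cmds=$(printf '%s\n' \
  'kube-proxy-7fsrg 1/1 Running 0 3s' \
  'kube-proxy-k8vhm 1/1 Running 0 9s' \
  'coredns-5c98db65d4-dr8lf 1/1 Running 0 52m' \
  | grep kube-proxy \
  | awk '{print "kubectl delete pod "$1" -n kube-system"}')
echo "$cmds"
```

Only the two kube-proxy lines survive the grep, so exactly two delete commands are produced; the coredns pod is left alone.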
# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-7fsrg   1/1     Running   0          3s
kube-proxy-k8vhm   1/1     Running   0          9s
# kubectl logs kube-proxy-7fsrg -n kube-system
I0703 04:42:33.308289       1 server_others.go:170] Using ipvs Proxier.
W0703 04:42:33.309074       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0703 04:42:33.309831       1 server.go:534] Version: v1.15.0
I0703 04:42:33.320088       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0703 04:42:33.320365       1 config.go:96] Starting endpoints config controller
I0703 04:42:33.320393       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0703 04:42:33.320455       1 config.go:187] Starting service config controller
I0703 04:42:33.320470       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0703 04:42:33.420899       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0703 04:42:33.420969       1 controller_utils.go:1036] Caches are synced for service config controller
The log line "Using ipvs Proxier" confirms that ipvs mode is enabled.
5. Deploying common Kubernetes components
More and more companies and teams use Helm, the Kubernetes package manager, and here we will also use Helm to install common Kubernetes components.
5.1 Installing Helm
Helm consists of the helm command-line client and the server-side tiller, and is simple to install. Download the helm binary to /usr/local/bin on the master node (node1); here we use version 2.14.1:
# curl -O https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
# tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
# cd linux-amd64/
# cp helm /usr/local/bin/
To install the server-side tiller, kubectl and a kubeconfig file must also be set up on this machine so that kubectl can reach the apiserver; node1 already has kubectl configured.
Because the Kubernetes apiserver has RBAC enabled, we need to create a service account for tiller (named tiller) and grant it an appropriate role; see Role-based Access Control in the Helm documentation for details. For simplicity, we grant it the built-in cluster-admin ClusterRole. Create helm-rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
# kubectl create -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Next, deploy tiller with helm:
# helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
tiller is deployed in the kube-system namespace by default:
# kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s
Note: if the tiller pod stays in the ErrImagePull/ImagePullBackOff state, you need to switch to a Helm repository mirror reachable from inside China:
NAME                             READY   STATUS             RESTARTS   AGE
tiller-deploy-7bf78cdbf7-fkx2z   0/1     ImagePullBackOff   0          79s
Fix 1:
1. Remove the default repository:
# helm repo remove stable
2. Add a mirror repository:
# helm repo add stable https://burdenbear.github.io/kube-charts-mirror/
or
# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
3. List the configured repositories:
# helm repo list
4. Test with a search:
# helm search mysql
Fix 2:
1. Pull the image manually:
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1
2. Find the tiller pod:
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bccdc95cf-tb6pf 1/1 Running 3 5h21m
coredns-bccdc95cf-xpgm8 1/1 Running 3 5h21m
etcd-master 1/1 Running 3 5h20m
kube-apiserver-master 1/1 Running 3 5h21m
kube-controller-manager-master 1/1 Running 3 5h21m
kube-flannel-ds-amd64-b4ksb 1/1 Running 3 5h18m
kube-flannel-ds-amd64-vmv29 1/1 Running 0 127m
kube-proxy-67zn6 1/1 Running 2 37m
kube-proxy-992ns 1/1 Running 0 37m
kube-scheduler-master 1/1 Running 3 5h21m
tiller-deploy-7bf78cdbf7-fkx2z 0/1 ImagePullBackOff 0 33m
3. Use describe to find the image name tiller needs:
# kubectl describe pods tiller-deploy-7bf78cdbf7-fkx2z -n kube-system
..........
Normal   Scheduled  32m                   default-scheduler  Successfully assigned kube-system/tiller-deploy-7bf78cdbf7-fkx2z to node1
Normal   Pulling    30m (x4 over 32m)     kubelet, node1     Pulling image "gcr.io/kubernetes-helm/tiller:v2.14.1"
Warning  Failed     30m (x4 over 31m)     kubelet, node1     Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.14.1": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: Service Unavailable
Warning  Failed     30m (x4 over 31m)     kubelet, node1     Error: ErrImagePull
Warning  Failed     30m (x6 over 31m)     kubelet, node1     Error: ImagePullBackOff
Normal   BackOff    111s (x129 over 31m)  kubelet, node1     Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.14.1"
4. Use docker tag to rename the image:
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 gcr.io/kubernetes-helm/tiller:v2.14.1
5. Remove the now-redundant image:
# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1
6. Delete the failed deployment:
# kubectl delete deployment tiller-deploy -n kube-system
After a short wait, kubectl get pods -n kube-system should show the pod in a normal state.
# helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Note that this needs network access to gcr.io and kubernetes-charts.storage.googleapis.com. If they are unreachable, you can use a tiller image from a private registry: helm init --service-account tiller --tiller-image <your-docker-registry>/tiller:v2.14.1 --skip-refresh
Finally, on node1 point the stable helm chart repository at the Azure mirror:
# helm repo add stable http://mirror.azure.cn/kubernetes/charts
"stable" has been added to your repositories
# helm repo list
NAME    URL
stable  http://mirror.azure.cn/kubernetes/charts
local   http://127.0.0.1:8879/charts
5.2 Deploying Nginx Ingress with Helm
To expose services in the cluster to the outside, we need an Ingress. Next we deploy Nginx Ingress onto Kubernetes with Helm. The Nginx Ingress Controller is deployed on the cluster's edge nodes (see the earlier write-up on highly available Kubernetes Ingress edge nodes in bare-metal environments); the Ingress Controller uses hostNetwork.
We use master (192.168.1.21) as the edge node and label it:
# kubectl label node master node-role.kubernetes.io/edge=
node/master labeled
# kubectl get node
NAME     STATUS   ROLES         AGE    VERSION
master   Ready    edge,master   138m   v1.15.0
node1    Ready    <none>        82m    v1.15.0
To remove a label from a node:
# kubectl label node node1 node-role.kubernetes.io/edge-
Create ingress-nginx.yaml, the values file for the stable/nginx-ingress chart:
controller:
  replicaCount: 1
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx-ingress
          - key: component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
The nginx ingress controller's replicaCount is 1, so it will be scheduled onto the master edge node. We do not set externalIPs on the nginx ingress controller service; instead, hostNetwork: true makes the controller use the host network.
# helm repo update
# helm install stable/nginx-ingress \
    -n nginx-ingress \
    --namespace ingress-nginx \
    -f ingress-nginx.yaml
# kubectl get pod -n ingress-nginx -o wide
NAME                                            READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
nginx-ingress-controller-cc9b6d55b-pr8vr        1/1     Running   0          10m   192.168.1.21   node1   <none>           <none>
nginx-ingress-default-backend-cc888fd56-bf4h2   1/1     Running   0          10m   10.244.0.14    node1   <none>           <none>
If the nginx-ingress containers are stuck in ContainerCreating/ImagePullBackOff, pull the images manually:
# docker pull registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.25.0
# docker tag registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.25.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
# docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
# docker tag registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5 k8s.gcr.io/defaultbackend-amd64:1.5
# docker rmi registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
# docker rmi registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.25.0
If visiting http://192.168.1.21 returns "default backend", the deployment is complete.
5.3 Deploying the dashboard with Helm
Create kubernetes-dashboard.yaml:
image:
  repository: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
  - k8s.frognew.com    # the domain you will use to reach the dashboard
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
  - secretName: frognew-com-tls-secret
    hosts:
    - k8s.frognew.com
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: PreferNoSchedule
rbac:
  clusterAdminRole: true
Install it:
helm install stable/kubernetes-dashboard \
    -n kubernetes-dashboard \
    --namespace kube-system \
    -f kubernetes-dashboard.yaml
5.4 Generating a user token
a. Create admin-sa.yaml:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
b. Create the resources:
# kubectl create -f admin-sa.yaml
# kubectl get secret -n kube-system
NAME                                  TYPE                                  DATA   AGE
admin-token-2tbzp                     kubernetes.io/service-account-token   3      9m5s
attachdetach-controller-token-bmz2c   kubernetes.io/service-account-token   3      27h
bootstrap-signer-token-6jctj          kubernetes.io/service-account-token   3      27h
certificate-controller-token-l4l9c    kubernetes.io/service-account-token   3      27h
c. Extract the admin token:
# kubectl get secret admin-token-2tbzp -o jsonpath={.data.token} -n kube-system | base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi0ydGJ6cCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImRlOGU5N2EzLWY1YmItNGRlNC1hN2Q1LTY5YzEwYTIyZTE3OSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.NgZvr3XTtrW1XPCJHYRFFdPD1IfsoRRTYJHwAST2gfhY1hva_yIoh1ATSpDO551rNio0ulb7YllSiZMaQViBeFTiAhuuIlKHKyELOoB_eY7jFTCVstdr4vQzH5e2GRQgljEeqbF9Lewr0n_eqIS6pgVQSRT8at-Yk6EXLM0XhYf4qbAvMuztuRTSp8JKmal65gwTxTJU7LpjJM7UbZ8UelVOjNZK8BFCezGv0ccqXywLu5-aAj2NvSHVThg6jybj37R0hszqRw2fkGZtIcEOEtgmij2vHa3oNb3f38gd1eE6WqZpJpVOPLlX6QNSxiV0jaaj9AqodFCdAg48E75Bvg
Note: admin-token-2tbzp here is the name of the admin-token secret shown by kubectl get secret -n kube-system.
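The extracted value is a JWT: three dot-separated base64url segments, and the middle (payload) segment is JSON that can be inspected offline. A sketch using a fabricated token with the same structure (not a real service-account token):

```shell
#!/bin/sh
# Decode the payload segment of a JWT-style token (header.payload.signature).
# The token built below is fabricated purely for illustration.
payload_json='{"iss":"kubernetes/serviceaccount","kubernetes.io/serviceaccount/namespace":"kube-system"}'
b64() { base64 | tr -d '\n' | tr '+/' '-_' | tr -d '='; }   # base64url, no padding
token="$(printf '{"alg":"RS256"}' | b64).$(printf '%s' "$payload_json" | b64).sig"
payload=$(printf '%s' "$token" | cut -d. -f2)
# restore base64 padding before decoding
pad=$(( (4 - ${#payload} % 4) % 4 ))
i=0; while [ $i -lt $pad ]; do payload="$payload="; i=$((i+1)); done
decoded=$(printf '%s' "$payload" | tr '_-' '/+' | base64 -d)
echo "$decoded"
```

Applied to the real admin token above, the decoded payload shows the service account name, namespace, and the secret name the token belongs to.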