The Kubernetes 1.16 release brings notable improvements in stability and usability; in particular, the PV volume-expansion API has graduated to beta, which makes managing stateful Pods backed by persistent storage more convenient and better suited to production. Below is a quick deployment walkthrough for Kubernetes 1.16.2.
## Configuration

- Host list:

Hostname | IP |
---|---|
k8s-master | 192.168.20.70 |
k8s-worker-1 | 192.168.20.71 |
k8s-worker-2 | 192.168.20.72 |
- Component versions:

Component | Version |
---|---|
CentOS 7 | kernel 4.4.178 |
docker | 18.09.5 |
k8s | 1.16.2 |
## System initialization on all nodes

- Disable the firewall and SELinux, upgrade the kernel to 4.4 or later, make all hostnames resolvable, and synchronize time across the cluster.
- Configure kernel parameters (load the br_netfilter module first with `modprobe br_netfilter`, otherwise the bridge-nf-call parameters do not exist):
```shell
# cat /etc/sysctl.d/k8s.conf
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_port_range = 10000 65000
fs.file-max = 2000000
vm.swappiness = 0
```
- Install Docker:

```shell
# ll /tmp/
total 33512
-rw-r--r-- 1 root root 19623520 Apr 18  2019 docker-ce-18.09.5-3.el7.x86_64.rpm
-rw-r--r-- 1 root root 14689524 Apr 18  2019 docker-ce-cli-18.09.5-3.el7.x86_64.rpm

mv docker-ce.repo /etc/yum.repos.d/
yum install docker-ce-*
```
- Configure Docker:

```shell
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
```
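A malformed `daemon.json` prevents the Docker daemon from starting, so it can be worth sanity-checking the file before restarting. A minimal sketch; the embedded string mirrors the config above, but in practice you would read `/etc/docker/daemon.json` itself:

```python
import json

# Same content as written to /etc/docker/daemon.json above; in practice,
# read the real file instead of this embedded copy.
daemon_json = """
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
"""

cfg = json.loads(daemon_json)  # raises ValueError on malformed JSON
print("daemon.json OK, storage driver:", cfg["storage-driver"])
```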
- Switch iptables on every node to 'legacy' mode, or build and install iptables 1.8.0 or later:

```shell
update-alternatives --set iptables /usr/sbin/iptables-legacy
```
## Configure image mirrors

1. Configure a local (Aliyun) Kubernetes yum repository:
```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
- Install the components:

```shell
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
```
- Change the kubelet's default cgroup driver to match Docker's:

```shell
echo "KUBELET_EXTRA_ARGS=--cgroup-driver=systemd" > /etc/sysconfig/kubelet
```
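An alternative to the KUBELET_EXTRA_ARGS override is to set the driver in a kubeadm configuration file passed to `kubeadm init --config`. A sketch of the relevant fragment (the file name is arbitrary):

```yaml
# kubeadm-config.yaml (fragment) -- sets the kubelet cgroup driver to match Docker
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```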
## Install the Kubernetes master

### Initialize the master node

Run the following on the master.

- Run this script to download the images:
```shell
#!/bin/bash
images=(kube-apiserver-amd64:v1.16.2 kube-controller-manager-amd64:v1.16.2 kube-scheduler-amd64:v1.16.2 kube-proxy-amd64:v1.16.2 pause-amd64:3.1 coredns-amd64:1.6.2 etcd:3.3.15-0)
for image in "${images[@]}"; do
    imageName=$(echo "$image" | sed 's/-amd64//g')
    docker pull "mirrorgooglecontainers/$image"
    docker tag "mirrorgooglecontainers/$image" "k8s.gcr.io/$imageName"
    docker rmi "mirrorgooglecontainers/$image"
done
```
- Initialize the master node, using the flannel network CIDR:

```shell
kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=ImagePull
```
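The same initialization can be expressed as a configuration file and run with `kubeadm init --config kubeadm-config.yaml`; a sketch of the equivalent fragment, assuming the flannel default Pod CIDR:

```yaml
# kubeadm-config.yaml (fragment) -- equivalent of --pod-network-cidr=10.244.0.0/16
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.2
networking:
  podSubnet: 10.244.0.0/16
```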
- After it succeeds, all control-plane services on the master are up:

```shell
[root@k8s-master ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 6175/sshd
tcp 0 0 127.0.0.1:21925 0.0.0.0:* LISTEN 6188/containerd
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 8681/kubelet
tcp 0 0 127.0.0.1:19944 0.0.0.0:* LISTEN 8681/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 9253/kube-proxy
tcp 0 0 192.168.20.70:2379 0.0.0.0:* LISTEN 9053/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 9053/etcd
tcp 0 0 192.168.20.70:2380 0.0.0.0:* LISTEN 9053/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 9053/etcd
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 9006/kube-controlle
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 8989/kube-scheduler
tcp6 0 0 :::22 :::* LISTEN 6175/sshd
tcp6 0 0 :::10250 :::* LISTEN 8681/kubelet
tcp6 0 0 :::10251 :::* LISTEN 8989/kube-scheduler
tcp6 0 0 :::6443 :::* LISTEN 9098/kube-apiserver
tcp6 0 0 :::10252 :::* LISTEN 9006/kube-controlle
tcp6 0 0 :::10256 :::* LISTEN 9253/kube-proxy
```
- Configure kubectl:

```shell
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
Before the network plugin is installed, the master node stays in the NotReady state:

```shell
# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   16m   v1.16.2
```
- Install the flannel network plugin:

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
```
The node now shows Ready:

```shell
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   25m   v1.16.2
```
## Add nodes

### Download the required images

Pull the corresponding images on each worker node:
```shell
docker pull registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-coredns:1.6.2
docker tag registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
docker tag registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1
```
### Join the nodes

- Get the token on the master; it is also printed when `kubeadm init` runs:

```shell
[root@k8s-master ~]# kubeadm token list
TOKEN                    TTL  EXPIRES                    USAGES                  DESCRIPTION                                           EXTRA GROUPS
orof8e.2u2qtt10j4p4lnx9  20h  2019-10-25T16:28:28+08:00  authentication,signing  The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token
```
- Get the CA certificate hash:

```shell
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
```

Tip: if the token has expired when you run the join command, create a new one on the master with `kubeadm token create`, as the error message suggests.
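The openssl pipeline above is just a hex SHA-256 digest of the CA's DER-encoded public key. A minimal Python sketch of the same computation; the byte string is a stand-in, not a real key:

```python
import hashlib

# Stand-in for the DER bytes produced by:
#   openssl x509 -pubkey -in ca.crt | openssl rsa -pubin -outform der
der_spki = b"\x30\x82\x01\x22-hypothetical-subject-public-key-info"

# kubeadm expects the hex digest prefixed with "sha256:" on the join line.
digest = hashlib.sha256(der_spki).hexdigest()
print("--discovery-token-ca-cert-hash sha256:" + digest)
```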
- On each worker node to be added, run the join command with the token and hash filled in:

```shell
kubeadm join 192.168.20.70:6443 --token orof8e.2u2qtt10j4p4lnx9 --discovery-token-ca-cert-hash sha256:c752a1110d36d9bda79672d0d31425dfe113b9691bf3d2dc7123ac36b271e858
```

Running this same command on multiple hosts adds multiple worker nodes.
- Once joined, check the node status on the master:

```shell
[root@k8s-master ~]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master     Ready    master   3h45m   v1.16.2
k8s-worker-1   Ready    <none>   27s     v1.16.2
```
- Add a ROLES label to the worker nodes:

```shell
kubectl label node k8s-worker-1 node-role.kubernetes.io/worker=worker
kubectl label node k8s-worker-2 node-role.kubernetes.io/worker=worker
```
Check the node list:

```shell
[root@k8s-master ~]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master     Ready    master   3h52m   v1.16.2
k8s-worker-1   Ready    worker   7m19s   v1.16.2
k8s-worker-2   Ready    worker   4m48s   v1.16.2
```
## Customize the cluster

### Install metrics-server

1. Download the metrics-server YAML files from: https://github.com/AndySkyL/k8s/tree/master/k8s_deploy/k8s-1.16/kubeadm-deploy/metric-server
- Apply the YAML from that directory:

```shell
kubectl create -f ./
```
- Once the pod is running, check that the APIService and `kubectl top` work:

```shell
# kubectl get apiservices | grep 'metrics'
v1beta1.metrics.k8s.io   kube-system/metrics-server   True   20m
# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master     250m         12%    1230Mi          48%
k8s-worker-1   80m          4%     575Mi           14%
k8s-worker-2   77m          3%     524Mi           13%
```
### Install the Dashboard

==Kubernetes 1.16 requires Dashboard v2, which uses metrics-server by default; v1 fails with errors.==

- The CA certificate the dashboard expects is not created in the cluster by default: the secrets referenced by the dashboard manifests contain no certificate or key, so logins fail and browsers reject the requests. Create a self-signed certificate first:
```shell
# Create a directory for the certificate
mkdir key && cd key
# Create the dashboard namespace
kubectl create namespace kubernetes-dashboard
# Generate the key
openssl genrsa -out dashboard.key 2048
# Generate the certificate signing request
openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=k8s/CN=dashboard"
# Generate the self-signed certificate, valid for 10 years
# (note: -days must go on this step; on `openssl req` without -x509 it is ignored)
openssl x509 -req -days 3650 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
```
- This creates three files:

```shell
[root@k8s-master key]# ll
total 12
-rw-r--r-- 1 root root  993 Oct 25 14:23 dashboard.crt
-rw-r--r-- 1 root root  899 Oct 25 14:23 dashboard.csr
-rw-r--r-- 1 root root 1679 Oct 25 14:21 dashboard.key
```
- Create the secret from the self-signed certificate:

```shell
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key \
    --from-file=dashboard.crt -n kubernetes-dashboard
```
- Deploy the dashboard; first change the Service to expose a NodePort:

```shell
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml
```

Edit the kubernetes-dashboard Service in the file as follows (the Service `type` must also be set to NodePort, otherwise the `nodePort` field is rejected):

```yaml
...
spec:
  type: NodePort
  ports:
    - nodePort: 30727
      port: 443
      protocol: TCP
      targetPort: 8443
...
```

Then deploy the dashboard:

```shell
kubectl create -f recommended.yaml
```
- Create an administrator account bound to the cluster-admin role (note the `---` separator: this file contains two documents):

```yaml
# cat admin-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```
- Log in directly with a node IP plus the NodePort.
- Get the admin token:

```shell
kubectl describe secret -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | grep admin | awk '{print $1}')
```
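`kubectl describe secret` prints the token already decoded; if you instead read it with `-o jsonpath='{.data.token}'`, the value comes back base64-encoded and must be decoded before use. A sketch with a placeholder value:

```python
import base64

# Placeholder for the value returned by (hypothetical secret name):
#   kubectl -n kubernetes-dashboard get secret <admin-secret> -o jsonpath='{.data.token}'
encoded = base64.b64encode(b"hypothetical-bearer-token").decode()

# Decode it before pasting into the dashboard login form.
token = base64.b64decode(encoded).decode()
print(token)
```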
### Enable ipvs mode

Without ipvs installed, the cluster falls back to iptables mode; the configuration below switches it over.

- For a comparison of IPVS and legacy iptables, see: https://kubernetes.io/zh/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/
- For the different ways to configure ipvs, see: https://github.com/kubernetes/kubernetes/tree/master/pkg/proxy/ipvs
- Install the ipvs packages:

```shell
yum install -y ipvsadm ipset conntrack
```
- Change kube-proxy's mode to ipvs in its default ConfigMap:

```shell
kubectl get cm kube-proxy -n kube-system -o yaml | sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -
```
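After the edit, the relevant part of the `config.conf` key inside the kube-proxy ConfigMap should look roughly like this fragment (the empty `scheduler` field falls back to the default round-robin algorithm):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: ""
```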
- Delete the old kube-proxy pods; replacements are created automatically in ipvs mode:

```shell
for i in $(kubectl get po -n kube-system | awk '/kube-proxy/ {print $1}'); do
    kubectl delete po "$i" -n kube-system
done
```
## Troubleshooting

### metrics-server cannot reach the host nodes

If the APIService looks fine after metrics-server starts but the pod logs show errors like the following (the node hostnames cannot be resolved):

```
unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:k8s-worker-1: unable to fetch metrics from Kubelet k8s-worker-1 (k8s-worker-1): Get https://k8s-worker-1:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-worker-1 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:k8s-worker-2: unable to fetch metrics from Kubelet k8s-worker-2 (k8s-worker-2): Get https://k8s-worker-2:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-worker-2 on 10.96.0.10:53: no such host]
```
Fix:

Make sure the metrics-server Deployment includes the `--kubelet-preferred-address-types=InternalIP` flag, so nodes are scraped by IP instead of hostname:

```yaml
image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
command:
  - /metrics-server
  - --metric-resolution=30s
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
```