Overview
A quick guide to building a production-grade, highly available Kubernetes cluster.
High availability is implemented here with HAProxy + Keepalived.
Keepalived: provides the virtual IP (VIP) that serves external traffic. It runs in a one-master/multi-backup mode, so at least two LB nodes are required. At runtime it periodically checks the local HAProxy process; if HAProxy is found to be unhealthy, a new master election is triggered and the VIP floats to the newly elected master node, keeping the VIP highly available.
HAProxy: listens behind the Keepalived VIP and load-balances requests to the kube-apiservers. Nodes running Keepalived and HAProxy are called LB (load balancer) nodes.
Prerequisites
- CentOS 7+
- kubeadm
- Docker 1.13+
- Kubernetes 1.15.4
- HAProxy
- Keepalived
Node layout
Hostname | IP | OS | Role | Disk | CPU/MEM |
---|---|---|---|---|---|
master1.k8s.com | 192.168.8.181 | CentOS 7.6 | master | 40G | 4 cores/4G |
master2.k8s.com | 192.168.8.182 | CentOS 7.6 | master | 40G | 4 cores/4G |
master3.k8s.com | 192.168.8.183 | CentOS 7.6 | master | 40G | 4 cores/4G |
node1.k8s.com | 192.168.8.191 | CentOS 7.6 | node | 40G | 4 cores/4G |
node2.k8s.com | 192.168.8.192 | CentOS 7.6 | node | 40G | 4 cores/4G |
VIP | 192.168.8.10 | – | – | – | – |
Preparation (all nodes)
1. Install Docker
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast
sudo yum -y install docker-ce
systemctl enable docker
systemctl start docker
2. Disable swap, the firewall, and SELinux
2.1 Disable swap (temporary, until reboot)
swapoff -a
2.2 Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
2.3 Disable SELinux (temporary, until reboot)
setenforce 0
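The commands above only last until the next reboot. To make the changes permanent, something along these lines works on CentOS 7 (a sketch; review `/etc/fstab` before editing it):

```shell
# Disable swap permanently by commenting out any swap entries in /etc/fstab
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' /etc/fstab

# Disable SELinux permanently (takes effect after the next reboot)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```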
3. Add the Kubernetes yum repository
wget https://raw.githubusercontent.com/xiliangMa/xiliangMa.github.io/master/kubernetes/k8s.repo -P /etc/yum.repos.d/
4. Add the kernel parameters (k8s.conf)
wget https://raw.githubusercontent.com/xiliangMa/xiliangMa.github.io/master/kubernetes/k8s.conf -P /etc/sysctl.d/
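The contents of the downloaded k8s.conf are not shown above. If the download is unavailable, a typical set of kernel parameters for Kubernetes (an assumption, not necessarily identical to the hosted file) can be created by hand and applied immediately:

```shell
# Typical bridge/forwarding settings required by kube-proxy and CNI plugins
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply all sysctl configuration files without rebooting
sysctl --system
```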
5. Install kubeadm, kubelet, and kubectl
yum install kubelet-1.15.4 kubeadm-1.15.4 kubectl-1.15.4 -y
systemctl enable kubelet
systemctl start kubelet
6. Pull the images
6.1 Run the pull script
wget https://raw.githubusercontent.com/xiliangMa/xiliangMa.github.io/master/kubernetes/install/1.15.4/pull.sh
chmod +x pull.sh
./pull.sh
6.2 Check the images
docker images | grep k8s.gcr.io
The result should look like this:
k8s.gcr.io/kube-proxy v1.15.4 171a8a0f4d0b 3 weeks ago 82.4 MB
k8s.gcr.io/kube-apiserver v1.15.4 8d42b9dd0d2f 3 weeks ago 207 MB
k8s.gcr.io/kube-controller-manager v1.15.4 6bd2df93e08c 3 weeks ago 159 MB
k8s.gcr.io/kube-scheduler v1.15.4 40eada7a21a8 3 weeks ago 81.1 MB
k8s.gcr.io/coredns 1.3.1 eb516548c180 9 months ago 40.3 MB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 10 months ago 258 MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 22 months ago 742 kB
IPVS setup (all nodes)
- Install ipset and ipvsadm
yum install -y ipset ipvsadm
- Load the IPVS kernel modules
Note that modules loaded this way do not persist across a reboot.
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
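To load these modules automatically at boot, the usual approach on CentOS 7 is a script under /etc/sysconfig/modules/ (a sketch using the same module list as above):

```shell
# Persist the IPVS module loading across reboots
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# The file must be executable for it to run at boot
chmod 755 /etc/sysconfig/modules/ipvs.modules
```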
- Verify the modules are loaded
[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145497 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4 15053 3
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
nf_conntrack 133095 6 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
Configure HAProxy (all master nodes)
- Prepare haproxy-start.sh
#!/bin/bash
# ----------------- change to your master node addresses
MasterIP1=192.168.8.181
MasterIP2=192.168.8.182
MasterIP3=192.168.8.183
# ----------------- kube-apiserver default port 6443; no need to change
MasterPort=6443
HaproxyPort=6444
# start the container
docker run -d --restart=always --name=HAProxy -p $HaproxyPort:$HaproxyPort \
-e MasterIP1=$MasterIP1 \
-e MasterIP2=$MasterIP2 \
-e MasterIP3=$MasterIP3 \
-e MasterPort=$MasterPort \
wise2c/haproxy-k8s
- Make the script executable
chmod +x haproxy-start.sh
- Start HAProxy
Run the script:
./haproxy-start.sh
Verify:
[root@master1 k8s]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4024d285442c wise2c/haproxy-k8s "/docker-entrypoin..." About a minute ago Up About a minute 0.0.0.0:6444->6444/tcp HAProxy
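Beyond `docker ps`, you can confirm that HAProxy is actually accepting connections on port 6444 (the TCP check below should succeed even before the apiservers are up, since HAProxy itself binds the port):

```shell
# Check that something is listening on the HAProxy port
ss -lnt | grep 6444

# Inspect the container logs for backend configuration and state
docker logs HAProxy
```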
Configure Keepalived (all master nodes)
- Prepare the keepalived-start.sh script
#!/bin/bash
# ----------------- change to your virtual IP address
VIRTUAL_IP=192.168.8.10
# ----------------- network interface name
INTERFACE=ens33
# ----------------- netmask bits
NETMASK_BIT=24
# ----------------- HAProxy port, which forwards internally to kube-apiserver port 6443
CHECK_PORT=6444
# ----------------- router ID
RID=10
# ----------------- virtual router ID
VRID=160
# ----------------- IPv4 multicast address, default 224.0.0.18
MCAST_GROUP=224.0.0.18
docker run -itd --restart=always --name=Keepalived \
--net=host --cap-add=NET_ADMIN \
-e VIRTUAL_IP=$VIRTUAL_IP \
-e INTERFACE=$INTERFACE \
-e CHECK_PORT=$CHECK_PORT \
-e RID=$RID \
-e VRID=$VRID \
-e NETMASK_BIT=$NETMASK_BIT \
-e MCAST_GROUP=$MCAST_GROUP \
wise2c/keepalived-k8s
- Make the script executable
chmod +x keepalived-start.sh
- Start Keepalived
Run the script:
./keepalived-start.sh
Verify:
[root@master1 k8s]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bd211be2184f wise2c/keepalived-k8s "/usr/bin/keepaliv..." 2 seconds ago Up 2 seconds Keepalived
946700915a01 wise2c/haproxy-k8s "/docker-entrypoin..." 6 seconds ago Up 6 seconds 0.0.0.0:6444->6444/tcp HAProxy
- Check that the VIP is bound
Check whichever interface you specified:
[root@master1 k8s]# ip a| grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.8.181/24 brd 192.168.8.255 scope global noprefixroute ens33
inet 192.168.8.10/24 scope global secondary ens33
Initialize the first master node (run on Master1)
- Export the default configuration with kubeadm
kubeadm config print init-defaults --component-configs KubeProxyConfiguration > kubeadm.yml
- Edit the configuration
Change the fields marked by the comments below.
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # -------- change to the current master node's IP
  advertiseAddress: 192.168.8.181
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  # the current master node's hostname
  name: master1.k8s.com
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
# -------- change to the VIP
controlPlaneEndpoint: "192.168.8.10:6444"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
# the image repository can stay as-is; the required images were already pulled earlier
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.4
networking:
  dnsDomain: cluster.local
  # -------- flannel is used for networking; this is its default subnet
  podSubnet: 10.244.0.0/16
scheduler: {}
---
# -------- enable ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
- Initialize the cluster
kubeadm init --config=kubeadm.yml --upload-certs
The output should end like this:
... (output omitted) ...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 192.168.8.10:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:08cfe73c0d333ccdc9b94f8cf2795809b5308b1805413f332929cb0854d94c4e \
--experimental-control-plane --certificate-key a5dd02d91627bc2218b2cc3ffbee3406571e01543dd96d9c7f1202a96f41e052
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --experimental-upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.8.10:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:08cfe73c0d333ccdc9b94f8cf2795809b5308b1805413f332929cb0854d94c4e
If initialization fails, you can reset and start over:
kubeadm reset
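As the init output above notes, the join token expires after 24 hours (ttl: 24h0m0s in the configuration) and the uploaded certificates are deleted after two hours. If you join nodes later, regenerate both on Master1 (the upload-certs flag name varies by kubeadm version; newer releases use --upload-certs):

```shell
# Print a fresh worker join command, including a new token and CA cert hash
kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print the new certificate key
kubeadm init phase upload-certs --experimental-upload-certs
```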
- Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Check the node
[root@master1 k8s]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1.k8s.com NotReady master 9m25s v1.15.4
- Install the flannel network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
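Flannel runs as a DaemonSet; the node moves from NotReady to Ready once its pod is up. A quick way to check progress:

```shell
# Watch the system pods come up (flannel, coredns, kube-proxy, etc.)
kubectl get pods -n kube-system -o wide

# The node becomes Ready once the network plugin is running
kubectl get nodes
```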
Join the remaining master nodes (Master2 and Master3)
kubeadm join 192.168.8.10:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:893c45bed3210b8b787084cb6467feb93235cb6765cbd186124c4c4e73c9b3bc \
--experimental-control-plane --certificate-key b11dd02f03926a70ba7632607a386adfc89948436a0d67927d447c2c12824c4b
After joining, configure kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join the worker nodes (run on every worker node)
kubeadm join 192.168.8.10:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:893c45bed3210b8b787084cb6467feb93235cb6765cbd186124c4c4e73c9b3bc
Test the cluster
List the nodes:
[root@master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1.k8s.com Ready master 45m v1.15.4
master2.k8s.com Ready master 55m v1.15.4
master3.k8s.com Ready master 60m v1.15.4
node1.k8s.com Ready <none> 70m v1.15.4
node2.k8s.com Ready <none> 80m v1.15.4
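As a functional smoke test, you can deploy a workload and reach it through a node (a minimal sketch; the nginx image and the `nginx-test` name are arbitrary choices, not part of the original setup):

```shell
# Deploy a test nginx and expose it as a NodePort service
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort

# Look up the assigned NodePort and curl it via any node's IP
NODE_PORT=$(kubectl get svc nginx-test -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.8.191:$NODE_PORT
```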
Test cluster high availability:
Rebooting a master node or restarting its HAProxy container causes the VIP to fail over to another master, which is what provides the high availability.
Restart Docker on Master1 (this also restarts the HAProxy and Keepalived containers):
service docker restart
Then check whether Master2 or Master3 now has the virtual IP on its interface.
Here master2 has become the node holding the VIP:
[root@master2 ~]# ip a | grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.8.182/24 brd 192.168.8.255 scope global noprefixroute ens33
inet 192.168.8.10/24 scope global secondary ens33