Installing a Highly Available Kubernetes Cluster on CentOS 7.9 (Three Masters, Three Workers)

  The server plan is shown in the table below (each node has 4 CPU cores and 4 GB of RAM):

  Hostname    IP               Role
  master1     192.168.17.3     control plane (master)
  master2     192.168.17.4     control plane (master)
  master3     192.168.17.5     control plane (master)
  node1       192.168.17.11    worker
  node2       192.168.17.12    worker
  node3       192.168.17.13    worker
  lb          192.168.17.200   VIP managed by Keepalived

  After preparing the servers according to the table above, upgrade the OS kernel on every node from 3.10 to 5.4+ (needed for HAProxy and Keepalived), as follows:

#Import the GPG key of the ELRepo yum repository used for the kernel upgrade
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

#Enable the ELRepo repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

#List the kernel versions currently available for upgrade
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

#Install the long-term support (LTS) kernel (5.4.231-1.el7.elrepo in this example)
yum --enablerepo=elrepo-kernel install kernel-lt -y

#Set the newly installed kernel as the default GRUB boot entry (back up the file first, then edit it)
cp /etc/default/grub /etc/default/grub.bak
vi /etc/default/grub

  The original contents of /etc/default/grub are as follows:

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos_jumpserver/root rd.lvm.lv=centos_jumpserver/swap rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

Change GRUB_DEFAULT=saved to GRUB_DEFAULT=0 (the newly installed kernel is the first menu entry), save and exit, then run the following commands:

#Regenerate the boot loader configuration; this command reads /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
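
  Optionally, before rebooting you can confirm which kernel GRUB will boot by default; grubby ships with CentOS 7 and reads the same boot configuration (the path below assumes the 5.4.231 LTS kernel installed above):

#Check the default boot kernel; it should print something like /boot/vmlinuz-5.4.231-1.el7.elrepo.x86_64
grubby --default-kernel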

#Reboot the system
reboot

  After the reboot, check the current OS release and kernel version:

[root@master1 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[root@master1 ~]# uname -rs
Linux 5.4.231-1.el7.elrepo.x86_64

  Update the CentOS-Base.repo yum repository to point at the Aliyun mirror, which speeds up downloading some of the components later:

#Install the wget tool first
yum install -y wget

#Back up CentOS-Base.repo
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak

#Recreate CentOS-Base.repo using the Aliyun mirror address
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

#Run yum makecache to rebuild the yum cache
yum makecache

  Disable the firewall on all servers (it is turned off here because the exact set of ports needed by some components, such as the Dashboard, is not entirely clear; for production it is better to follow the Kubernetes and component documentation, keep the firewall enabled, and open only the required ports):

#Stop the firewall (takes effect immediately)
systemctl stop firewalld

#Disable the firewall permanently (so it does not start again after a reboot)
systemctl disable firewalld

  Configure /etc/hosts on every server to map hostnames to IP addresses:

cat >> /etc/hosts << EOF
192.168.17.3 master1
192.168.17.4 master2
192.168.17.5 master3
192.168.17.11 node1
192.168.17.12 node2
192.168.17.13 node3
192.168.17.200 lb    # VIP (virtual IP) managed by Keepalived later; for a non-HA cluster this can simply be master1's IP
EOF
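
  To confirm the mappings, you can optionally resolve each hostname once (getent reads /etc/hosts):

#Each hostname should resolve to the IP configured above
for h in master1 master2 master3 node1 node2 node3 lb; do getent hosts $h; done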

  Set up time synchronization on all servers (Kubernetes needs a consistent clock reference when evaluating certain cluster states, such as node liveness):

#Install the time synchronization tool
yum install -y ntpdate

#Sync the current time on each server
ntpdate time.windows.com
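
  ntpdate performs a one-off sync only. An optional addition (not part of the original steps) is a cron entry that re-syncs periodically against the same time server:

#Optionally re-sync every 30 minutes via cron (appends to the existing root crontab)
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1") | crontab -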

  Disable SELinux (Kubernetes support for running with SELinux enforcing is still not very mature):

#Disable SELinux temporarily (until the next reboot)
setenforce 0

#Make the change persistent across reboots (set SELinux to permissive mode in its config file)
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

  Turn off swap (swap can defeat Kubernetes memory QoS policies and interfere with Pod eviction under memory pressure):

#Turn off swap immediately
swapoff -a

#Turn off swap permanently (comment out the swap entry in /etc/fstab)
sed -ri 's/.*swap.*/#&/' /etc/fstab
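
  A quick way to confirm that swap is fully off:

#free should report 0 swap and the swap summary should be empty
free -m
swapon -s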

  On every node, configure bridged IPv4 traffic to be passed to the iptables chains (kube-proxy, the component that implements Kubernetes Services, uses iptables to forward traffic to the target Pods):

#Create the /etc/sysctl.d/k8s.conf configuration file
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

#Create the /etc/modules-load.d/k8s.conf configuration file
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
overlay
EOF

#Apply the sysctl settings
sysctl --system

#Load the br_netfilter module (it makes iptables rules apply to traffic crossing Linux bridges, so bridged traffic is forwarded to the iptables chains)
#Without br_netfilter, Pod-to-Pod traffic across different nodes still works, but Pods on the same node cannot reach each other through a Service
modprobe br_netfilter

#Load the overlay module (the overlay filesystem used by the container runtime)
modprobe overlay

#Verify that both modules loaded successfully
lsmod | grep -e br_netfilter -e overlay

  Kubernetes Services support two proxy modes, iptables and IPVS. IPVS offers better performance, but the IPVS kernel modules have to be loaded manually before it can be used:

#Install ipset and ipvsadm
yum install -y ipset ipvsadm

#Write the modules to load into a script file (on newer kernels nf_conntrack_ipv4 has been replaced by nf_conntrack)
cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

#Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules

#Run the script
/bin/bash /etc/sysconfig/modules/ipvs.modules

#Verify that the modules from the script loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack

  On master1, generate an RSA key pair and copy the public key to the other nodes (this makes it easy to distribute identical configuration files later and to manage the cluster from master1 via passwordless SSH):

#Generate the key pair with the RSA algorithm; just press Enter at every prompt
ssh-keygen -t rsa

#Copy the public key to the other nodes (answer yes at each (yes/no) prompt, then enter the root password of the corresponding server)
for i in master1 master2 master3 node1 node2 node3;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node3's password:


Number of key(s) added: 1


Now try logging into the machine, with: "ssh 'node3'"
and check to make sure that only the key(s) you wanted were added.
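
  A short loop confirms that passwordless login now works from master1 to every node:

#Each command should print the remote hostname without prompting for a password
for i in master1 master2 master3 node1 node2 node3; do ssh $i hostname; done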

 

  Configure resource limits on all nodes:

#Temporarily raise the limits on all nodes (current shell only)
ulimit -SHn 65536

#Make the change permanent on master1 first
vi /etc/security/limits.conf
#Append the following lines at the end
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
* soft memlock unlimited
* hard memlock unlimited

  Then copy /etc/security/limits.conf from master1 to the other server nodes:

scp /etc/security/limits.conf root@master2:/etc/security/limits.conf

scp /etc/security/limits.conf root@master3:/etc/security/limits.conf

scp /etc/security/limits.conf root@node3:/etc/security/limits.conf

scp /etc/security/limits.conf root@node2:/etc/security/limits.conf

scp /etc/security/limits.conf root@node1:/etc/security/limits.conf
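
  The five scp commands above can equally be written as one loop, which is also handy for the other files distributed later in this walkthrough:

#Equivalent loop form of the copy commands above
for h in master2 master3 node1 node2 node3; do scp /etc/security/limits.conf root@$h:/etc/security/limits.conf; done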

 

  Install Docker on all nodes (since Kubernetes v1.24, CRI-compliant runtimes such as containerd or CRI-O are recommended; dockershim, which allowed Docker to be used as the container runtime, was removed in that release, and keeping Docker requires the cri-dockerd adapter, whose performance is not as good as containerd or CRI-O):

#Remove any old Docker-related packages
yum remove -y docker*
yum remove -y containerd*

#Download the yum repository configuration for docker-ce
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

#Install a specific Docker version (it must satisfy the Docker version requirements of the target Kubernetes version)
yum install -y docker-ce-20.10.2

#Configure Docker (a domestic registry mirror (USTC), the systemd cgroup driver, log rotation limits)
mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],    
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],    
  "log-driver":"json-file",
  "log-opts": {"max-size":"500m", "max-file":"3"}
}
EOF

#Start Docker now and enable it at boot
systemctl enable docker --now
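
  Once Docker is up, it is worth confirming that the systemd cgroup driver and the registry mirror from daemon.json took effect:

#Should print "Cgroup Driver: systemd" and list the mirror configured above
docker info | grep -i "cgroup driver"
docker info | grep -i -A1 "registry mirrors"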

  Configure a domestic mirror yum repository for the Kubernetes components (the upstream repository is too slow from China; here we use the Aliyun mirror):

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
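
  With the repository in place, you can list the kubeadm versions it provides before pinning a specific one in the next step:

#List the available kubeadm versions in the Aliyun Kubernetes repository
yum list --showduplicates kubeadm --disableexcludes=kubernetes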

  On master1, master2, and master3, install the kubeadm, kubectl, and kubelet tools:

yum install -y kubeadm-1.20.2 kubectl-1.20.2 kubelet-1.20.2 --disableexcludes=kubernetes

  On node1, node2, and node3, install the kubeadm and kubelet tools:

yum install -y kubeadm-1.20.2  kubelet-1.20.2 --disableexcludes=kubernetes

  On all nodes, align kubelet's cgroup driver with the one configured for Docker above, i.e. set both to systemd:

#First edit the /etc/sysconfig/kubelet file on master1
vi /etc/sysconfig/kubelet

#Use the systemd cgroup driver
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
#Use the IPVS proxy mode for kube-proxy
KUBE_PROXY_MODE="ipvs"

  Then copy /etc/sysconfig/kubelet from master1 to the other nodes:

scp /etc/sysconfig/kubelet root@master2:/etc/sysconfig/kubelet

scp /etc/sysconfig/kubelet root@master3:/etc/sysconfig/kubelet
   
scp /etc/sysconfig/kubelet root@node3:/etc/sysconfig/kubelet
    
scp /etc/sysconfig/kubelet root@node2:/etc/sysconfig/kubelet
  
scp /etc/sysconfig/kubelet root@node1:/etc/sysconfig/kubelet

  Start kubelet on all nodes now and set it to start at boot:

#Start kubelet now and enable it at boot
systemctl enable kubelet --now

#Check the kubelet status on each node (kubelet keeps restarting at this point because the network plugin is not installed yet, which is normal)
systemctl status kubelet

  On master1, master2, and master3, install HAProxy and Keepalived, the components that provide high availability (only needed for an HA cluster):

yum install -y keepalived haproxy

  On master1, edit the HAProxy configuration file /etc/haproxy/haproxy.cfg; once it is ready it will be distributed to master2 and master3:

#Back up the HAProxy configuration file first
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
#Edit the HAProxy configuration file
vi /etc/haproxy/haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s
 
defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s
 
frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor
 
listen stats
  bind    *:8006
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats
  stats   refresh   30s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin
 
frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master
 
backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  # Adjust the server entries below to match your environment
  server master1	192.168.17.3:6443  check
  server master2	192.168.17.4:6443  check
  server master3	192.168.17.5:6443  check
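
  Before distributing the file, HAProxy can check the configuration syntax locally:

#Validate the configuration ("Configuration file is valid" on success)
haproxy -c -f /etc/haproxy/haproxy.cfg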

  Then copy /etc/haproxy/haproxy.cfg from master1 to master2 and master3:

scp /etc/haproxy/haproxy.cfg root@master2:/etc/haproxy/haproxy.cfg

scp /etc/haproxy/haproxy.cfg root@master3:/etc/haproxy/haproxy.cfg

  On master1, master2, and master3, edit the Keepalived configuration file (pay attention to the values that differ per node, noted in the comments; a sketch of master2's settings follows the configuration below):

#Back up the Keepalived configuration file first
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
#Edit the Keepalived configuration file
vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    ## String identifying this node, usually the hostname (differs per node)
    router_id master1
    script_user root
    enable_script_security    
}
## Health-check script
## Keepalived runs this script periodically and adjusts the vrrp_instance priority based on the result: if the script exits 0 and weight is greater than 0, the priority is increased accordingly; if the script exits non-zero and weight is less than 0, the priority is decreased accordingly; otherwise the priority stays at the value configured via priority.
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    # Check every 2 seconds
    interval 2
    # If the check script fails, lower the priority by 5
    weight -5
    fall 3  
    rise 2
}
## Define the virtual router; VR_1 is a user-chosen identifier for this instance
vrrp_instance VR_1 {
    ## The primary node is MASTER, the backup nodes are BACKUP (differs per node)
    state MASTER
    ## Network interface (NIC) to bind the virtual IP to; use the interface that carries this host's IP address
    interface ens32
    # This host's own IP address (differs per node)
    mcast_src_ip 192.168.17.3
    # Virtual router ID (must be the same on all nodes of this instance)
    virtual_router_id 100
    ## Node priority, range 0-254; the MASTER must be higher than the BACKUPs (differs per node)
    priority 100
    ## Set nopreempt on the higher-priority node so it does not reclaim the VIP after recovering from a failure
    nopreempt
    ## Advertisement interval; must be identical on all nodes (default 1s)
    advert_int 2
    ## Authentication settings; must be identical on all nodes
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    ## Virtual IP pool; must be identical on all nodes
    virtual_ipaddress {
        ## The VIP; multiple addresses may be defined
        192.168.17.200
    }
    track_script {
       chk_apiserver
    }
}
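
  As a sketch of the per-node differences called out above (assuming the node IPs from the /etc/hosts table and the same ens32 interface), only router_id, state, mcast_src_ip, and priority change on the other masters. A plausible set of values for master2 would be (master3 is analogous, with its own IP and an even lower priority, e.g. 90):

    router_id master2            ## in global_defs
    state BACKUP                 ## in vrrp_instance VR_1
    mcast_src_ip 192.168.17.4    ## master2's own IP
    priority 95                  ## lower than the MASTER's 100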

  On master1, create the API server liveness-check script /etc/keepalived/check_apiserver.sh:

vi /etc/keepalived/check_apiserver.sh
#!/bin/bash
 
err=0
for k in $(seq 1 5)
do
    check_code=$(pgrep kube-apiserver)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done
 
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

  Then copy /etc/keepalived/check_apiserver.sh from master1 to master2 and master3:

scp /etc/keepalived/check_apiserver.sh root@master2:/etc/keepalived/check_apiserver.sh

scp /etc/keepalived/check_apiserver.sh root@master3:/etc/keepalived/check_apiserver.sh

  On each of the three master nodes, make /etc/keepalived/check_apiserver.sh executable:

chmod +x /etc/keepalived/check_apiserver.sh

  On each master node, start HAProxy and Keepalived now and enable them at boot:

#Start HAProxy now and enable it at boot
systemctl enable haproxy --now

#Start Keepalived now and enable it at boot
systemctl enable keepalived --now
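
  To see which master currently holds the VIP (it should appear on exactly one node, the current MASTER), check the interface Keepalived is bound to (ens32 in this example):

#The VIP 192.168.17.200 should show up on one of the three masters
ip addr show ens32 | grep 192.168.17.200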

  Test that the VIP maintained by Keepalived is reachable:

#From the host machine (Windows in this example)
ping 192.168.17.200

Pinging 192.168.17.200 with 32 bytes of data:
Reply from 192.168.17.200: bytes=32 time=1ms TTL=64
Reply from 192.168.17.200: bytes=32 time<1ms TTL=64
Reply from 192.168.17.200: bytes=32 time=1ms TTL=64
Reply from 192.168.17.200: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.17.200:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 1ms, Average = 0ms

#From each Kubernetes node
ping 192.168.17.200 -c 4
PING 192.168.17.200 (192.168.17.200) 56(84) bytes of data.
64 bytes from 192.168.17.200: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 192.168.17.200: icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from 192.168.17.200: icmp_seq=3 ttl=64 time=0.077 ms
64 bytes from 192.168.17.200: icmp_seq=4 ttl=64 time=0.064 ms

--- 192.168.17.200 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3112ms
rtt min/avg/max/mdev = 0.055/0.063/0.077/0.011 ms

  

  Next, on master1, generate and adjust the configuration file (named kubeadm-config.yaml here) that kubeadm will use to initialize the Kubernetes control plane (the masters):

#Enter the /etc/kubernetes/ directory
cd /etc/kubernetes/

#Export kubeadm's default control-plane initialization configuration
kubeadm config print init-defaults > kubeadm-config.yaml

  The kubeadm-config.yaml file, after editing, looks like this (inline comments mark the values that must be adjusted for your environment as well as the newly added settings):

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.17.3    #IP of the current node (differs per node)
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1    #Hostname of the current node (differs per node)
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.17.200:16443"    #Newly added: the VIP managed by Keepalived and the port exposed by HAProxy
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers    #Registry that Kubernetes pulls its component images from (changed to the Aliyun mirror)
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12    #CIDR block used to assign IPs to Service objects
  podSubnet: 10.244.0.0/16    #Newly added: CIDR block used to assign IPs to Pods; the host, Service, and Pod subnets must not overlap
scheduler: {}

  The added podSubnet setting is the same parameter as the --pod-network-cidr command-line flag; it indicates that a CNI-compliant Pod network plugin will be used (Calico, installed later, is one).
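
  For reference only, a roughly equivalent plain command-line invocation of the configuration file above would look like the sketch below (this walkthrough sticks to kubeadm-config.yaml and does not actually run this form):

kubeadm init \
  --kubernetes-version v1.20.0 \
  --control-plane-endpoint "192.168.17.200:16443" \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr 10.96.0.0/12 \
  --pod-network-cidr 10.244.0.0/16 \
  --upload-certs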

  Then copy /etc/kubernetes/kubeadm-config.yaml from master1 to master2 and master3, and on each of those nodes adjust the per-node values noted above (advertiseAddress and name):

scp /etc/kubernetes/kubeadm-config.yaml root@master2:/etc/kubernetes/kubeadm-config.yaml

scp /etc/kubernetes/kubeadm-config.yaml root@master3:/etc/kubernetes/kubeadm-config.yaml

  On all master nodes, pre-pull the images required to initialize the control plane, using the /etc/kubernetes/kubeadm-config.yaml file created above:

kubeadm config images pull --config /etc/kubernetes/kubeadm-config.yaml

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0

  Then, with master1 as the primary node (matching the Keepalived configuration), initialize the Kubernetes control plane using /etc/kubernetes/kubeadm-config.yaml (this generates the certificates and configuration files under /etc/kubernetes):

kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --upload-certs
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.2. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 192.168.17.3 192.168.17.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [192.168.17.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [192.168.17.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.096734 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
980848047b8e66250ed598f9a135390c2631bb8d5d53a7610ddfe2d1406b1f80
[mark-control-plane] Marking the node master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.17.200:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:30d16ec922ca217b0be609a0b98e4ffb6399bfda058463560bfffd84cb4c975c \
    --control-plane --certificate-key 980848047b8e66250ed598f9a135390c2631bb8d5d53a7610ddfe2d1406b1f80

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.17.200:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:30d16ec922ca217b0be609a0b98e4ffb6399bfda058463560bfffd84cb4c975c 

  If the initialization fails, clean up with the following commands and then run the initialization again:

kubeadm reset -f
ipvsadm --clear
rm -rf ~/.kube
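
  kubeadm reset does not flush iptables rules; as an optional extra cleanup step (not part of the original commands), they can be cleared manually:

#Optionally flush iptables rules left behind by kube-proxy (run with care)
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X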

  The directories and files generated under /etc/kubernetes after initializing the control plane on master1 are shown in the figure below.

  Following the hints printed after the control plane was successfully initialized on master1, we continue with the remaining steps of building the HA cluster.

  As a regular (non-root) user, run the following commands:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

  As the root user, run the following command:

export KUBECONFIG=/etc/kubernetes/admin.conf
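
  At this point kubectl on master1 can reach the API server through the VIP; a quick sanity check (master1 will report NotReady until the network plugin below is installed):

#List the control-plane node and the system Pods
kubectl get nodes
kubectl get pods -n kube-system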

  Use "kubectl apply -f [podnetwork].yaml" to install a Pod network plugin for the cluster, which provides network connectivity between Pods (Pods on different nodes communicate directly via their Pod IPs, so the cluster nodes are effectively transparent to the Pods). Here we choose Calico, a plugin that conforms to the CNI (Container Network Interface) standard (Flannel would also work):

  First, on the Calico release archive site https://docs.tigera.io/archive, find the release that is compatible with the Kubernetes version you installed; the site also contains the installation instructions, as shown in the screenshot below:

  Following those installation instructions, run the following command to install the Calico network plugin:

kubectl create -f https://docs.projectcalico.org/archive/v3.21/manifests/tigera-operator.yaml

  If the manifest cannot be applied directly from the remote URL, open the URL in a browser, copy its contents into a local YAML file, and apply that instead (for example, create /etc/kubernetes/calico.yaml with the contents of tigera-operator.yaml):

kubectl create -f /etc/kubernetes/calico.yaml
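
  After applying the manifest, you can watch the operator come up (the tigera-operator namespace is created by the manifest itself):

#The tigera-operator Pod should reach the Running state
kubectl get pods -n tigera-operator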

 
