Understanding and Using Kubernetes (k8s), Part 3: Deploying Kubernetes from Binaries

Deploying Kubernetes from binaries

Deployment environment

node1:192.168.11.25

node2:192.168.11.26

node3:192.168.11.27

I. Preparation

1. Set the hostnames (run the matching command on each of the three machines)

hostnamectl set-hostname node1

hostnamectl set-hostname node2

hostnamectl set-hostname node3

2. Edit the hosts file and add the hostname-to-IP mappings

vim /etc/hosts

192.168.11.25 node1    node1
192.168.11.26 node2    node2
192.168.11.27 node3    node3

3. Add the k8s and docker accounts

useradd -m k8s

Set a password for the k8s account

sh -c 'echo 123456 | passwd k8s --stdin'

gpasswd -a k8s wheel

Run visudo and make sure the wheel group can use sudo without a password (a line such as %wheel ALL=(ALL) NOPASSWD: ALL), then verify:

grep '%wheel.*NOPASSWD: ALL' /etc/sudoers

4. On every machine, add the docker account, add the k8s account to the docker group, and configure the dockerd parameters

useradd -m docker

gpasswd -a k8s docker

mkdir -p /etc/docker/

vim /etc/docker/daemon.json

{
    "registry-mirrors": ["https://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
    "max-concurrent-downloads": 20
}

5. Passwordless SSH login to the other nodes

These operations are all performed on node1, which is then used to distribute files and run commands on the other nodes remotely

Set up node1 so that it can log in to the k8s and root accounts on all nodes without a password

ssh-keygen -t rsa

ssh-copy-id root@node1
ssh-copy-id root@node2
ssh-copy-id root@node3

ssh-copy-id k8s@node1
ssh-copy-id k8s@node2
ssh-copy-id k8s@node3

6. Add the executable path /opt/k8s/bin to the PATH variable (run on every node)

sh -c "echo 'PATH=/opt/k8s/bin:$PATH:$HOME/bin:$JAVA_HOME/bin' >>/root/.bashrc"

echo 'PATH=/opt/k8s/bin:$PATH:$HOME/bin:$JAVA_HOME/bin' >>~/.bashrc

7. Install dependency packages (run on every node)

yum install -y epel-release conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

8. Disable the firewall

systemctl stop firewalld && systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT

9. Turn off the swap partition

swapoff -a

Keep it off after the next boot

sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

10. Disable SELinux

setenforce 0

Keep it disabled after the next boot

sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

11. Stop dnsmasq

service dnsmasq stop

systemctl disable dnsmasq

12. Set kernel parameters

vim /etc/sysctl.d/kubernetes.conf

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100

sysctl -p /etc/sysctl.d/kubernetes.conf

mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu,cpuacct

13. Load kernel modules

vim /etc/sysconfig/modules/ipvs.modules

    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

14. Set the system time zone

timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0
systemctl restart rsyslog
systemctl restart crond

15. Create the required directories

mkdir -p /opt/k8s/bin
chown -R k8s /opt/k8s
mkdir -p /etc/kubernetes/cert
chown -R k8s /etc/kubernetes
mkdir -p /etc/etcd/cert
chown -R k8s /etc/etcd/cert
mkdir -p /var/lib/etcd && chown -R k8s /var/lib/etcd

16. Check whether the kernel and its modules are suitable for running Docker (Linux only)

curl https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh > check-config.sh

bash ./check-config.sh

Note: raw.githubusercontent.com needs to be added to the hosts file, or you need some other way around network restrictions

echo "199.232.68.133 raw.githubusercontent.com" >> /etc/hosts

17. Configure cluster environment variables

These global variables are used throughout the rest of the deployment; adjust them to your own network environment

vim path.sh

#!/usr/bin/bash
# Encryption key needed to generate the EncryptionConfig
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
# Preferably use currently unused address ranges for the service and Pod networks
# Service network: unreachable before deployment, routable inside the cluster afterwards (guaranteed by kube-proxy and ipvs)
SERVICE_CIDR="10.254.0.0/16"
# Pod network: a /16 range is recommended; unreachable before deployment, routable inside the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.30.0.0/16"
# Service port range (NodePort range)
export NODE_PORT_RANGE="8400-9000"
# Array of cluster machine IPs
export NODE_IPS=(192.168.11.25 192.168.11.26 192.168.11.27)
# Array of hostnames corresponding to the cluster IPs
export NODE_NAMES=(node1 node2 node3)
# VIP of kube-apiserver (the IP advertised by the HA component keepalived)
export MASTER_VIP=192.168.11.24
# kube-apiserver VIP address (the HA component haproxy listens on port 8443)
export KUBE_APISERVER="https://${MASTER_VIP}:8443"
# Name of the network interface that holds the VIP on the HA nodes
export VIP_IF="eno16777736"
# List of etcd cluster client endpoints
export ETCD_ENDPOINTS="https://192.168.11.25:2379,https://192.168.11.26:2379,https://192.168.11.27:2379"
# IPs and ports used for communication between etcd cluster members
export ETCD_NODES="node1=https://192.168.11.25:2380,node2=https://192.168.11.26:2380,node3=https://192.168.11.27:2380"
# etcd prefix for the flanneld network configuration
export FLANNEL_ETCD_PREFIX="/kubernetes/network"
# kubernetes service IP (usually the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"
# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"
# Cluster DNS domain
export CLUSTER_DNS_DOMAIN="cluster.local."
# Add the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:$PATH

source path.sh

Use the loop below to copy the environment-variable script to the other nodes

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp path.sh k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

II. Creating the CA certificate and key

All certificates are created with cfssl, CloudFlare's PKI toolkit

1. Install the cfssl toolset

mkdir -p /opt/k8s/cert && chown -R k8s /opt/k8s && cd /opt/k8s
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
mv cfssl_linux-amd64 /opt/k8s/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo
chmod +x /opt/k8s/bin/*
export PATH=/opt/k8s/bin:$PATH

2. Create the root certificate (CA)

The CA certificate is shared by all nodes in the cluster; only one is needed, and every certificate created later is signed by it

cd /opt/k8s/bin

3. Create the configuration file

vim ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

4. Create the certificate signing request file

vim ca-csr.json

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}

5. Generate the CA certificate and private key

Run the following in the /opt/k8s/bin directory

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Check the generated files: ls ca*

6. Distribute the certificate files

Copy the generated CA certificate, key file, and config file to the /etc/kubernetes/cert directory on every node

for node_ip in ${NODE_IPS[@]}                                                      
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert && chown -R k8s /etc/kubernetes"
    scp ca*.pem ca-config.json k8s@${node_ip}:/etc/kubernetes/cert
  done

If the loop above was saved as a script (for example scp2.sh), run it with: source scp2.sh

III. Deploying the kubectl command-line tool

1. Download and distribute the kubectl binary

wget https://dl.k8s.io/v1.10.4/kubernetes-client-linux-amd64.tar.gz

tar -zxf kubernetes-client-linux-amd64.tar.gz

Distribute the kubectl binary

for node_ip in ${NODE_IPS[@]}                                                      
  do
    echo ">>> ${node_ip}"
    scp kubernetes/client/bin/kubectl k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

2. Create the admin certificate and private key

kubectl communicates with the apiserver over the secure HTTPS port, and the apiserver authenticates and authorizes the certificate presented to it

As the cluster management tool, kubectl must be granted the highest privileges, so an admin certificate with full privileges is created here

vim admin-csr.json

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "4Paradigm"
    }
  ]
}

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem -ca-key=/etc/kubernetes/cert/ca-key.pem -config=/etc/kubernetes/cert/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

Check the certificate and key: ls admin*

3. Create the kubeconfig file

# Set cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/cert/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kubectl.kubeconfig
# Set client authentication parameters
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kubectl.kubeconfig
# Set context parameters
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kubectl.kubeconfig
# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

4. Distribute the kubeconfig file

for node_ip in ${NODE_IPS[@]}                                                      
  do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "mkdir -p ~/.kube"
    scp kubectl.kubeconfig k8s@${node_ip}:~/.kube/config
    ssh root@${node_ip} "mkdir -p ~/.kube"
    scp kubectl.kubeconfig root@${node_ip}:~/.kube/config
  done

IV. Deploying the etcd cluster

1. Download and distribute the etcd binaries

Download the latest release package

wget https://github.com/coreos/etcd/releases/download/v3.3.7/etcd-v3.3.7-linux-amd64.tar.gz

tar zxf etcd-v3.3.7-linux-amd64.tar.gz

Distribute the binaries to every node in the cluster

for node_ip in ${NODE_IPS[@]}                                                      
  do
    echo ">>> ${node_ip}"
    scp etcd-v3.3.7-linux-amd64/etcd* k8s@${node_ip}:/opt/k8s/bin
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

2. Create the certificate signing request

vim etcd-csr.json

{                                                                                  
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.11.25",
    "192.168.11.26",
    "192.168.11.27"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}

Generate the certificate and private key

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem -ca-key=/etc/kubernetes/cert/ca-key.pem -config=/etc/kubernetes/cert/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

Check the generated certificate and key: ls etcd*

Distribute the generated certificate and key

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/etcd/cert && chown -R k8s /etc/etcd/cert"
    scp etcd*.pem k8s@${node_ip}:/etc/etcd/cert/
  done

3. Create the etcd systemd unit template file

vim etcd.service.template

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
User=k8s
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/var/lib/etcd \
  --name=##NODE_NAME## \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://##NODE_IP##:2380 \
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \
  --listen-client-urls=https://##NODE_IP##:2379,https://127.0.0.1:2379 \
  --advertise-client-urls=https://##NODE_IP##:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=${ETCD_NODES} \
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create and distribute an etcd systemd unit file for each node

for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service 
  done

Check the created files: ls *.service

Distribute the generated systemd unit files

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /var/lib/etcd && chown -R k8s /var/lib/etcd" 
    scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
  done

4. Start the etcd service remotely over SSH

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd &"
  done

5. Check whether the service started

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "systemctl status etcd|grep Active"
  done

The service has started successfully only if its state is running

If the service is not active, running the start loop again may bring it up; if it still fails, check the logs: journalctl -u etcd

6. Verify the service status

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    --endpoints=https://${node_ip}:2379 \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem endpoint health;
  done

If every endpoint reports that it is healthy, the etcd cluster is healthy.

V. Deploying the flannel network

1. Download and distribute the flanneld binaries

mkdir flannel

wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

tar -zxf flannel-v0.10.0-linux-amd64.tar.gz -C flannel

Distribute the flanneld files

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp  flannel/{flanneld,mk-docker-opts.sh} k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

2. Create the flannel certificate and private key

vim flanneld-csr.json

{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}

Generate the certificate and private key

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

Check the generated certificate and key: ls flanneld*pem

Distribute the generated certificate and key to all nodes

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/flanneld/cert && chown -R k8s /etc/flanneld"
    scp flanneld*.pem k8s@${node_ip}:/etc/flanneld/cert
  done

3. Write the cluster Pod network configuration into etcd

etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  set ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'

4. Create the flanneld systemd unit file

export IFACE=eno16777736

vim flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \
  -etcd-certfile=/etc/flanneld/cert/flanneld.pem \
  -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \
  -etcd-endpoints=https://192.168.11.25:2379,https://192.168.11.26:2379,https://192.168.11.27:2379 \
  -etcd-prefix=/kubernetes/network \
  -iface=eno16777736
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

5. Distribute the flanneld systemd unit file to all nodes

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp flanneld.service root@${node_ip}:/etc/systemd/system/
  done

6. Start the flanneld service

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
  done

Check the service status

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "systemctl status flanneld|grep Active"
  done

The service has started successfully only if its state is running

If the service is not active, running the start loop again may bring it up; if it still fails, check the logs: journalctl -u flanneld

7. Check the cluster Pod network configuration stored in etcd

etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/config

8. List the allocated Pod subnets (/24)

etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  ls ${FLANNEL_ETCD_PREFIX}/subnets


9. Look up the node IP and flannel interface address for a given Pod subnet

etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/subnets/172.30.81.0-24

Note: change the subnet key at the end of the command to one of the subnets listed in the previous step
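
To avoid copying the keys by hand, every allocated subnet and its backend data can be dumped in one loop; a small sketch using the same etcdctl options as above:

etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  ls ${FLANNEL_ETCD_PREFIX}/subnets | while read subnet
  do
    echo ">>> ${subnet}"
    etcdctl \
      --endpoints=${ETCD_ENDPOINTS} \
      --ca-file=/etc/kubernetes/cert/ca.pem \
      --cert-file=/etc/flanneld/cert/flanneld.pem \
      --key-file=/etc/flanneld/cert/flanneld-key.pem \
      get ${subnet}
  done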


10. Verify that the nodes can reach each other over the Pod network

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "/usr/sbin/ip addr show flannel.1|grep -w inet"
  done


11. From each node, ping all the flannel interface IPs to make sure they are reachable

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "ping -c 1 172.30.97.0"
    ssh ${node_ip} "ping -c 1 172.30.46.0"
    ssh ${node_ip} "ping -c 1 172.30.26.0"
  done

Note: adjust these addresses to match the results of the query above

VI. Deploying the master nodes

1. Download the latest binary release

wget https://dl.k8s.io/v1.10.4/kubernetes-server-linux-amd64.tar.gz

tar zxf kubernetes-server-linux-amd64.tar.gz

cd kubernetes

tar zxf  kubernetes-src.tar.gz

Copy the binaries to all master nodes

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp server/bin/* k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

VII. Deploying haproxy (the high-availability components)

1. Install the software on the three nodes

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "yum install -y keepalived haproxy"
  done

2. Configure and distribute the haproxy configuration file

 vim haproxy.cfg

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log     global
    timeout connect 5000
    timeout client  10m
    timeout server  10m

listen  admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:123456
    stats hide-version
    stats admin if TRUE

listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance source
    server 192.168.11.25 192.168.11.25:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.11.26 192.168.11.26:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.11.27 192.168.11.27:6443 check inter 2000 fall 2 rise 2 weight 1

3. Start the haproxy service

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl restart haproxy"
  done

4. Check the haproxy service status

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status haproxy|grep Active"
  done

The service has started successfully only if its state is running

If the service is not active, running the start loop again may bring it up; if it still fails, check the logs: journalctl -u haproxy

5. Check that haproxy is listening on port 8443

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "netstat -lnpt|grep haproxy"
  done


6. Configure and distribute the keepalived configuration files

Role assignment:

master: 192.168.11.25

backup: 192.168.11.26 and 192.168.11.27

Master configuration file:

vim keepalived-master.conf

global_defs {
    router_id lb-master-105
}

vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -30
}

vrrp_instance VI-kube-master {
    state MASTER
    priority 120
    dont_track_primary
    interface eno16777736
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        192.168.11.24
    }
}

Backup configuration file:

vim keepalived-backup.conf

global_defs {
    router_id lb-backup-105
}

vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -30
}

vrrp_instance VI-kube-master {
    state BACKUP
    priority 110
    dont_track_primary
    interface eno16777736
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        192.168.11.24
    }
}

7. Distribute the keepalived configuration files

Master configuration file:

scp keepalived-master.conf root@192.168.11.25:/etc/keepalived/keepalived.conf

Backup configuration files:

scp keepalived-backup.conf root@192.168.11.26:/etc/keepalived/keepalived.conf

scp keepalived-backup.conf root@192.168.11.27:/etc/keepalived/keepalived.conf

8. Start the keepalived service

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl restart keepalived"
  done

9. Check the keepalived service status

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status keepalived|grep Active"
  done

The service has started successfully only if its state is running

If the service is not active, running the start loop again may bring it up; if it still fails, check the logs: journalctl -u keepalived

10. Find the node holding the VIP and make sure the VIP can be pinged

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "/usr/sbin/ip addr show ${VIP_IF}"
    ssh ${node_ip} "ping -c 1 ${MASTER_VIP}"
  done

11. View the haproxy status page by visiting port 10080 on the master's IP address

For example, in this setup the URL is http://192.168.11.25:10080/status

Note: unless they have been changed, the user is admin and the password is 123456
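
The same status page can also be checked from the command line; a quick sketch using curl with HTTP basic auth and the credentials configured above:

curl -s -u admin:123456 http://192.168.11.25:10080/status | head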


VIII. Deploying the kube-apiserver component

1. Create the kubernetes certificate and private key

vim kubernetes-csr.json

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.11.25",
    "192.168.11.26",
    "192.168.11.27",
    "192.168.11.24",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}

Generate the certificate and private key

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

Check the generated certificate and key: ls kubernetes*pem

Distribute the generated certificate and key files to the master nodes

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert/ && sudo chown -R k8s /etc/kubernetes/cert/"
    scp kubernetes*.pem k8s@${node_ip}:/etc/kubernetes/cert/
  done

2. Create the encryption configuration file

vim encryption-config.yaml

kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: 0Av9Vvddzvr6l4vmpHNAdq2RgMTvdrhdxr9x3H39Jsc=
      - identity: {}
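
The secret above is a hard-coded example; it can instead be set to the ENCRYPTION_KEY generated in path.sh. A small sketch, assuming path.sh has been sourced in the current shell:

sed -i "s@secret: .*@secret: ${ENCRYPTION_KEY}@" encryption-config.yaml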

3. Copy the encryption configuration file to the /etc/kubernetes/ directory on each node

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp encryption-config.yaml root@${node_ip}:/etc/kubernetes/
  done

4. Create the kube-apiserver systemd unit template file

vim kube-apiserver.service.template

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --advertise-address=##NODE_IP## \
  --bind-address=##NODE_IP## \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=8400-9000 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --service-account-key-file=/etc/kubernetes/cert/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://192.168.11.25:2379,https://192.168.11.26:2379,https://192.168.11.27:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
User=k8s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5. Create and distribute a kube-apiserver systemd unit file for each node

for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${NODE_IPS[i]}.service 
  done

Check the generated unit files: ls kube-apiserver*.service

Distribute the generated systemd unit files

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
    scp kube-apiserver-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-apiserver.service
  done

6. Start the kube-apiserver service

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
  done

7. Check the kube-apiserver run state

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-apiserver |grep 'Active:'"
  done

The service has started successfully only if its state is running

If the service is not active, running the start loop again may bring it up; if it still fails, check the logs: journalctl -u kube-apiserver

8. Print the data written to etcd by kube-apiserver

ETCDCTL_API=3 etcdctl \
    --endpoints=${ETCD_ENDPOINTS} \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem \
    get /registry/ --prefix --keys-only

9. Commands for checking cluster status

kubectl cluster-info

kubectl get all --all-namespaces

kubectl get componentstatuses

10. Check the ports kube-apiserver is listening on

netstat -lnpt|grep kube

11. Grant the kubernetes certificate permission to access the kubelet API

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

IX. Deploying a highly available kube-controller-manager cluster

1. Create the kube-controller-manager certificate and private key

vim kube-controller-manager-csr.json

{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.11.25",
      "192.168.11.26",
      "192.168.11.27"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "4Paradigm"
      }
    ]
}

Generate the certificate and private key

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Distribute the generated certificate and key to all master nodes

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-controller-manager*.pem k8s@${node_ip}:/etc/kubernetes/cert/
  done

2. Create and distribute the kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Distribute the kubeconfig to all master nodes

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-controller-manager.kubeconfig k8s@${node_ip}:/etc/kubernetes/
  done

3. Create and distribute the kube-controller-manager systemd unit file

vim kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-controller-manager \
  --port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
User=k8s

[Install]
WantedBy=multi-user.target

Distribute the systemd unit file to all master nodes

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-controller-manager.service root@${node_ip}:/etc/systemd/system/
  done

4. Start the kube-controller-manager service

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
  done

5. Check the service run state

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "systemctl status kube-controller-manager|grep Active"
  done

The service has started successfully only if its state is running

If the service is not active, running the start loop again may bring it up; if it still fails, check the logs: journalctl -u kube-controller-manager

6. Check the exposed metrics

netstat -anpt | grep kube-controll

kube-controller-manager listens on port 10252 and serves HTTPS requests

curl -s --cacert /etc/kubernetes/cert/ca.pem https://127.0.0.1:10252/metrics | head

7. Test the high availability of the kube-controller-manager cluster

Stop the kube-controller-manager service on one or two nodes, then watch the logs on the other nodes to see whether one of them takes over the leader lock.
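
A sketch of one way to run this test, assuming the commands are issued from node1:

# Stop kube-controller-manager on node1
ssh root@192.168.11.25 "systemctl stop kube-controller-manager"
# Watch the logs on another node for a leader-election message
ssh root@192.168.11.26 "journalctl -u kube-controller-manager | tail -n 20"
# Start the stopped instance again afterwards
ssh root@192.168.11.25 "systemctl start kube-controller-manager"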

8. Check the current leader

kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml

X. Deploying a highly available kube-scheduler cluster

1. Create the kube-scheduler certificate and private key

vim kube-scheduler-csr.json

{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.11.25",
      "192.168.11.26",
      "192.168.11.27"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "4Paradigm"
      }
    ]
}

Generate the certificate and private key

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

2. Create and distribute the kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Distribute the kubeconfig to all master nodes

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-scheduler.kubeconfig k8s@${node_ip}:/etc/kubernetes/
  done

3. Create and distribute the kube-scheduler systemd unit file

 vim kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
User=k8s

[Install]
WantedBy=multi-user.target

Distribute the systemd unit file to all master nodes

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-scheduler.service root@${node_ip}:/etc/systemd/system/
  done

4. Start the kube-scheduler service

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
  done

5. Check the service run state

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "systemctl status kube-scheduler|grep Active"
  done

The service has started successfully only if its state is running

If the service is not active, running the start loop again may bring it up; if it still fails, check the logs: journalctl -u kube-scheduler

6. Check the exposed metrics

netstat -anpt | grep kube-scheduler

kube-scheduler listens on port 10251, which serves plain HTTP

curl -s http://127.0.0.1:10251/metrics | head

7. Check the current leader

kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml

XI. Deploying the worker nodes

1. Install dependency packages (on all three machines)

yum install -y epel-release conntrack ipvsadm ipset jq iptables curl sysstat libseccomp

/usr/sbin/modprobe ip_vs

XII. Deploying the Docker component

1. Download and install Docker

wget https://download.docker.com/linux/static/stable/x86_64/docker-18.03.1-ce.tgz

tar zxf docker-18.03.1-ce.tgz

2. Distribute it to each node

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp docker/docker*  k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

3. Create and distribute the systemd unit file

vim docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.io

[Service]
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd --log-level=error $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

Distribute the systemd unit file to all worker machines:

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp docker.service root@${node_ip}:/etc/systemd/system/
  done

4. Configure and distribute the Docker configuration file

vim docker-daemon.json

{
    "registry-mirrors": ["https://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
    "max-concurrent-downloads": 20
}

Distribute the Docker config file to all worker nodes

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p  /etc/docker/"
    scp docker-daemon.json root@${node_ip}:/etc/docker/daemon.json
  done

5. Start the Docker service

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl stop firewalld && systemctl disable firewalld"
    ssh root@${node_ip} "/usr/sbin/iptables -F && /usr/sbin/iptables -X && /usr/sbin/iptables -F -t nat && /usr/sbin/iptables -X -t nat"
    ssh root@${node_ip} "/usr/sbin/iptables -P FORWARD ACCEPT"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
    ssh root@${node_ip} 'for intf in /sys/devices/virtual/net/docker0/brif/*; do echo 1 > $intf/hairpin_mode; done'
    ssh root@${node_ip} "sudo sysctl -p /etc/sysctl.d/kubernetes.conf"
  done

6. Check the Docker service status

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "systemctl status docker|grep Active"
  done

The service has started successfully only if its state is running

If the service is not active, running the start loop again may bring it up; if it still fails, check the logs: journalctl -u docker

7. Check the docker0 bridge

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0"
  done

Confirm that on each worker node the docker0 bridge and the flannel.1 interface have IPs in the same network segment

XIII. Deploying the kubelet component

1. Create the kubelet bootstrap kubeconfig files

for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
# Create the token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_name} \
      --kubeconfig ~/.kube/config)

# Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

# Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
# Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

# Set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  done

Check the tokens kubeadm created for each node

kubeadm token list --kubeconfig ~/.kube/config

Check the Secrets associated with each token

kubectl get secrets -n kube-system

2. Distribute the bootstrap kubeconfig files to all worker nodes

for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kubelet-bootstrap-${node_name}.kubeconfig k8s@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done

3. Create and distribute the kubelet configuration file (the cluster DNS domain and DNS service IP from path.sh are written in literally below, since shell variables are not expanded in a file edited with vim)

vim kubelet.config.json.template

{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "##NODE_IP##",
  "port": 10250,
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.254.0.2"]
}

Create and distribute a kubelet config file for each node

for node_ip in ${NODE_IPS[@]}
  do 
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kubelet.config.json.template > kubelet.config-${node_ip}.json
    scp kubelet.config-${node_ip}.json root@${node_ip}:/etc/kubernetes/kubelet.config.json
  done

4. Create and distribute the kubelet systemd unit file

vim kubelet.service.template

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/k8s/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/cert \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.config.json \
  --hostname-override=##NODE_NAME## \
  --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \
  --allow-privileged=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Create and distribute a kubelet systemd unit file for each node

for node_name in ${NODE_NAMES[@]}
  do 
    echo ">>> ${node_name}"
    sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
    scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
  done

5. Kubelet start-up failure logs (Bootstrap Token Auth and granting permissions)

By default this user and group are not allowed to create CSRs, so kubelet fails to start with errors like the following

journalctl -u kubelet -a | grep -A 2 'certificatesigningrequests'

May 06 06:42:36 kube-node1 kubelet[26986]: F0506 06:42:36.314378   26986 server.go:233] failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:lemy40" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.

The fix is to create a clusterrolebinding

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

6. Start the kubelet service

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /var/lib/kubelet"
    ssh root@${node_ip} "/usr/sbin/swapoff -a"
    ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done

7. Approve the kubelet CSR requests

Manually approving CSR requests

List the CSRs: kubectl get csr

NAME                                                   AGE       REQUESTOR                 CONDITION
node-csr-QzuuQiuUfcSdp3j5W4B2UOuvQ_n9aTNHAlrLzVFiqrk   43s       system:bootstrap:zkiem5   Pending
node-csr-oVbPmU-ikVknpynwu0Ckz_MvkAO_F1j0hmbcDa__sGA   27s       system:bootstrap:mkus5s   Pending
node-csr-u0E1-ugxgotO_9FiGXo8DkD6a7-ew8sX2qPE6KPS2IY   13m       system:bootstrap:k0s2bj   Pending

kubectl certificate approve node-csr-QzuuQiuUfcSdp3j5W4B2UOuvQ_n9aTNHAlrLzVFiqrk
certificatesigningrequest.certificates.k8s.io "node-csr-QzuuQiuUfcSdp3j5W4B2UOuvQ_n9aTNHAlrLzVFiqrk" approved

Check the approval result: kubectl describe csr node-csr-QzuuQiuUfcSdp3j5W4B2UOuvQ_n9aTNHAlrLzVFiqrk
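
If several requests are pending they can be approved in one go; a small convenience sketch (not part of the original steps):

kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve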

Automatically approving CSR requests

vim csr-crb.yaml

 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: auto-approve-csrs-for-group
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
   apiGroup: rbac.authorization.k8s.io
---
 # To let a node of the group "system:nodes" renew its own credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-client-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
   apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
 # To let a node of the group "system:nodes" renew its own server credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-server-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: approve-node-server-renewal-csr
   apiGroup: rbac.authorization.k8s.io

Apply the configuration: kubectl apply -f csr-crb.yaml

8. Check the kubelet status

kubectl get csr

kubectl get nodes

9. Kubelet API authentication and authorization

The kubelet is configured with the following authentication parameters

authentication.anonymous.enabled: set to false; anonymous access to port 10250 is not allowed

authentication.x509.clientCAFile: specifies the CA certificate used to sign client certificates, enabling HTTPS certificate authentication

authentication.webhook.enabled=true: enables HTTPS bearer token authentication

authorization.mode=Webhook: enables RBAC authorization

curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.11.25:10250/metrics
Unauthorized

curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.11.25:10250/metrics
Unauthorized

Certificate authentication and authorization:

Using a certificate with insufficient permissions

curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.11.25:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)

Using the admin certificate with the highest privileges, created earlier when deploying the kubectl command-line tool

curl -s --cacert /etc/kubernetes/cert/ca.pem --cert ./admin.pem --key ./admin-key.pem https://192.168.11.25:10250/metrics|head

Bearer token authentication and authorization

kubectl create sa kubelet-api-test

kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
echo ${TOKEN}

curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.11.25:10250/metrics|head

10. cAdvisor and metrics

Visiting https://192.168.11.25:4194/containers/ in a browser shows the cAdvisor monitoring page

Visiting https://192.168.11.25:10250/metrics and https://192.168.11.25:10250/metrics/cadvisor returns the kubelet and cAdvisor metrics respectively
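
The same endpoints can also be queried from the command line with the admin certificate; an illustrative check run against node1:

curl -s --cacert /etc/kubernetes/cert/ca.pem --cert ./admin.pem --key ./admin-key.pem https://192.168.11.25:10250/metrics | head
curl -s --cacert /etc/kubernetes/cert/ca.pem --cert ./admin.pem --key ./admin-key.pem https://192.168.11.25:10250/metrics/cadvisor | head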

11. Retrieve the kubelet configuration

Using the admin certificate with the highest privileges, created earlier when deploying the kubectl command-line tool

curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert ./admin.pem --key ./admin-key.pem ${KUBE_APISERVER}/api/v1/nodes/node1/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'

{
  "syncFrequency": "1m0s",
  "fileCheckFrequency": "20s",
  "httpCheckFrequency": "20s",
  "address": "172.27.129.80",
  "port": 10250,
  "readOnlyPort": 10255,
  "authentication": {
    "x509": {},
    "webhook": {
      "enabled": false,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": true
    }
  },
  "authorization": {
    "mode": "AlwaysAllow",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "registryPullQPS": 5,
  "registryBurst": 10,
  "eventRecordQPS": 5,
  "eventBurst": 10,
  "enableDebuggingHandlers": true,
  "healthzPort": 10248,
  "healthzBindAddress": "127.0.0.1",
  "oomScoreAdj": -999,
  "clusterDomain": "cluster.local.",
  "clusterDNS": [
    "10.254.0.2"
  ],
  "streamingConnectionIdleTimeout": "4h0m0s",
  "nodeStatusUpdateFrequency": "10s",
  "imageMinimumGCAge": "2m0s",
  "imageGCHighThresholdPercent": 85,
  "imageGCLowThresholdPercent": 80,
  "volumeStatsAggPeriod": "1m0s",
  "cgroupsPerQOS": true,
  "cgroupDriver": "cgroupfs",
  "cpuManagerPolicy": "none",
  "cpuManagerReconcilePeriod": "10s",
  "runtimeRequestTimeout": "2m0s",
  "hairpinMode": "promiscuous-bridge",
  "maxPods": 110,
  "podPidsLimit": -1,
  "resolvConf": "/etc/resolv.conf",
  "cpuCFSQuota": true,
  "maxOpenFiles": 1000000,
  "contentType": "application/vnd.kubernetes.protobuf",
  "kubeAPIQPS": 5,
  "kubeAPIBurst": 10,
  "serializeImagePulls": false,
  "evictionHard": {
    "imagefs.available": "15%",
    "memory.available": "100Mi",
    "nodefs.available": "10%",
    "nodefs.inodesFree": "5%"
  },
  "evictionPressureTransitionPeriod": "5m0s",
  "enableControllerAttachDetach": true,
  "makeIPTablesUtilChains": true,
  "iptablesMasqueradeBit": 14,
  "iptablesDropBit": 15,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "failSwapOn": true,
  "containerLogMaxSize": "10Mi",
  "containerLogMaxFiles": 5,
  "enforceNodeAllocatable": [
    "pods"
  ],
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1"
}

XIV. Deploying the kube-proxy component

1. Create the kube-proxy certificate

vim kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}

Generate the certificate and private key

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

2. Create and distribute the kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Distribute the kubeconfig file

for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kube-proxy.kubeconfig k8s@${node_name}:/etc/kubernetes/
  done

3. Create the kube-proxy configuration file (the Pod network CIDR from path.sh is written in literally below)

vim kube-proxy.config.yaml.template

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ##NODE_IP##
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.30.0.0/16
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
kind: KubeProxyConfiguration
metricsBindAddress: ##NODE_IP##:10249
mode: "ipvs"

Create and distribute a kube-proxy config file for each node

for (( i=0; i < 3; i++ ))
  do 
    echo ">>> ${NODE_NAMES[i]}"
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy.config.yaml.template > kube-proxy-${NODE_NAMES[i]}.config.yaml
    scp kube-proxy-${NODE_NAMES[i]}.config.yaml root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy.config.yaml
  done

4. Create and distribute the kube-proxy systemd unit file

vim kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.config.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Distribute the kube-proxy systemd unit file

for node_name in ${NODE_NAMES[@]}
  do 
    echo ">>> ${node_name}"
    scp kube-proxy.service root@${node_name}:/etc/systemd/system/
  done

5. Start the kube-proxy service

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /var/lib/kube-proxy"
    ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  done

6. Check the start-up result

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "systemctl status kube-proxy|grep Active"
  done

The service has started successfully only if its state is running

If the service is not active, running the start loop again may bring it up; if it still fails, check the logs: journalctl -u kube-proxy

7. Check the listening ports and metrics

netstat -anpt | grep kube-prox

10249:http prometheus metrics port

10256:http healthz port
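
A quick way to check both ports; a sketch assuming it is run against node1 (both endpoints are plain HTTP, as noted above):

curl -s http://192.168.11.25:10249/metrics | head
curl -s http://192.168.11.25:10256/healthz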

Check the IPVS routing rules

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
  done

Finally, verify that the cluster works

Check node status: kubectl get nodes

Check pod status: kubectl get pods -o wide
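
A simple end-to-end smoke test; a sketch only, assuming the nodes can pull the nginx image from a registry:

# Create a small test deployment and expose it through a NodePort
kubectl run nginx-test --image=nginx --replicas=2 --port=80
kubectl expose deployment nginx-test --type=NodePort --port=80
# Pod IPs should fall inside the 172.30.0.0/16 Pod network
kubectl get pods -o wide
# Note the allocated NodePort (within 8400-9000) and curl it on any node IP
kubectl get svc nginx-test
# Clean up when done
kubectl delete svc,deployment nginx-test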

Deploying the dashboard add-on

Modify the configuration files under /opt/k8s/kubernetes/cluster/addons/dashboard (the service is switched to type NodePort, as the output below shows, so the dashboard can be reached from outside the cluster)

cp dashboard-controller.yaml{,.orig}

diff dashboard-controller.yaml{,.orig}

cp dashboard-service.yaml{,.orig}

diff dashboard-service.yaml.orig dashboard-service.yaml

Apply all the definition files; check which files are present first: ls *.yaml

kubectl create -f .

kubectl get deployment kubernetes-dashboard  -n kube-system

NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1         1         1            1           2m

kubectl --namespace kube-system get pods -o wide

NAME                                    READY     STATUS    RESTARTS   AGE       IP            NODE
coredns-77c989547b-6l6jr                1/1       Running   0          58m       172.30.39.3   kube-node3
coredns-77c989547b-d9lts                1/1       Running   0          58m       172.30.81.3   kube-node1
kubernetes-dashboard-65f7b4f486-wgc6j   1/1       Running   0          2m        172.30.81.5   kube-node1

kubectl get services kubernetes-dashboard -n kube-system

NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.254.96.204   <none>        443:8607/TCP   2m

The dashboard can now be accessed.
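
With the NodePort shown above (8607 in this example), the dashboard is reachable in a browser at https://<any-node-IP>:8607. Logging in requires a bearer token; a sketch of one common approach, assuming a cluster-admin service account is acceptable in your environment:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Print the token of the automatically created secret
kubectl describe secret -n kube-system $(kubectl get secret -n kube-system | grep dashboard-admin | awk '{print $1}')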
