Kubernetes 1.9 HA Cluster Installation (Using kubeadm)

Previously we installed a simple Kubernetes cluster with one master node and three worker nodes, and etcd was not set up as a cluster.
This time we will install a Kubernetes cluster with three master nodes and a clustered etcd.

Node planning

This installation uses three master nodes and three worker nodes.
The etcd cluster is installed on the master nodes.
We also reserve one virtual IP for keepalived.

Node IP
M0 10.xx.xx.xx
M1 10.xx.xx.xx
M2 10.xx.xx.xx
N0 10.xx.xx.xx
N1 10.xx.xx.xx
N2 10.xx.xx.xx

virtual_ipaddress: 10.xx.xx.xx


Preparation before starting the cluster (run as the root user)

Node preparation (run on every machine)

This includes setting hostnames, disabling the firewall, and so on.
Kubernetes identifies nodes by hostname, so make sure every host gets a unique name.
The firewall is disabled to avoid unnecessary network problems.

# Replace ${hostname} with the planned hostname, e.g. M0, N0, N1
sudo hostnamectl set-hostname ${hostname}
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i -re '/^\s*SELINUX=/s/^/#/' -e '$i\\SELINUX=disabled' /etc/selinux/config

Set up passwordless SSH trust between the nodes to make copying files around easier later. You can set it up quickly with ssh-copy-id, or do it by hand; there are plenty of tutorials online.
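
A minimal sketch using ssh-copy-id (assuming the root user and that password login is still enabled; <other-node-ip> is a placeholder, repeat for every other node):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@<other-node-ip>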

Install Docker (run on every machine)

yum install docker -y
systemctl enable docker && systemctl start docker

Change Docker's log driver to json-file. This does not affect the installation; it just makes it easier to set up an EFK log-collection stack later.
docker info shows the current log driver; CentOS 7 defaults to journald.
Different Docker versions are configured differently: the latest official docs say to edit /etc/docker/daemon.json, but the version I installed is 1.12.6, which is configured as follows.

vim /etc/sysconfig/docker
# Change OPTIONS to the following, then restart docker:
OPTIONS='--selinux-enabled --log-driver=json-file --signature-verification=false'
systemctl restart docker
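
For newer Docker versions that read /etc/docker/daemon.json, a minimal sketch would be (not needed for 1.12.6; assumes a Docker release that supports daemon.json):

cat >/etc/docker/daemon.json <<EOF
{
  "log-driver": "json-file"
}
EOF
systemctl restart docker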

Install kubeadm, kubelet, and kubectl (run on every machine)

  • kubeadm: a tool for quickly bootstrapping a k8s cluster
  • kubelet: the basic k8s node agent; it creates and manages pods and containers and communicates with the cluster master
  • kubectl: the k8s command-line client, used to send commands to the cluster

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl
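
Note that this repo now carries versions newer than 1.9; to follow this guide exactly you can pin the versions (a hedged example, the exact release suffix may differ):

yum install -y kubelet-1.9.0 kubeadm-1.9.0 kubectl-1.9.0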

The official docs note that some users on RHEL/CentOS 7 have seen traffic routed incorrectly because iptables was bypassed, so net.bridge.bridge-nf-call-iptables must be set to 1 in the sysctl config.

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Start kubelet:

systemctl enable kubelet && systemctl start kubelet

That completes the preparation. From now on kubelet will restart every few seconds, crash-looping until it receives instructions from kubeadm.
So it is normal for systemctl status kubelet to show that kubelet is not running; check a few times and you will see kubelet cycling between stopped and restarted.
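
You can watch this loop directly (both commands are standard on CentOS 7):

systemctl status kubelet
journalctl -u kubelet --no-pager | tail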

Install the etcd cluster (on the three master nodes)

Create the etcd CA certificate

  1. Install cfssl and cfssljson

    curl -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    curl -o /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    chmod +x /usr/local/bin/cfssl*
  2. SSH to the etcd0 node (master0 in my plan) and run the commands below.
    When they finish, you will see two files, ca-config.json and ca-csr.json, under /etc/kubernetes/pki/etcd.

    mkdir -p /etc/kubernetes/pki/etcd
    cd /etc/kubernetes/pki/etcd
    
    cat >ca-config.json <<EOF
    {
      "signing": {
        "default": {
          "expiry": "43800h"
        },
        "profiles": {
          "server": {
            "expiry": "43800h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ]
          },
          "client": {
            "expiry": "43800h",
            "usages": [
              "signing",
              "key encipherment",
              "client auth"
            ]
          },
          "peer": {
            "expiry": "43800h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ]
          }
        }
      }
    }
    EOF
    
    cat >ca-csr.json <<EOF
    {
      "CN": "etcd",
      "key": {
        "algo": "rsa",
        "size": 2048
      }
    }
    EOF
  3. Generate the CA certificate

    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
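
cfssljson -bare ca produces ca.pem, ca-key.pem, and ca.csr in the current directory. You can optionally inspect the new CA:

openssl x509 -in /etc/kubernetes/pki/etcd/ca.pem -noout -subject -dates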

Generate the etcd client certificate

Run the following on the etcd0 node; it generates two files, client.pem and client-key.pem.

cat >client.json <<EOF
{
  "CN": "client",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client

Generate the etcd server and peer certificates

  1. Set the PEER_NAME and PRIVATE_IP environment variables (run on every etcd machine)

    # Note: ens192 below is the name of your actual NIC; it may be eth1 or similar. Check with ip addr.
    export PEER_NAME=$(hostname)
    export PRIVATE_IP=$(ip addr show ens192 | grep -Po 'inet \K[\d.]+')
  2. Copy the CA just generated on etcd0 to the other two etcd machines (run on the two etcd peers).
    This relies on the SSH trust set up earlier.

    mkdir -p /etc/kubernetes/pki/etcd
    cd /etc/kubernetes/pki/etcd
    scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca.pem .
    scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca-key.pem .
    scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client.pem .
    scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client-key.pem .
    scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca-config.json .
  3. On every etcd machine, run the following to generate peer.pem, peer-key.pem, server.pem, and server-key.pem

    # Start from cfssl's default CSR template, then substitute in this node's
    # identity: the CN becomes $PEER_NAME, and the hosts entries become
    # $PRIVATE_IP and $PEER_NAME.
    cfssl print-defaults csr > config.json
    sed -i '0,/CN/{s/example\.net/'"$PEER_NAME"'/}' config.json
    sed -i 's/www\.example\.net/'"$PRIVATE_IP"'/' config.json
    sed -i 's/example\.net/'"$PEER_NAME"'/' config.json
    
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server config.json | cfssljson -bare server
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer config.json | cfssljson -bare peer
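
Optionally confirm that this node's IP and hostname made it into the server certificate's SANs:

openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'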

Start the etcd cluster (run on every etcd machine)

There are two ways to run etcd: directly on the VMs, or as static pods on k8s. I use the first here, running it directly on the VMs.

  1. Install etcd

    cd /tmp
    export ETCD_VERSION=v3.1.10
    curl -sSL https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz | tar -xzv --strip-components=1 -C /usr/local/bin/
    rm -rf etcd-$ETCD_VERSION-linux-amd64*
  2. Generate etcd's environment file; it will be used later

    touch /etc/etcd.env
    echo "PEER_NAME=$PEER_NAME" >> /etc/etcd.env
    echo "PRIVATE_IP=$PRIVATE_IP" >> /etc/etcd.env
  3. Create the systemd unit file for the etcd service.
    Note: replace <etcd0-ip-address> and the other placeholders below with the VMs' real IP addresses; m0, m1, and m2 are the etcd member names.

    cat >/etc/systemd/system/etcd.service <<EOF
    [Unit]
    Description=etcd
    Documentation=https://github.com/coreos/etcd
    Conflicts=etcd.service
    Conflicts=etcd2.service
    
    [Service]
    EnvironmentFile=/etc/etcd.env
    Type=notify
    Restart=always
    RestartSec=5s
    LimitNOFILE=40000
    TimeoutStartSec=0
    
    ExecStart=/usr/local/bin/etcd --name ${PEER_NAME} \
     --data-dir /var/lib/etcd \
     --listen-client-urls https://${PRIVATE_IP}:2379 \
     --advertise-client-urls https://${PRIVATE_IP}:2379 \
     --listen-peer-urls https://${PRIVATE_IP}:2380 \
     --initial-advertise-peer-urls https://${PRIVATE_IP}:2380 \
     --cert-file=/etc/kubernetes/pki/etcd/server.pem \
     --key-file=/etc/kubernetes/pki/etcd/server-key.pem \
     --client-cert-auth \
     --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
     --peer-cert-file=/etc/kubernetes/pki/etcd/peer.pem \
     --peer-key-file=/etc/kubernetes/pki/etcd/peer-key.pem \
     --peer-client-cert-auth \
     --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
     --initial-cluster m0=https://<etcd0-ip-address>:2380,m1=https://<etcd1-ip-address>:2380,m2=https://<etcd2-ip-address>:2380 \
     --initial-cluster-token my-etcd-token \
     --initial-cluster-state new
    
    [Install]
    WantedBy=multi-user.target
    EOF
  4. Start the etcd cluster

    systemctl daemon-reload
    systemctl start etcd
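
Once etcd is started on all three nodes, a quick sanity check (etcdctl from the v3.1 tarball defaults to the v2 API, which uses these flags):

etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.pem \
 --cert-file /etc/kubernetes/pki/etcd/client.pem \
 --key-file /etc/kubernetes/pki/etcd/client-key.pem \
 --endpoints https://${PRIVATE_IP}:2379 cluster-health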

Set up a load balancer for the master nodes (keepalived; run on the three master nodes)

  1. Install keepalived

    yum install keepalived -y
  2. Edit the configuration file (/etc/keepalived/keepalived.conf)

    • state: MASTER (on the primary master node m0) or BACKUP (on the other master nodes)
    • interface: the name of the NIC (ens192 in my case)
    • priority: the weight; the primary master should be higher than the others (e.g. 101 on m0, 100 on the rest)
    • auth_pass: any random string
    • virtual_ipaddress: the virtual IP reserved for the master nodes

    ! Configuration File for keepalived

    global_defs {
      router_id LVS_DEVEL
    }

    vrrp_script check_apiserver {
      script "/etc/keepalived/check_apiserver.sh"
      interval 3
      weight -2
      fall 10
      rise 2
    }

    vrrp_instance VI_1 {
      state <STATE>
      interface <INTERFACE>
      virtual_router_id 51
      priority <PRIORITY>
      authentication {
        auth_type PASS
        auth_pass 4be37dc3b4c90194d1600c483e10ad1d
      }
      virtual_ipaddress {
        <VIRTUAL-IP>
      }
      track_script {
        check_apiserver
      }
    }
  3. Health-check script
    Replace <VIRTUAL-IP> below with the reserved virtual IP. Save the script as /etc/keepalived/check_apiserver.sh (the path referenced in the config above) and make it executable with chmod +x.

    #!/bin/sh

    errorExit() {
      echo "*** $*" 1>&2
      exit 1
    }

    curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET https://localhost:6443/"
    if ip addr | grep -q <VIRTUAL-IP>; then
      curl --silent --max-time 2 --insecure https://<VIRTUAL-IP>:6443/ -o /dev/null || errorExit "Error GET https://<VIRTUAL-IP>:6443/"
    fi
  4. Start keepalived

    systemctl start keepalived
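
After a few seconds, verify that the virtual IP is bound on the MASTER node (assuming the ens192 interface from earlier):

ip addr show ens192 | grep <VIRTUAL-IP>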

Start the k8s cluster

Start the master0 node

  1. Generate the configuration file:

    • <private-ip>: the master node's IP address
    • <etcd0-ip>, <etcd1-ip>, <etcd2-ip>: the IP addresses of the etcd cluster
    • <podCIDR>: the pod CIDR, i.e. the k8s pod network. I chose flannel here, so it is set to 10.244.0.0/16. See the CNI network section of the official docs for details.
    • For flannel to work, net.bridge.bridge-nf-call-iptables must be set to 1 on every machine (done in the sysctl step above).

    cat >config.yaml <<EOF
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    api:
      advertiseAddress: <private-ip>
    etcd:
      endpoints:
      - https://<etcd0-ip-address>:2379
      - https://<etcd1-ip-address>:2379
      - https://<etcd2-ip-address>:2379
      caFile: /etc/kubernetes/pki/etcd/ca.pem
      certFile: /etc/kubernetes/pki/etcd/client.pem
      keyFile: /etc/kubernetes/pki/etcd/client-key.pem
    networking:
      podSubnet: <podCIDR>
    apiServerCertSANs:
    - <load-balancer-ip>
    apiServerExtraArgs:
      apiserver-count: "3"
    EOF
  2. Run kubeadm

    kubeadm init --config=config.yaml
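
When init succeeds, kubeadm prints a kubeadm join command; save it for joining the nodes later. To use kubectl on this master, copy the admin kubeconfig (the standard kubeadm post-init step, run as root):

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config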

Start the master1 and master2 nodes

  1. Copy the files just generated on master0 to the master1 and master2 machines

    scp root@<master0-ip-address>:/etc/kubernetes/pki/ca.crt /etc/kubernetes/pki
    scp root@<master0-ip-address>:/etc/kubernetes/pki/ca.key /etc/kubernetes/pki
    scp root@<master0-ip-address>:/etc/kubernetes/pki/sa.key /etc/kubernetes/pki
    scp root@<master0-ip-address>:/etc/kubernetes/pki/sa.pub /etc/kubernetes/pki
    scp root@<master0-ip-address>:/etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki
    scp root@<master0-ip-address>:/etc/kubernetes/pki/front-proxy-ca.key /etc/kubernetes/pki
    scp -r root@<master0-ip-address>:/etc/kubernetes/pki/etcd /etc/kubernetes/pki
  2. Repeat the master0 steps: generate config.yaml (using each master's own IP as <private-ip>) and run kubeadm init --config=config.yaml.

Install the CNI network

This must match the <podCIDR> set above. I chose Flannel, so run the command below.
See the official docs for details: Installing a pod network.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
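
Afterwards, verify that the flannel and kube-dns pods come up:

kubectl get pods --all-namespaces -o wide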

Join the node machines

Run a command of the following form on every node machine. The exact command is printed when kubeadm init finishes on the master; just copy and run it.
Here all nodes are joined under master0's management.

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

Once everything is done, run kubectl get nodes to check whether the cluster installation is complete.

Reposted from SegmentFault: "Kubernetes 1.9 HA Cluster Installation (Using kubeadm)".
