Kubernetes Single-Master Binary Deployment (Hands-On Example!)

What this post covers

1. The three official deployment methods
2. Kubernetes platform environment planning
3. Self-signed SSL certificates
4. Etcd database cluster deployment
5. Installing Docker on the Nodes
6. Flannel container cluster network deployment
7. Deploying the Master components
8. Deploying the Node components


The three official deployment methods

  • minikube

    Minikube is a tool that quickly runs a single-node Kubernetes locally, intended only for users trying out Kubernetes or doing day-to-day development. Deployment docs: https://kubernetes.io/docs/setup/minikube/

  • kubeadm

    Kubeadm is also a tool; it provides kubeadm init and kubeadm join for deploying a Kubernetes cluster quickly. Deployment docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

  • Binary packages

    Recommended: download the released binary packages from the official site and deploy each component by hand to assemble a Kubernetes cluster. Download: https://github.com/kubernetes/kubernetes/releases

Kubernetes platform environment planning

  • Single-master cluster architecture diagram (diagram omitted)

  • Multi-master cluster architecture diagram (diagram omitted)

Self-signed SSL certificates

Component        Certificates used
etcd             ca.pem, server.pem, server-key.pem
flannel          ca.pem, server.pem, server-key.pem
kube-apiserver   ca.pem, server.pem, server-key.pem
kubelet          ca.pem, ca-key.pem
kube-proxy       ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl          ca.pem, admin.pem, admin-key.pem
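
Once generated in the walkthrough below, any of these certificates can be sanity-checked with the cfssl-certinfo tool that gets installed later on; for example, run from the directory holding the .pem files:

#Print the subject, SANs, and validity period of a certificate
cfssl-certinfo -cert server.pem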

Etcd database cluster deployment

(diagram omitted)

About etcd

etcd is an open-source project started by the CoreOS team in June 2013; its goal is to build a highly available distributed key-value store. Internally, etcd uses the Raft protocol as its consensus algorithm, and it is implemented in Go.

  • As a service-discovery system, etcd has the following properties:

    Simple: easy to install and configure, with an HTTP API that is easy to work with
    Secure: supports SSL certificate verification
    Fast: according to the official benchmarks, a single instance sustains 2k+ reads per second
    Reliable: uses the Raft algorithm to keep distributed data available and consistent

The three pillars of etcd

  • A strongly consistent, highly available store for service directories.
    Built on the Raft algorithm, etcd is by nature exactly such a strongly consistent, highly available service store.

  • A mechanism for registering services and monitoring their health.
    Users can register services in etcd and attach a TTL to the registration key, refreshing it periodically as a heartbeat so the service's health can be monitored.

  • A mechanism for discovering and connecting to services.
    Services registered under a given etcd topic can be looked up under that topic. To guarantee connectivity, a proxy-mode etcd can be deployed on every service machine, so that everything that talks to the etcd cluster can reach each other. A minimal registration sketch follows below.
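
For illustration, here is a minimal sketch of TTL-based registration against the etcd v2 API used throughout this walkthrough; the /services/web key and the 30-second TTL are arbitrary examples, not part of the deployment:

#Register a service instance under a key that expires unless refreshed
/opt/etcd/bin/etcdctl set --ttl 30 /services/web/node1 '192.168.0.1:8080'
#Rerun the same command periodically as a heartbeat, then discover all instances under the topic
/opt/etcd/bin/etcdctl ls /services/web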

Etcd deployment

  • Binary package download address

    https://github.com/etcd-io/etcd/releases

  • Check the cluster health

/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.0.x:2379,https://192.168.0.x:2379,https://192.168.0.x:2379" \
cluster-health

Installing Docker on the Node

(diagram omitted)



Hands-on walkthrough

Environment

Host                          Software to install
master (192.168.142.129/24)   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node01 (192.168.142.130/24)   kubelet, kube-proxy, docker, flannel, etcd
node02 (192.168.142.131/24)   kubelet, kube-proxy, docker, flannel, etcd

Kubernetes binaries: download from the official releases page, https://github.com/kubernetes/kubernetes/releases

etcd binaries: download from https://github.com/etcd-io/etcd/releases

Copy the downloaded archives into the k8s directory that will be created on the CentOS 7 host below.

Resource bundle link:

https://pan.baidu.com/s/1QGvhsAVmv2SmbrWMGc3Bng

Extraction code: mlh4

I. Etcd database cluster deployment

1. Operations on the master

mkdir k8s
cd k8s/
mkdir etcd-cert
mv etcd-cert.sh etcd-cert
  • Write a script to download the official cfssl packages
vim cfssl.sh

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo

chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
  • Run the script to download the official cfssl packages
bash cfssl.sh
cfssl	generates certificates
cfssljson	turns JSON input into certificate files
cfssl-certinfo	displays certificate information
cd etcd-cert/
  • Define the CA certificate
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"     
        ]  
      } 
    }         
  }
}
EOF
  • Create the CA certificate signing request
cat > ca-csr.json <<EOF 
{   
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
  • Generate the CA certificate (produces ca-key.pem and ca.pem)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  • Specify the hosts for communication between the three etcd nodes
cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.142.129",
    "192.168.142.130",
    "192.168.142.131"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF
  • Generate the etcd server certificate (produces server-key.pem and server.pem)
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
  • Unpack the etcd binary package
tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
  • Create directories for config files, binaries, and certificates
mkdir /opt/etcd/{cfg,bin,ssl} -p    
mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
  • Copy the certificates
cp etcd-cert/*.pem /opt/etcd/ssl/
  • Start etcd; it blocks here waiting for the other nodes to join (run from the k8s directory)
bash etcd.sh etcd01 192.168.142.129 etcd02=https://192.168.142.130:2380,etcd03=https://192.168.142.131:2380
  • Open another session and you will see that the etcd process is running
ps -ef | grep etcd
  • Copy the certificates to the other nodes
scp -r /opt/etcd/ [email protected]:/opt/
scp -r /opt/etcd/ [email protected]:/opt/
  • Copy the systemd startup unit to the other nodes
scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/

2. Operations on node01

  • Edit the etcd config file
vim /opt/etcd/cfg/etcd
  • Change the name and the addresses
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.142.130:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.142.130:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.142.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.142.130:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.142.129:2380,etcd02=https://192.168.142.130:2380,etcd03=https://192.168.142.131:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
  • Start the service
systemctl start etcd
systemctl status etcd

3. Operations on node02

  • Edit the etcd config file
vim /opt/etcd/cfg/etcd
  • Change the name and the addresses
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.142.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.142.131:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.142.131:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.142.131:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.142.129:2380,etcd02=https://192.168.142.130:2380,etcd03=https://192.168.142.131:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
  • Start the service
systemctl start etcd
systemctl status etcd

4. Check the cluster health from the master (in the k8s/etcd-cert/ directory)

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379" cluster-health
member 3eae9a550e2e3ec is healthy: got healthy result from https://192.168.142.129:2379
member 26cd4dcf17bc5cbd is healthy: got healthy result from https://192.168.142.130:2379
member 2fcd2df8a9411750 is healthy: got healthy result from https://192.168.142.131:2379
cluster is healthy
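
The member list can be checked the same way; a quick variant of the command above:

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379" member list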


II. Installing Docker on the Nodes

#Install the dependency packages
yum install yum-utils device-mapper-persistent-data lvm2 -y

#Configure the Aliyun mirror repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

#Install Docker CE
yum install -y docker-ce

#Stop the firewall and put SELinux in permissive mode
systemctl stop firewalld.service
setenforce 0

#Start Docker and enable it at boot
systemctl start docker.service
systemctl enable docker.service

#Check the Docker processes
ps aux | grep docker

#Reload the systemd daemon
systemctl daemon-reload

#Restart the service
systemctl restart docker
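
As a quick confirmation, Docker should now report both a Client and a Server section (the exact versions will vary):

docker version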


III. Flannel container cluster network deployment

  • On the master, write the allocated subnet range into etcd for flannel to use (run in the k8s/etcd-cert/ directory)
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
  • Check the information that was written (expected output shown below)
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379" get /coreos.com/network/config
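
If the write succeeded, the get command prints back exactly the JSON stored above:

{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}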
  • Copy the flannel package to all the nodes (flannel only needs to be deployed on the nodes)
cd /root/k8s
scp flannel-v0.10.0-linux-amd64.tar.gz [email protected]:/root
scp flannel-v0.10.0-linux-amd64.tar.gz [email protected]:/root
  • Unpack on every node
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz 
  • Create the k8s working directory on the nodes
mkdir /opt/kubernetes/{cfg,bin,ssl} -p
mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/


vim flannel.sh

#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
  • Enable the flannel network
bash flannel.sh https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379
  • Configure Docker to use flannel
vim /usr/lib/systemd/system/docker.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
#Insert the following entry just before ExecStart (around line 14 of the unit file)
EnvironmentFile=/run/flannel/subnet.env
#and reference the $DOCKER_NETWORK_OPTIONS parameter in ExecStart
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

#Check the generated network information
cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.15.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
#Note: bip sets the container subnet assigned at startup
DOCKER_NETWORK_OPTIONS=" --bip=172.17.15.1/24 --ip-masq=false --mtu=1450"
  • Restart the docker service
systemctl daemon-reload
systemctl restart docker
  • Check the flannel network information
[root@localhost ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.56.1  netmask 255.255.255.0  broadcast 172.17.56.255
        ether 02:42:74:32:33:e3  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.142.130  netmask 255.255.255.0  broadcast 192.168.142.255
        inet6 fe80::8cb8:16f4:91a1:28d5  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:04:f1:1f  txqueuelen 1000 (Ethernet)
        RX packets 436817  bytes 153162687 (146.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 375079  bytes 47462997 (45.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.56.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::249c:c8ff:fec0:4baf  prefixlen 64  scopeid 0x20<link>
        ether 26:9c:c8:c0:4b:af  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 26 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 1915  bytes 117267 (114.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1915  bytes 117267 (114.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:61:63:f2  txqueuelen 1000 (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  • Run a centos:7 test container on each node (install net-tools inside for ifconfig) and ping the other node's docker0 subnet, proving that flannel provides the routing
docker run -it centos:7 /bin/bash

yum install net-tools -y
  • Check the flannel network information inside the container
[root@5f9a65565b53 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
        inet 172.17.56.2  netmask 255.255.255.0  broadcast 172.17.56.255
        ether 02:42:ac:11:38:02  txqueuelen 0  (Ethernet)
        RX packets 15632  bytes 13894772 (13.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7987  bytes 435819 (425.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  • Ping the centos:7 container on the other node to confirm cross-node connectivity
[root@f1e937618b50 /]# ping 172.17.15.2
PING 172.17.15.2 (172.17.15.2) 56(84) bytes of data.
64 bytes from 172.17.15.2: icmp_seq=1 ttl=62 time=0.420 ms
64 bytes from 172.17.15.2: icmp_seq=2 ttl=62 time=0.302 ms
64 bytes from 172.17.15.2: icmp_seq=3 ttl=62 time=0.420 ms
64 bytes from 172.17.15.2: icmp_seq=4 ttl=62 time=0.364 ms
64 bytes from 172.17.15.2: icmp_seq=5 ttl=62 time=0.114 ms

IV. Deploying the Master components

1. Self-sign the APIServer certificates

  • Set up the apiserver certificate directory
cd k8s/
unzip master.zip
mkdir /opt/kubernetes/{cfg,bin,ssl} -p
#Directory for the apiserver self-signed certificates
mkdir apiserver
cd apiserver/
  • Create the CA certificate
#Define the CA and generate the CA config file
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

#Generate the CA certificate signing request file
cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
         "algo": "rsa",
         "size": 2048
    },
    "names": [
       {
              "C": "CN",
              "L": "Beijing",
              "ST": "Beijing",
              "O": "k8s",
              "OU": "System"
       }
    ]
}
EOF

#Sign the certificate (produces ca.pem and ca-key.pem)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  • Create the apiserver communication certificate
#Define the apiserver certificate and generate its config file
#Hosts: 192.168.142.129 is master1; 192.168.142.120 is master2 (added later for a dual-master setup);
#192.168.142.20 is the VIP; 192.168.142.140 is the nginx load balancer (master); 192.168.142.150 is the LB backup.
#Note: JSON does not allow inline comments, so the host list itself must stay comment-free.
cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.142.129",
      "192.168.142.120",
      "192.168.142.20",
      "192.168.142.140",
      "192.168.142.150",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

#Sign the certificate (produces server.pem and server-key.pem)
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes server-csr.json | cfssljson -bare server
  • Create the admin certificate
#Define the admin certificate
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

#Sign the certificate (produces admin.pem and admin-key.pem)
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin
  • Create the kube-proxy certificate
#Define the kube-proxy certificate
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

#Sign the certificate (produces kube-proxy.pem and kube-proxy-key.pem)
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
  • Alternatively, run the bundled script that performs all of the steps above, then copy the certificates into place
bash k8s-cert.sh
cp -p *.pem /opt/kubernetes/ssl/
  • Copy the control-plane binaries into place
cd ..
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp -p kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
  • Create the token file
cd /opt/kubernetes/cfg
#Generate a random bootstrap token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
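
The resulting file maps the token to the kubelet-bootstrap user; the token value below is illustrative only, since each run generates a different random string:

cat token.csv
7f42570ed68b2b8a83e4fd1cd3353a12,kubelet-bootstrap,10001,"system:kubelet-bootstrap"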
  • Create the apiserver startup script
vim apiserver.sh

#!/bin/bash

MASTER_ADDRESS=$1
ETCD_SERVERS=$2

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Run the apiserver startup script
bash apiserver.sh 192.168.142.129 https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379
  • Inspect the generated apiserver config file
cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379 \
--bind-address=192.168.142.129 \
--secure-port=6443 \
--advertise-address=192.168.142.129 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
  • Start the apiserver service
systemctl daemon-reload
systemctl start kube-apiserver
systemctl status kube-apiserver
systemctl enable kube-apiserver
#Check the process and port status
ps aux | grep kube   
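
The secure port should also be listening now; a quick check using the same netstat approach as elsewhere in this walkthrough:

netstat -atnp | grep 6443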

2. Deploy the Controller-Manager service

  • Copy the control binary
cd /k8s/kubernetes/server/bin

#Copy the binary into place
cp -p kube-controller-manager /opt/kubernetes/bin/
  • Write the kube-controller-manager config file
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
EOF
  • Write the kube-controller-manager systemd unit
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Make the unit executable and start the service
chmod +x /usr/lib/systemd/system/kube-controller-manager.service
systemctl start kube-controller-manager
systemctl status kube-controller-manager
systemctl enable kube-controller-manager
#Check the listening ports
netstat -atnp | grep kube-controll

3. Deploy the Scheduler service

  • Copy the control binary
cd /k8s/kubernetes/server/bin

#Copy the binary into place
cp -p kube-scheduler /opt/kubernetes/bin/
  • Write the config file
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
EOF
  • Write the systemd unit
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Start the service
chmod +x /usr/lib/systemd/system/kube-scheduler.service
systemctl daemon-reload
systemctl start kube-scheduler
systemctl status kube-scheduler
systemctl enable kube-scheduler
#Check the listening ports
netstat -atnp | grep schedule
  • Check the status of the master components
/opt/kubernetes/bin/kubectl get cs
#A successful deployment shows every component Healthy
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

4. Deploy kubelet & kube-proxy

  • Push the binaries to the nodes
cd kubernetes/server/bin

#Push kubelet and kube-proxy to both nodes
scp -p kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
scp -p kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
  • Create bootstrap.kubeconfig
cd /root/k8s/kubernetes/

#Point at the apiserver, i.e. this host itself (kube-apiserver must already be running)
export KUBE_APISERVER="https://192.168.142.129:6443"

#Set the cluster parameters
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/k8s/kubeconfig/bootstrap.kubeconfig

#Set the client credentials
/opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=/k8s/kubeconfig/bootstrap.kubeconfig

#Set the context parameters
/opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=/k8s/kubeconfig/bootstrap.kubeconfig

#Switch to the default context
/opt/kubernetes/bin/kubectl config use-context default \
--kubeconfig=/k8s/kubeconfig/bootstrap.kubeconfig
  • Create the kube-proxy kubeconfig file
#Set the cluster parameters
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=/opt/etcd/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/k8s/kubeconfig/kube-proxy.kubeconfig

#Set the client credentials
/opt/kubernetes/bin/kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/k8s/kubeconfig/kube-proxy.kubeconfig

#Set the context parameters
/opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/k8s/kubeconfig/kube-proxy.kubeconfig

#Switch to the default context
/opt/kubernetes/bin/kubectl config use-context default \
--kubeconfig=/k8s/kubeconfig/kube-proxy.kubeconfig
  • Push the kubeconfig files to the nodes
scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/
  • Add kubectl to the PATH
echo "export PATH=\$PATH:/opt/kubernetes/bin/" >> /etc/profile
source /etc/profile
  • Create the bootstrap RBAC role binding
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
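
As a sanity check, the new binding can be listed; the NAME column should show kubelet-bootstrap:

kubectl get clusterrolebinding kubelet-bootstrap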

V. Deploying the Node components

1. Install the kubelet

  • Create the kubelet config files (NODE_ADDRESS and DNS_SERVER_IP were parameters of the original kubelet.sh script, so set them for this node first)
#Assumed values: node01 shown, use 192.168.142.131 on node02; 10.0.0.2 is the cluster DNS address used by convention in this setup
NODE_ADDRESS=192.168.142.130
DNS_SERVER_IP=10.0.0.2

cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF


cat <<EOF >/opt/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP} 
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
  • Create the kubelet systemd unit
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
  • Start the service
chmod +x /usr/lib/systemd/system/kubelet.service
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
  • Verify on the master
#Check the certificate signing requests
kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCXlDGHptj7FqTa8A   55s     kubelet-bootstrap   Approved,Issued
  • Approve the request and issue the certificate
kubectl certificate approve node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCXlDGHptj7FqTa8A
  • Check the cluster nodes
kubectl get nodes
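
Once the CSR is approved and the kubelet finishes registering, the node shows up Ready; illustrative output only (the exact AGE and VERSION depend on the binaries used):

NAME              STATUS   ROLES    AGE   VERSION
192.168.142.130   Ready    <none>   1m    v1.12.3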

2. Install kube-proxy

  • Create the kube-proxy config file
cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=192.168.142.130 \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
  • Create the kube-proxy systemd unit
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Start the service
chmod +x /usr/lib/systemd/system/kube-proxy.service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
#Check the listening ports
netstat -atnp | grep proxy
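
Since kube-proxy runs in ipvs mode here, the programmed virtual servers can also be inspected, assuming the ipvsadm tool is installed (yum install ipvsadm -y):

#List the IPVS virtual servers and their real-server backends
ipvsadm -Ln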

Thanks for reading!
