Installing a Kubernetes 1.20 Cluster from Binaries (Part 2)

Basic environment planning

Lab environment plan:
OS: CentOS 7.6
VM configuration: 4 GiB RAM / 4 vCPU / 100 GB disk
Network: NAT
Enable CPU virtualization for the VMs
Cluster environment:
Version: Kubernetes 1.20
Pod CIDR: 10.0.0.0/16
Service CIDR: 10.255.0.0/16

K8s cluster role   IP             Hostname      Installed components
Master1            192.168.7.10   k8s-master1   apiserver, controller-manager, scheduler, etcd, docker
Master2            192.168.7.11   k8s-master2   apiserver, controller-manager, scheduler, etcd, docker
Master3            192.168.7.12   k8s-master3   apiserver, controller-manager, scheduler, etcd, docker
Node1              192.168.7.13   k8s-node1     kubelet, kube-proxy, docker, calico, coredns
Node2              192.168.7.16   k8s-node2     kubelet, kube-proxy, docker, calico, coredns
VIP                192.168.7.50   VIP for the control-plane nodes

Differences between kubeadm and binary installation

Use-case analysis
kubeadm is the official open-source tool for standing up a Kubernetes cluster quickly, and it is currently the most convenient and recommended approach. The two commands kubeadm init and kubeadm join are enough to create a cluster. With a kubeadm install, all control-plane components run as pods, so they can recover from failures on their own.
kubeadm is essentially a scripted, automated deployment: it hides a great deal of detail, so you interact very little with the individual components. If you do not understand the k8s architecture and components well, problems become hard to troubleshoot.
kubeadm suits teams that deploy k8s frequently or have strong automation requirements.

Binary install: download the binary packages of each component from the official site and install them by hand, which also gives you a more complete understanding of Kubernetes.
Both kubeadm and binary installs are suitable for production and run stably there; evaluate your actual project to decide between them.

1. Installing the k8s cluster environment

1.1 Initialize the environment (all nodes)

1.1.1 Configure hostnames

Set the hostname on every host.
On 192.168.7.10, run:
hostnamectl set-hostname k8s-master1 && bash
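Repeat on the remaining hosts with the names from the planning table above (a sketch; adjust if your IPs differ):
On 192.168.7.11: hostnamectl set-hostname k8s-master2 && bash
On 192.168.7.12: hostnamectl set-hostname k8s-master3 && bash
On 192.168.7.13: hostnamectl set-hostname k8s-node1 && bash
On 192.168.7.16: hostnamectl set-hostname k8s-node2 && bash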

1.1.2 Configure the hosts file

Add the following lines to /etc/hosts on every machine:

[root@k8s-master1 ~]# vim /etc/hosts
#k8s
192.168.7.10 k8s-master1 master1
192.168.7.11 k8s-master2 master2
192.168.7.12 k8s-master3 master3
192.168.7.13 k8s-node1 node1
192.168.7.16 k8s-node2 node2

1.1.3 Configure passwordless SSH between hosts

Generate an SSH key pair:
[root@k8s-master1 ~]# ssh-keygen -t rsa
Press Enter through every prompt and leave the passphrase empty.
Install the local public key into the matching account on the remote hosts:
[root@k8s-master1 ~]# ssh-copy-id -i .ssh/id_rsa.pub k8s-master1
[root@k8s-master1 ~]# ssh-copy-id -i .ssh/id_rsa.pub k8s-node1
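To cover the remaining hosts from the plan, the same command can be repeated in a loop (a sketch, assuming the hostnames above resolve):
[root@k8s-master1 ~]# for h in k8s-master2 k8s-master3 k8s-node2; do ssh-copy-id -i .ssh/id_rsa.pub $h; done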

1.1.4 Disable the firewalld firewall

[root@k8s-master1 ~]#  systemctl stop firewalld ; systemctl disable firewalld

1.1.5 Disable SELinux

[root@k8s-master1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
#After changing the SELinux config file, reboot the machine for the change to take effect permanently
[root@k8s-master1 ~]# getenforce
Disabled

1.1.6 Disable the swap partition

[root@k8s-master1 ~]# swapoff -a
[root@k8s-node1 ~]# swapoff -a
Permanently disable it: comment out the swap line at the start of its entry in /etc/fstab
#Remove the swap UUID entry if one is present
[root@k8s-master1 ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap      swap    defaults        0 0

1.1.7 Tune kernel parameters

[root@k8s-master1 ~]#  modprobe br_netfilter
[root@k8s-master1 ~]#  echo "modprobe br_netfilter" >> /etc/profile
[root@k8s-master1~]#  cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@k8s-master1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
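To confirm the parameters took effect, read them back (a quick check; the output lines below are what you should see):
[root@k8s-master1 ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1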

1.1.8 Configure Aliyun repo sources

Install the lrzsz, scp (openssh-clients) and yum-utils commands
[root@k8s-master1 ~]# yum install lrzsz openssh-clients yum-utils -y
Back up the stock repo files
[root@k8s-master1 ~]# mkdir -p /etc/yum.repos.d/repo.bak && cd /etc/yum.repos.d/ && mv *.repo /etc/yum.repos.d/repo.bak
#Download the Aliyun base repo, for example:
[root@k8s-master1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

#Configure the Aliyun docker repo
[root@k8s-master1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#Configure the epel repo
Upload epel.repo to /etc/yum.repos.d on k8s-master1, then copy it over to the k8s-node1 node:
[root@k8s-master1 ~]# scp /etc/yum.repos.d/epel.repo k8s-node1:/etc/yum.repos.d/

1.1.9 Configure time synchronization

[root@k8s-master1 ~]# yum install ntpdate -y
[root@k8s-master1 ~]#  ntpdate cn.pool.ntp.org
#Make time sync a cron job (hourly)
[root@k8s-master1 ~]# crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
#Restart the crond service
[root@k8s-master1 ~]# service crond restart
[root@k8s-node1 ~]# yum install ntpdate -y
[root@k8s-node1 ~]# ntpdate cn.pool.ntp.org
#Make time sync a cron job (hourly)
[root@k8s-node1 ~]# crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
#Restart the crond service
[root@k8s-node1 ~]# service crond restart

1.1.10 Install iptables

If firewalld is not to your taste, you can install iptables instead
[root@k8s-master1 ~]# yum install iptables-services -y
Stop and disable iptables
[root@k8s-master1 ~]# service iptables stop && systemctl disable iptables
Flush the firewall rules
[root@k8s-master1 ~]# iptables -F

1.1.11 Enable IPVS

Without IPVS, kube-proxy falls back to iptables for packet forwarding, which is less efficient, so enabling IPVS is the recommended setup.
Upload ipvs.modules to the /etc/sysconfig/modules/ directory on k8s-master1. A sketch of the file's contents follows.
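The contents of ipvs.modules are not shown in the original; a minimal version typically looks like this (an assumption; adjust the module list to your kernel):
#!/bin/bash
# Load the IPVS scheduler modules plus connection tracking at boot
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs_dh ip_vs_sed ip_vs_nq ip_vs_ftp nf_conntrack_ipv4; do
  /sbin/modprobe -- $mod
done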

[root@k8s-master1 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26583  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
[root@k8s-master1 ~]# scp /etc/sysconfig/modules/ipvs.modules k8s-node1:/etc/sysconfig/modules/
[root@k8s-node1 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26583  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0

1.1.12 Install base packages

[root@k8s-master1]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack ntpdate

2. Installing the Docker environment

2.1 Install docker-ce

Note: with a binary install, the master (control-plane) nodes do not strictly need Docker, because pods are not scheduled onto the control-plane nodes.
With a kubeadm install, both control-plane and worker nodes need Docker, since kubeadm runs the control-plane components as pods, and any pod needs a container engine to start its containers.
Here I install Docker on all nodes anyway.

[root@k8s-master1~]#  yum install docker-ce docker-ce-cli containerd.io -y 
[root@k8s-master1~]# systemctl start docker && systemctl enable docker.service

2.2 Configure a Docker registry mirror

[root@k8s-master1~]# tee /etc/docker/daemon.json << 'EOF'
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
} 
EOF
[root@k8s-master1~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
   Active: active (running) since Sun 2021-09-19 19:34:09 CST; 9ms ago
#Set Docker's cgroup driver to systemd (the default is cgroupfs); kubelet defaults to systemd and the two must match.
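A quick way to verify Docker picked up the systemd driver:
[root@k8s-master1 ~]# docker info | grep -i 'cgroup driver'
 Cgroup Driver: systemd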

3. Building the etcd cluster

3.1 Configure the etcd working directory

Create directories for the config file and the certificates

[root@k8s-master1 ~]# mkdir -p /etc/etcd/ssl

3.2 Install the certificate-signing tool cfssl

[root@k8s-master1 ~]# mkdir /root/k8s-hxu
#Upload cfssl-certinfo_linux-amd64, cfssljson_linux-amd64 and cfssl_linux-amd64 to this directory
[root@k8s-master1 k8s-hxu]# ls
cfssl-certinfo_linux-amd64  cfssljson_linux-amd64  cfssl_linux-amd64
#Make the files executable
[root@k8s-master1 k8s-hxu]# chmod +x *
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

3.3 Configure the CA certificate

#Create the CA certificate signing request (CSR) file

[root@k8s-master1 ]# cd /root/k8s-hxu/
[root@k8s-master1 k8s-hxu]# vim ca-csr.json 
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "bj",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
          "expiry": "8760h"
  }
}
#Create the CA config JSON file
[root@k8s-master1 k8s-hxu]# vim ca-config.json 
{
  "signing": {
      "default": {
          "expiry": "8760h"
        },
      "profiles": {
          "kubernetes": {
              "usages": [
                  "signing",
                  "key encipherment",
                  "server auth",
                  "client auth"
              ],
              "expiry": "8760h"
          }
      }
  }
}
[root@k8s-master1 k8s-hxu]# cfssl gencert -initca ca-csr.json  | cfssljson -bare ca
Note: CN (Common Name) is the field kube-apiserver extracts from a certificate as the requesting user name; browsers use it to validate whether a site is legitimate. O (Organization) is the field kube-apiserver extracts as the group the requesting user belongs to.

The CA certificate files are now generated; next, create the certificates for etcd.
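If you want to see the CN and O fields described above, the cfssl-certinfo tool installed earlier can decode the generated certificate (a quick check):
[root@k8s-master1 k8s-hxu]# cfssl-certinfo -cert ca.pem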

3.4 Generate the etcd certificates

Create the etcd CSR; change the IPs under hosts to your own master-node IPs.

[root@k8s-master1 k8s-hxu]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.7.10",
    "192.168.7.11",
    "192.168.7.12",
    "192.168.7.50"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Beijing",
    "L": "bj",
    "O": "k8s",
    "OU": "system"
  }]
}
#The .10/.11/.12 entries are the cluster-internal IPs of all etcd nodes (you may reserve a few extras for future scaling); 192.168.7.50 is the etcd VIP. Note that JSON does not allow comments, so keep remarks like this out of the actual file.

[root@k8s-master1 k8s-hxu]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd
# This produces the three etcd certificate files
[root@k8s-master1 k8s-hxu]# ll
-rw-r--r-- 1 root root 1.1K Sep 19 21:17 etcd.csr
-rw------- 1 root root 1.7K Sep 19 21:17 etcd-key.pem
-rw-r--r-- 1 root root 1.4K Sep 19 21:17 etcd.pem

3.5 Deploy the etcd cluster

Upload etcd-v3.4.13-linux-amd64.tar.gz to /root/k8s-hxu on the k8s-master1 node

[root@k8s-master1 k8s-hxu]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
[root@k8s-master1 k8s-hxu]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
Copy the two binaries (etcd and etcdctl) to /usr/local/bin/ on master2 and master3
[root@k8s-master1 etcd-v3.4.13-linux-amd64]# scp -p /usr/local/bin/etcd* master2:/usr/local/bin/
[root@k8s-master1 etcd-v3.4.13-linux-amd64]# scp -p /usr/local/bin/etcd* master3:/usr/local/bin/
#Create the config file
[root@k8s-master1 k8s-hxu]# cd /root/k8s-hxu/
[root@k8s-master1 k8s-hxu]# vim etcd.conf 
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.7.10:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.7.10:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.7.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.7.10:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.7.10:2380,etcd2=https://192.168.7.11:2380,etcd3=https://192.168.7.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#Notes:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: cluster (peer) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
ETCD_ADVERTISE_CLIENT_URLS: client advertise address
ETCD_INITIAL_CLUSTER: cluster member addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join one that already exists

#Create the systemd service file
[root@k8s-master1 k8s-hxu]# vim etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
# Now copy the etcd certificates into the directories referenced by the service file above
[root@k8s-master1 k8s-hxu]# ll /etc/etcd/ssl/
total 0
[root@k8s-master1 k8s-hxu]# cp ca*.pem /etc/etcd/ssl/
[root@k8s-master1 k8s-hxu]# cp etcd*.pem  /etc/etcd/ssl/
[root@k8s-master1 k8s-hxu]# ll /etc/etcd/ssl/
-rw------- 1 root root 1.7K Sep 19 21:39 ca-key.pem
-rw-r--r-- 1 root root 1.4K Sep 19 21:39 ca.pem
-rw------- 1 root root 1.7K Sep 19 21:39 etcd-key.pem
-rw-r--r-- 1 root root 1.4K Sep 19 21:39 etcd.pem
[root@k8s-master1 k8s-hxu]# cp etcd.conf /etc/etcd/
[root@k8s-master1 k8s-hxu]# ll /etc/etcd/
-rw-r--r-- 1 root root 519 Sep 19 21:40 etcd.conf
drwxr-xr-x 2 root root  74 Sep 19 21:39 ssl
[root@k8s-master1 k8s-hxu]# cp etcd.service /usr/lib/systemd/system/
# Sync the files to master2 and master3
for i in k8s-master2 k8s-master3;do rsync -vaz etcd.conf $i:/etc/etcd/;done
for i in k8s-master2 k8s-master3;do rsync -vaz etcd*.pem ca*.pem $i:/etc/etcd/ssl/;done
for i in k8s-master2 k8s-master3;do rsync -vaz etcd.service $i:/usr/lib/systemd/system/;done

[root@k8s-master1 k8s-hxu]# mkdir -p /var/lib/etcd/default.etcd
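Note that the etcd.conf just synced still carries k8s-master1's values: on each of the other masters, edit /etc/etcd/etcd.conf to use the local name and addresses before starting, and create the data directory there as well. For k8s-master2 the changed lines would be (a sketch following the same pattern; use .12 and etcd3 on master3):
ETCD_NAME="etcd2"
ETCD_LISTEN_PEER_URLS="https://192.168.7.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.7.11:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.7.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.7.11:2379"
and on both of them: mkdir -p /var/lib/etcd/default.etcd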

Start the etcd cluster

[root@k8s-master1 k8s-hxu]# systemctl daemon-reload && systemctl enable etcd.service && systemctl start etcd.service && systemctl status etcd
Active: active (running) since Sun 2021-09-19 21:53:59 CST; 5ms ago

#Note: start etcd on k8s-master1 first; it will sit there, apparently hung, during startup. Then start etcd on k8s-master2 (and k8s-master3), after which the k8s-master1 instance comes up normally.

#Check the etcd cluster
[root@k8s-master1 k8s-hxu]# export ETCDCTL_API=3
[root@k8s-master1 k8s-hxu]# /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.7.10:2379,https://192.168.7.11:2379,https://192.168.7.12:2379  endpoint health
+---------------------------+--------+------------+-------+
|         ENDPOINT          | HEALTH |    TOOK    | ERROR |
+---------------------------+--------+------------+-------+
| https://192.168.7.12:2379 |   true | 7.823002ms |       |
| https://192.168.7.10:2379 |   true | 8.056305ms |       |
| https://192.168.7.11:2379 |   true | 20.08422ms |       |
+---------------------------+--------+------------+-------+
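Beyond endpoint health, etcdctl can also show which member is currently the leader (same TLS flags; the IS LEADER column of the table marks it):
[root@k8s-master1 k8s-hxu]# /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.7.10:2379,https://192.168.7.11:2379,https://192.168.7.12:2379 endpoint status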

4. Installing the Kubernetes components

The binary packages live on GitHub:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/

4.1 Download the packages

Upload kubernetes-server-linux-amd64.tar.gz to /root/k8s-hxu on k8s-master1

[root@k8s-master1 k8s-hxu]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master1 k8s-hxu]# cd kubernetes/server/bin/
[root@k8s-master1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
[root@k8s-master1 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master2:/usr/local/bin/
[root@k8s-master1 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master3:/usr/local/bin/
[root@k8s-master1 bin]# scp kubelet kube-proxy k8s-node1:/usr/local/bin/
[root@k8s-master1 bin]# cd /root/k8s-hxu/
[root@k8s-master1 k8s-hxu]# mkdir -p /etc/kubernetes/ssl && mkdir /var/log/kubernetes


4.2 Deploy the kube-apiserver component

Enable the TLS Bootstrapping mechanism

Once the apiserver enables TLS authentication, the kubelet on every node must present a valid certificate signed by the apiserver's CA to talk to it. With many nodes, issuing these client certificates is a lot of work and complicates scaling the cluster.

To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: kubelet starts as a low-privilege user, requests a certificate from the apiserver, and the apiserver signs the kubelet's certificate dynamically.

Bootstrap programs exist in many systems (Linux has its own bootstrap); a bootstrap is generally a preconfigured step loaded at power-on or system start that sets up a given environment. The Kubernetes kubelet can likewise load a configuration file at startup, with contents similar to:

apiVersion: v1
clusters: null
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user: {}

#How TLS bootstrapping actually works

  1. Role of TLS
    TLS encrypts the communication to defend against eavesdropping; moreover, if a certificate is not trusted, no connection to the apiserver can be established at all, let alone used to request specific content.

  2. Role of RBAC
    Once TLS has solved the communication problem, authorization is handled by RBAC (other models such as ABAC can also be used). RBAC states which APIs a user or group (subject) is allowed to call; combined with TLS, the apiserver reads the client certificate's CN field as the user name and the O field as the group.

This tells us two things: first, to talk to the apiserver you must present a certificate signed by the apiserver's CA, which establishes trust and the TLS connection; second, the certificate's CN and O fields supply the user and group that RBAC needs.

#kubelet first-start flow
TLS bootstrapping lets the kubelet request its own certificate from the apiserver and then use it to connect. So how does it connect the very first time, before it has a certificate?

The apiserver configuration points at a token.csv file containing a preset user; that user's token, together with the apiserver CA, is written into the bootstrap.kubeconfig file used by kubelet. On the first request, kubelet uses bootstrap.kubeconfig to establish a TLS session with the apiserver (trusting the CA that signs the certificates) and presents the preset user's token to declare its RBAC identity.

On first start, kubelet may report a 401 Unauthorized error against the apiserver. This is because by default kubelet declares itself via the preset token in bootstrap.kubeconfig and then tries to create a CSR, but unless we intervene that preset user has no permissions at all, including permission to create CSRs. You therefore need to create a ClusterRoleBinding that binds the preset user kubelet-bootstrap to the built-in ClusterRole system:node-bootstrapper so it can submit CSR requests. This is demonstrated later when kubelet is installed.

#Create the token.csv file

[root@k8s-master1 k8s-hxu]# pwd
/root/k8s-hxu
[root@k8s-master1 k8s-hxu]# cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

This generates a file with the following content
#Format: token,user name,UID,group
[root@k8s-master1 k8s-hxu]# cat token.csv
0758e774e990b936c217d2f6f8f676d4,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

#Create the CSR file; replace the IPs with your own machines'

[root@k8s-master1 k8s-hxu]# vim kube-apiserver-csr.json 
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.7.10",
    "192.168.7.11",
    "192.168.7.12",
    "192.168.7.13",
    "192.168.7.50",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "bj",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

#Note: if the hosts field is non-empty, it must list every IP or domain name authorized to use the certificate. Since this certificate will be used by the whole Kubernetes master cluster, fill in all master-node IPs plus the first IP of the service network (generally the first address of the range passed to kube-apiserver via --service-cluster-ip-range, here 10.255.0.1).

#Generate the certificate

[root@k8s-master1 k8s-hxu]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
[root@k8s-master1 k8s-hxu]# ll -t
-rw-r--r-- 1 root      root      1.3K Sep 20 11:09 kube-apiserver.csr
-rw------- 1 root      root      1.7K Sep 20 11:09 kube-apiserver-key.pem
-rw-r--r-- 1 root      root      1.6K Sep 20 11:09 kube-apiserver.pem

#Create the kube-apiserver config file; substitute your own IPs

[root@k8s-master1 k8s-hxu]# vim kube-apiserver.conf 
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.7.10 \
  --secure-port=6443 \
  --advertise-address=192.168.7.10 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.7.10:2379,https://192.168.7.11:2379,https://192.168.7.12:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"

# Notes:
--logtostderr: log to standard error
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: https secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual-IP range
--enable-admission-plugins: admission-control plugins
--authorization-mode: authorization modes; enables RBAC and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificates for apiserver access to kubelet
--tls-xxx-file: apiserver https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

#Create the systemd service file

[root@k8s-master1 k8s-hxu]# vim kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@k8s-master1 k8s-hxu]# cp ca*.pem /etc/kubernetes/ssl/
[root@k8s-master1 k8s-hxu]# ll /etc/kubernetes/ssl/
-rw------- 1 root root 1.7K Sep 20 11:22 ca-key.pem
-rw-r--r-- 1 root root 1.4K Sep 20 11:22 ca.pem
[root@k8s-master1 k8s-hxu]# cp kube-apiserver*.pem /etc/kubernetes/ssl/
[root@k8s-master1 k8s-hxu]# ll /etc/kubernetes/ssl/
-rw------- 1 root root 1.7K Sep 20 11:22 ca-key.pem
-rw-r--r-- 1 root root 1.4K Sep 20 11:22 ca.pem
-rw------- 1 root root 1.7K Sep 20 11:22 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1.6K Sep 20 11:22 kube-apiserver.pem
[root@k8s-master1 k8s-hxu]# cp token.csv /etc/kubernetes/
[root@k8s-master1 k8s-hxu]# cp kube-apiserver.conf /etc/kubernetes/
[root@k8s-master1 k8s-hxu]# cp kube-apiserver.service /usr/lib/systemd/system/
# Copy the generated files to the matching directories on the other two master nodes (do not miss any)
rsync -vaz token.csv k8s-master2:/etc/kubernetes/
rsync -vaz token.csv k8s-master3:/etc/kubernetes/
rsync -vaz kube-apiserver*.pem k8s-master2:/etc/kubernetes/ssl/
rsync -vaz kube-apiserver*.pem k8s-master3:/etc/kubernetes/ssl/
rsync -vaz ca*.pem k8s-master2:/etc/kubernetes/ssl/
rsync -vaz ca*.pem k8s-master3:/etc/kubernetes/ssl/
rsync -vaz kube-apiserver.conf k8s-master2:/etc/kubernetes/
rsync -vaz kube-apiserver.conf k8s-master3:/etc/kubernetes/
rsync -vaz kube-apiserver.service k8s-master2:/usr/lib/systemd/system/
rsync -vaz kube-apiserver.service k8s-master3:/usr/lib/systemd/system/
# Note: in kube-apiserver.conf on k8s-master2 and k8s-master3, change the IP addresses to each machine's own; only these two fields need editing (values shown for master3; use 192.168.7.11 on master2):
--bind-address=192.168.7.12
--advertise-address=192.168.7.12
Reload systemd and start the service on every master:
# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver && systemctl status kube-apiserver

Check the status; seeing running below means the apiserver started correctly.

Send a test request with curl

[root@k8s-master1 k8s-hxu]#  curl --insecure https://192.168.7.10:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}

The 401 above is the expected state: the request has not been authenticated yet.

4.3 Deploy the kubectl component

kubectl is the client tool for operating on k8s resources: create, delete, update, query, and so on.
How does kubectl know which cluster to connect to? It needs a file such as /etc/kubernetes/admin.conf; kubectl reads that file's configuration to access k8s resources. /etc/kubernetes/admin.conf records which cluster to access and which certificates to use.

One option is to set the KUBECONFIG environment variable:
[root@k8s-master1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl will then load KUBECONFIG automatically to decide which cluster's k8s resources to manage.

Another option, the one kubeadm prints after initializing a cluster:
[root@k8s-master1 ~]# cp /etc/kubernetes/admin.conf /root/.kube/config
kubectl will then load the /root/.kube/config file to operate on k8s resources.

If KUBECONFIG is set, it is consulted first; without the KUBECONFIG variable, kubectl falls back to the /root/.kube/config file to decide which cluster's resources to manage.
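A quick way to see which kubeconfig kubectl ends up using (a sketch; assumes the files discussed above exist):
[root@k8s-master1 ~]# kubectl config current-context                                         # falls back to $HOME/.kube/config
[root@k8s-master1 ~]# KUBECONFIG=/etc/kubernetes/admin.conf kubectl config current-context   # the variable takes precedence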

Create the CSR file

[root@k8s-master1 k8s-hxu]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "bj",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}

#Notes: kube-apiserver later uses RBAC to authorize requests from clients (kubelet, kube-proxy, pods). It predefines some RoleBindings for RBAC; for example, cluster-admin binds the group system:masters to the role cluster-admin, which grants permission to call every kube-apiserver API. Here O sets the certificate's group to system:masters; when this certificate is used against kube-apiserver, authentication succeeds because the CA signed it, and because the group is the pre-authorized system:masters, it is granted access to all APIs. This admin certificate is what the administrator's kubeconfig file will be generated from later. RBAC is the recommended way to control roles and permissions in Kubernetes, and Kubernetes takes the certificate's CN field as the User and the O field as the Group. "O": "system:masters" must be exactly that, otherwise the later kubectl create clusterrolebinding step fails.

[root@k8s-master1 k8s-hxu]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2021/09/20 14:06:01 [INFO] generate received request
2021/09/20 14:06:01 [INFO] received CSR
2021/09/20 14:06:01 [INFO] generating key: rsa-2048
2021/09/20 14:06:02 [INFO] encoded CSR
2021/09/20 14:06:02 [INFO] signed certificate with serial number 345050993241817661147076199353241438925566273976
2021/09/20 14:06:02 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master1 k8s-hxu]# ll -t
total 320M
-rw-r--r-- 1 root      root      1001 Sep 20 14:06 admin.csr
-rw------- 1 root      root      1.7K Sep 20 14:06 admin-key.pem
-rw-r--r-- 1 root      root      1.4K Sep 20 14:06 admin.pem
[root@k8s-master1 k8s-hxu]# cp admin*.pem /etc/kubernetes/ssl/

#Create the kubeconfig file (this one matters)
The kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver: its address, the CA certificate, and the client's own certificate. (If a step reports that the kubeconfig path cannot be found, copy the file to the expected path manually; otherwise ignore this.)
1. Set the cluster parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.7.10:6443 --kubeconfig=kube.config

2. Set the client credentials

[root@k8s-master1 k8s-hxu]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config


3. Set the context parameters
At this point the contexts field in kube.config is still null. Setting a context ties the admin user to the kubernetes cluster; once a context exists, we also set the current context (the current-context field) so kubectl knows which user-to-cluster pairing to use.

[root@k8s-master1 k8s-hxu]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Context "kubernetes" created.
[root@k8s-master1 k8s-hxu]# cat kube.config
After this, the contexts field in the file is populated:
............
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes

4. Set the default context; the admin user can now access kubernetes

[root@k8s-master1 k8s-hxu]# kubectl config use-context kubernetes --kubeconfig=kube.config
Switched to context "kubernetes".
[root@k8s-master1 k8s-hxu]# cat kube.config
...
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes

When we run kubectl we need it to load this kube.config file. How? Do the following:

[root@k8s-master1 k8s-hxu]# mkdir ~/.kube -p
#If you run kubectl without copying the config file, you get this error:
[root@k8s-master1 k8s-hxu]# kubectl get node
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Copy the file and run it again
[root@k8s-master1 k8s-hxu]# cp kube.config ~/.kube/config
[root@k8s-master1 k8s-hxu]# kubectl get node
No resources found

We can now read resources, but further permissions are still missing; grant them below.
5. Authorize the kubernetes certificate to access the kubelet API

[root@k8s-master1 k8s-hxu]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
Notes:
# a clusterrolebinding is cluster-scoped; it applies in every namespace
# the clusterrolebinding is named kube-apiserver:kubelet-apis
# it binds the user kubernetes to system:kubelet-api-admin, a built-in clusterrole
6. Copy the config to master2 and master3 as well (create the .kube directory there first, then copy)
rsync -vaz /root/.kube/config k8s-master2:/root/.kube/
rsync -vaz /root/.kube/config k8s-master3:/root/.kube/
Now the other two masters can view and operate on cluster resources too
#Check the cluster component status
[root@k8s-master1 k8s-hxu]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.7.10:6443

[root@k8s-master1 k8s-hxu]#  kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-0               Healthy     {"health":"true"}

[root@k8s-master1 k8s-hxu]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.255.0.1   <none>        443/TCP   

Configure kubectl command completion

See the official documentation:

https://v1-20.docs.kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/

[root@k8s-master1 k8s-hxu]# yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
[root@k8s-master1 k8s-hxu]# kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
source $HOME/.bash_profile

Source the completion script from ~/.bashrc:

[root@k8s-master1 k8s-hxu]# echo 'source <(kubectl completion bash)' >>~/.bashrc
Add the completion script to the /etc/bash_completion.d directory:
[root@k8s-master1 k8s-hxu]# kubectl completion bash >/etc/bash_completion.d/kubectl
If kubectl has an associated alias, extend shell completion to cover it:
[root@k8s-master1 k8s-hxu]# echo 'alias k=kubectl' >>~/.bashrc
[root@k8s-master1 k8s-hxu]# echo 'complete -F __start_kubectl k' >>~/.bashrc

4.4 Deploy the kube-controller-manager component

Create the CSR file

[root@k8s-master1 k8s-hxu]# vim kube-controller-manager-csr.json 
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.7.10",
      "192.168.7.11",
      "192.168.7.12",
      "192.168.7.50"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "bj",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}

Note: the hosts list contains every kube-controller-manager node IP. CN and O are both system:kube-controller-manager, so the built-in ClusterRoleBinding system:kube-controller-manager grants the component the permissions it needs.

#Generate the certificate

[root@k8s-master1 k8s-hxu]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Create the kube-controller-manager kubeconfig
1. Set the cluster parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.7.10:6443 --kubeconfig=kube-controller-manager.kubeconfig
Cluster "kubernetes" set.

2. Set the client credentials

[root@k8s-master1 k8s-hxu]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
User "system:kube-controller-manager" set.

3. Set the context parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Context "system:kube-controller-manager" created.
[root@k8s-master1 k8s-hxu]# cat kube-controller-manager.kubeconfig
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager

4. Set the current context

[root@k8s-master1 k8s-hxu]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Switched to context "system:kube-controller-manager".
[root@k8s-master1 k8s-hxu]# cat kube-controller-manager.kubeconfig
....
current-context: system:kube-controller-manager
....
#Create the config file kube-controller-manager.conf
[root@k8s-master1 k8s-hxu]# vim kube-controller-manager.conf 
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"

Create the service file

[root@k8s-master1 k8s-hxu]# vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

Start the service

# First copy the certificates and configs into the target directories, and sync them to master2 and master3
[root@k8s-master1 k8s-hxu]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
cp kube-controller-manager.kubeconfig /etc/kubernetes/
cp kube-controller-manager.conf /etc/kubernetes/
cp kube-controller-manager.service /usr/lib/systemd/system/
# Copy the files to master2 and master3
rsync -vaz kube-controller-manager*.pem k8s-master2:/etc/kubernetes/ssl/
rsync -vaz kube-controller-manager*.pem k8s-master3:/etc/kubernetes/ssl/
rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-master2:/etc/kubernetes/
rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-master3:/etc/kubernetes/
rsync -vaz kube-controller-manager.service k8s-master2:/usr/lib/systemd/system/
rsync -vaz kube-controller-manager.service k8s-master3:/usr/lib/systemd/system/
# Start it
[root@k8s-master1 k8s-hxu]# systemctl daemon-reload 
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
Active: active (running) since Mon 2021-09-20 15:33:19 CST; 35ms ago

4.5 Deploy the kube-scheduler component

Create the CSR

[root@k8s-master1 k8s-hxu]# vim kube-scheduler-csr.json 
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.7.10"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "bj",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}

Note: the hosts list contains every kube-scheduler node IP. CN and O are both system:kube-scheduler, so the built-in ClusterRoleBindings system:kube-scheduler grants kube-scheduler the permissions it needs.

Generate the certificate

[root@k8s-master1 k8s-hxu]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Create the kube-scheduler kubeconfig
1. Set the cluster parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.7.10:6443 --kubeconfig=kube-scheduler.kubeconfig
Cluster "kubernetes" set.

2. Set the client credentials

[root@k8s-master1 k8s-hxu]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
User "system:kube-scheduler" set

3. Set the context parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Context "system:kube-scheduler" created.

4. Set the default context

[root@k8s-master1 k8s-hxu]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Switched to context "system:kube-scheduler".
#Create the config file kube-scheduler.conf
[root@k8s-master1 k8s-hxu]# vim kube-scheduler.conf 
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"

#Create the systemd service file

[root@k8s-master1 k8s-hxu]# vim kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Start the service

[root@k8s-master1 k8s-hxu]# cp kube-scheduler*.pem /etc/kubernetes/ssl/
cp kube-scheduler.kubeconfig /etc/kubernetes/
cp kube-scheduler.conf /etc/kubernetes/
cp kube-scheduler.service /usr/lib/systemd/system/
# Sync the files to the other masters
rsync -vaz kube-scheduler*.pem k8s-master2:/etc/kubernetes/ssl/
rsync -vaz kube-scheduler*.pem k8s-master3:/etc/kubernetes/ssl/
rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf k8s-master2:/etc/kubernetes/
rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf k8s-master3:/etc/kubernetes/
rsync -vaz kube-scheduler.service k8s-master2:/usr/lib/systemd/system/
rsync -vaz kube-scheduler.service k8s-master3:/usr/lib/systemd/system/

[root@k8s-master1 k8s-hxu]# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Active: active (running) since Wed

[root@k8s-master1 k8s-hxu]# netstat -ntlp|grep kube-schedule
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      35139/kube-schedule 
tcp6       0      0 :::10259                :::*                    LISTEN      35139/kube-schedule

Load the offline coredns image archive first

Upload pause-cordns.tar.gz to the k8s-node1 node and load it:
[root@k8s-node1 ~]# docker load -i pause-cordns.tar.gz


4.6 Deploy the kubelet component

kubelet: the kubelet on each node periodically calls the API server's REST interface to report its own status; the API server stores the node status in etcd. kubelet also watches Pod information through the API server to manage the pods on its node: creating, deleting, and updating them. Since the control-plane nodes do not schedule pods, kubelet is not installed on them; to keep certificate generation convenient, the steps below still run on master1 and the results are synced to node1 at the end.
The following operations are done on k8s-master1
Create kubelet-bootstrap.kubeconfig

[root@k8s-master1 k8s-hxu]# cd /root/k8s-hxu/
[root@k8s-master1 k8s-hxu]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)

[root@k8s-master1 k8s-hxu]#  kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.7.10:6443 --kubeconfig=kubelet-bootstrap.kubeconfig

[root@k8s-master1 k8s-hxu]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig

[root@k8s-master1 k8s-hxu]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig

[root@k8s-master1 k8s-hxu]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

[root@k8s-master1 k8s-hxu]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

#Create the kubelet.json config file
"cgroupDriver": "systemd" must match Docker's cgroup driver.
Replace address with your own k8s-node1 IP address.

[root@k8s-master1 k8s-hxu]# vim kubelet.json 
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.7.13",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}

Add the service file

[root@k8s-master1 k8s-hxu]# vim kubelet.service 
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target
#Notes:
--hostname-override: node display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; it is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image for the pod infrastructure (pause) container
# Note: change "address" in kubelet.json to each node's own IP, then start the service on every worker node
[root@k8s-node1 ~]# mkdir /etc/kubernetes/ssl -p
[root@k8s-master1 k8s-hxu]# scp kubelet-bootstrap.kubeconfig kubelet.json k8s-node1:/etc/kubernetes/
scp  ca.pem k8s-node1:/etc/kubernetes/ssl/
scp  kubelet.service k8s-node1:/usr/lib/systemd/system/

#Start the kubelet service
[root@k8s-node1 ~]# mkdir /var/lib/kubelet && mkdir /var/log/kubernetes
[root@k8s-node1 ~]#  systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet
   Active: active (running) since 

Once the kubelet service is up, go back to k8s-master1 and approve the bootstrap request.
The following command shows the CSR sent by the worker node:

[root@k8s-master1 k8s-hxu]# kubectl get csr
The list shows one kubelet request in Pending state.
Approve the client request on the server side:
[root@k8s-master1 k8s-hxu]# kubectl certificate approve node-csr-0etHx5MGVH6iIP1zR67lnbUSVpMsWmZ7cLuAspuumAU
[root@k8s-master1 k8s-hxu]# kubectl get csr
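If several requests are pending (one per worker node), they can be approved in one pass instead of copying each name (a convenience one-liner):
[root@k8s-master1 k8s-hxu]# kubectl get csr -o name | xargs kubectl certificate approve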


[root@k8s-master1 k8s-hxu]# kubectl get nodes
NAME    STATUS     ROLES    AGE   VERSION
k8s-node1   NotReady   <none>   40s   v1.20.7
#Note: STATUS NotReady means the network plugin is not installed yet
The generated files are now visible in /etc/kubernetes/ on node1
[root@k8s-node1 k8s-hxu]# cd /etc/kubernetes/
[root@k8s-node1 kubernetes]# ll
-rw------- 1 root root 2.1K Sep 20 16:30 kubelet-bootstrap.kubeconfig
-rw-r--r-- 1 root root  801 Sep 20 16:30 kubelet.json
-rw------- 1 root root 2.3K Sep 20 16:42 kubelet.kubeconfig
drwxr-xr-x 2 root root  138 Sep 20 16:42 ssl
[root@k8s-node1 kubernetes]# ll ssl/
-rw-r--r-- 1 root root 1.4K Sep 20 16:30 ca.pem
-rw------- 1 root root 1.2K Sep 20 16:42 kubelet-client-2021-09-20-16-42-19.pem
lrwxrwxrwx 1 root root   58 Sep 20 16:42 kubelet-client-current.pem -> /etc/kubernetes/ssl/kubelet-client-2021-09-20-16-42-19.pem
-rw-r--r-- 1 root root 2.3K Sep 20 16:32 kubelet.crt
-rw------- 1 root root 1.7K Sep 20 16:32 kubelet.key

4.7 Deploy the kube-proxy component

Create the CSR

[root@k8s-master1 k8s-hxu]# vim kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "bj",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

Generate the certificate

[root@k8s-master1 k8s-hxu]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

#創建kubeconfig文件
[root@k8s-master1 k8s-hxu]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.7.10:6443 --kubeconfig=kube-proxy.kubeconfig

[root@k8s-master1 k8s-hxu]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@k8s-master1 k8s-hxu]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
[root@k8s-master1 k8s-hxu]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".

Create the kube-proxy config file

[root@k8s-master1 k8s-hxu]# vim kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.7.13 #node1's IP
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.7.0/24  #the physical host network segment
healthzBindAddress: 192.168.7.13:10256 #node1's IP
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.7.13:10249 #node1's IP
mode: "ipvs"

Create the service file

[root@k8s-master1 k8s-hxu]# vim kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
[root@k8s-master1 k8s-hxu]# scp  kube-proxy.kubeconfig kube-proxy.yaml k8s-node1:/etc/kubernetes/
[root@k8s-master1 k8s-hxu]# scp kube-proxy.service k8s-node1:/usr/lib/systemd/system/

#Start the service
[root@k8s-node1 ~]# mkdir -p /var/lib/kube-proxy
# systemctl daemon-reload
# systemctl enable kube-proxy
# systemctl restart kube-proxy
# systemctl status kube-proxy
Active: active (running) since Mon 2021-09-20 17:05:41 CST; 49ms ago
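Since kube-proxy runs in ipvs mode, the virtual-server table should now be populated; ipvsadm (installed with the base packages earlier) shows it. Expect at least the kubernetes Service VIP 10.255.0.1:443 forwarding to the apiserver address:
[root@k8s-node1 ~]# ipvsadm -Ln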

4.8 Deploy the calico component

Load the offline image archives

#Upload cni.tar.gz and node.tar.gz to the k8s-node1 node and load them:
[root@k8s-node1 ~]# docker load -i cni.tar.gz
[root@k8s-node1 ~]# docker load -i node.tar.gz

Create the YAML file on master1; calico.yaml needs the following change (around lines 166-167 of the file):

166 - name: IP_AUTODETECTION_METHOD
167   value: "can-reach=192.168.7.13"  #any worker node's IP works here

Apply it

[root@k8s-master1 k8s-hxu]# kubectl apply -f calico.yaml
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                READY   STATUS    RESTARTS   AGE
calico-node-xk7n4   1/1     Running   0          13s
The node status now changes from NotReady to Ready
[root@k8s-master1 k8s-hxu]# kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    <none>   32m   v1.20.7

4.9 Deploy the coredns component

pause-cordns.tar.gz was already loaded earlier, so just create the coredns.yaml file and apply it

[root@k8s-master1 ~]# kubectl apply -f coredns.yaml
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
calico-node-xk7n4          1/1     Running   0          6m6s
coredns-7bf4bd64bd-dt8dq   1/1     Running   0          51s
[root@k8s-master1 ~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.255.0.2   <none>        53/UDP,53/TCP,9153/TCP   12m

4. Check the cluster status

[root@k8s-master1 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    <none>   38m   v1.20.7
5. Test the cluster by deploying a Tomcat service
#Upload tomcat.tar.gz and busybox-1-28.tar.gz to k8s-node1 and load them:
[root@k8s-node1 ~]# docker load -i tomcat.tar.gz
[root@k8s-node1 ~]# docker load -i busybox-1-28.tar.gz 
[root@k8s-master1 ~]# kubectl apply -f tomcat.yaml

[root@k8s-master1 ~]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
demo-pod   2/2     Running   0          11m
[root@k8s-master1 ~]# kubectl apply -f tomcat-service.yaml
[root@k8s-master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.255.0.1       <none>        443/TCP          158m
tomcat       NodePort    10.255.227.179   <none>        8080:30080/TCP   19m

Open http://<k8s-node1 IP>:30080 in a browser to reach the Tomcat page.
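The same check from the command line (assuming node1's IP from the plan; expect an HTTP 200 response):
[root@k8s-master1 ~]# curl -I http://192.168.7.13:30080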

6. Verify that coredns works
[root@k8s-master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
/ # ping www.baidu.com
PING www.baidu.com (39.156.66.18): 56 data bytes
64 bytes from 39.156.66.18: seq=0 ttl=127 time=39.3 ms
#The above shows the pod has external network access
/ # nslookup kubernetes.default.svc.cluster.local
Server:		10.255.0.2
Address:	10.255.0.2:53
Name:	kubernetes.default.svc.cluster.local
Address: 10.255.0.1

/ # nslookup tomcat.default.svc.cluster.local
Server:    10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local

Name:      tomcat.default.svc.cluster.local
Address 1: 10.255.227.179 tomcat.default.svc.cluster.local

#Note:
Use busybox 1.28 specifically, not the latest image; with the latest busybox, nslookup cannot resolve the DNS name and IP, failing like this:
/ # nslookup kubernetes.default.svc.cluster.local
Server:		10.255.0.2
Address:	10.255.0.2:53
*** Can't find kubernetes.default.svc.cluster.local: No answer
*** Can't find kubernetes.default.svc.cluster.local: No answer

10.255.0.2 is the coredns clusterIP, which shows coredns is configured correctly.
Internal Service names are resolved through coredns.
