Installing Kubernetes 1.13 from Binaries on CentOS 7

1. Introduction to Kubernetes

1.1. What is Kubernetes?

  Kubernetes, commonly abbreviated as k8s (a "k", 8 letters, then an "s") or kube, is an open-source platform for automating the operation of Linux containers. It eliminates many of the manual steps involved in deploying and scaling containerized applications.
  Kubernetes was originally designed and developed by engineers at Google. Google was one of the early contributors to Linux container technology and has spoken publicly about how it runs everything in containers (this is the technology behind Google's cloud services). Google deploys more than 2 billion containers per week, all supported by its internal platform Borg. Borg is the predecessor of Kubernetes, and the lessons learned over years of developing Borg became a major influence on much of the technology in Kubernetes.
  

1.2. What are the advantages of Kubernetes?

  With Kubernetes, you can quickly and efficiently meet the following user needs:

  • Deploy applications quickly and predictably
  • Scale your applications on the fly
  • Roll out new features seamlessly
  • Limit hardware usage to only the resources you need

  
  Advantages of Kubernetes:

  • Portable: public cloud, private cloud, hybrid cloud, multi-cloud
  • Extensible: modular, pluggable, hookable, composable
  • Self-healing: auto-placement, auto-restart, auto-replication, auto-scaling

  
  Google launched the Kubernetes project in 2014. Kubernetes builds on Google's 15 years of experience running production workloads at scale, combined with the best ideas and practices from the community.

  

2. Environment Preparation

The examples in this article use four machines; their hostnames and IP addresses are as follows:

IP address    Hostname
10.0.0.100 c0(master)
10.0.0.101 c1(master)
10.0.0.102 c2
10.0.0.103 c3

  
The /etc/hosts file is identical on all four machines; c0 is shown as an example:

[root@c0 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.100 c0
10.0.0.101 c1
10.0.0.102 c2
10.0.0.103 c3

  

2.1. Network Configuration

  The following uses c0 as an example:

[root@c0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=6d8d9ad6-37b5-431a-ab16-47d0aa00d01f
DEVICE=eth0
ONBOOT=yes
IPADDR0=10.0.0.100
PREFIX0=24
GATEWAY0=10.0.0.1
DNS1=10.0.0.1
DNS2=8.8.8.8

  
  Restart the network:

[root@c0 ~]# service network restart

  
  Switch the yum repositories to domestic mirrors (Aliyun / 163):

[root@c0 ~]# yum install -y wget
[root@c0 ~]# cd /etc/yum.repos.d/
[root@c0 yum.repos.d]# mv CentOS-Base.repo CentOS-Base.repo.bak
[root@c0 yum.repos.d]# wget http://mirrors.aliyun.com/repo/Centos-7.repo
[root@c0 yum.repos.d]# wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
[root@c0 yum.repos.d]# yum clean all
[root@c0 yum.repos.d]# yum makecache

  
  Install the network tools and basic utility packages:

[root@c0 ~]# yum install net-tools checkpolicy gcc dkms foomatic openssh-server bash-completion -y

  

2.2. Set the HOSTNAME

  Set the hostname on each of the four machines in turn; c0 is shown as an example:

[root@c0 ~]# hostnamectl --static set-hostname c0
[root@c0 ~]# hostnamectl status
   Static hostname: c0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 04c3f6d56e788345859875d9f49bd4bd
           Boot ID: ba02919abe4245aba673aaf5f778ad10
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.el7.x86_64
      Architecture: x86-64

  

2.3. Configure Passwordless SSH Login

  Generate a key pair on each machine separately:

[root@c0 ~]# ssh-keygen
# Press Enter at every prompt until it finishes

  
  Copy the key generated by ssh-keygen to each of the machines. The following is run from c0:

[root@c0 ~]# ssh-copy-id c0
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c0 (10.0.0.100)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c0's password:
[root@c0 ~]# rm -rf ~/.ssh/known_hosts
[root@c0 ~]# clear
[root@c0 ~]# ssh-copy-id c0
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c0 (10.0.0.100)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c0's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c0'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c1 (10.0.0.101)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c1'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c2 (10.0.0.102)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c2'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c3 (10.0.0.103)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c3's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c3'"
and check to make sure that only the key(s) you wanted were added.

  
  Test that the keys are configured correctly:

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N hostname; done;
c0
c1
c2
c3

  

2.4. Disable the Firewall

  Run the following command on every machine; c0 is shown as an example:

[root@c0 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
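
  Since passwordless SSH is already set up (section 2.3), the same thing can also be done for all four nodes from c0 in one loop; a small sketch:

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N "systemctl stop firewalld && systemctl disable firewalld"; done;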

  

2.5. Disable Swap

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N swapoff -a; done;

Before and after disabling swap, you can check its status with the free -h command; after it is disabled, the swap total should be 0.
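
  For example, the swap status can be checked on all four nodes at once; after swap is turned off, the Swap total should read 0B:

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N free -h | grep -i swap; done;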

  
  On every machine, edit the /etc/fstab configuration file and comment out the last entry, /dev/mapper/centos-swap swap; c0 is shown as an example:

[root@c0 ~]# sed -i "s/\/dev\/mapper\/centos-swap/# \/dev\/mapper\/centos-swap/" /etc/fstab
[root@c1 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Jan 28 11:49:11 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=93572ab6-90da-4cfe-83a4-93be7ad8597c /boot                   xfs     defaults        0 0
# /dev/mapper/centos-swap swap                    swap    defaults        0 0
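
  To apply the same change to all four nodes from c0 (assuming each node's swap entry uses the same /dev/mapper/centos-swap device), a loop such as the following can be used:

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N "sed -i 's/^\/dev\/mapper\/centos-swap/# \/dev\/mapper\/centos-swap/' /etc/fstab"; done;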

  

2.6. Disable SELinux

  Disable SELinux on every machine; c0 is shown as an example:

[root@c0 ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
[root@c0 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

SELinux stands for Security-Enhanced Linux.
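
  The change in /etc/selinux/config only takes effect after a reboot; to also turn SELinux off immediately, and to apply both steps to every node from c0, something like this can be used (a sketch):

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N "sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config && setenforce 0"; done;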

  

2.7. Install NTP

  Install the NTP time-synchronization tool and start NTP:

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N yum install ntp -y; done;

  
  On every machine, enable NTP to start at boot:

[root@c0 ~]# systemctl enable ntpd && systemctl start ntpd
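
  This too can be run against all nodes from c0 in one pass, for example:

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N "systemctl enable ntpd && systemctl start ntpd"; done;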

  
  Check the time on each machine in turn:

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N date; done;
Sat Feb  9 18:11:48 CST 2019
Sat Feb  9 18:11:48 CST 2019
Sat Feb  9 18:11:49 CST 2019
Sat Feb  9 18:11:49 CST 2019

  

2.8. Install and Configure CFSSL

  CFSSL is used to build a local CA and generate the certificates needed later.

[root@c0 ~]# mkdir -p /home/work/_src
[root@c0 ~]# cd /home/work/_src
[root@c0 _src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@c0 _src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@c0 _src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@c0 _src]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@c0 _src]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@c0 _src]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@c0 _src]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
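
  You can confirm the tools are installed and on the PATH, for example:

[root@c0 _src]# cfssl version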

  

2.9. Create the Installation Directories

  Create the directories that ETCD and Kubernetes will use later:

[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_app/k8s/etcd/{bin,cfg,ssl} -p; done;
[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_app/k8s/kubernetes/{bin,cfg,ssl,ssl_cert} -p; done;
[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_data/etcd -p; done;

  

2.10. Upgrade the Kernel

  The stock 3.10 kernel lacks the ip_vs_fo.ko module, which prevents kube-proxy from enabling IPVS mode. The ip_vs_fo.ko module first appeared in kernel 3.19, and that kernel version is not available in the common RPM repositories of the RedHat family of distributions.

[root@c0 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
[root@c0 ~]# yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y
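
  If you prefer not to pick the kernel by hand at boot, the newly installed kernel can be made the default before rebooting; a sketch (the index 0 assumes the new kernel is the first entry in the GRUB menu):

[root@c0 ~]# grub2-set-default 0
[root@c0 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg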

  
  After rebooting the system, select the new kernel at the GRUB menu (unless you set it as the default), then run the following command to check the new kernel:

[root@c0 ~]# hostnamectl
   Static hostname: c0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 04c3f6d56e788345859875d9f49bd4bd
           Boot ID: 40a19388698f4907bd233a8cff76f36e
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 4.20.7-1.el7.elrepo.x86_64
      Architecture: x86-64

  

3. Install Docker 18.06.1-ce

3.1. Remove Old Versions of Docker

  The removal method provided by the official documentation:

$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

  
  Another way to remove an old Docker installation: first list the Docker packages that are installed:

[root@c0 ~]# yum list installed | grep docker
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
containerd.io.x86_64            1.2.2-3.el7                    @docker-ce-stable
docker-ce.x86_64                3:18.09.1-3.el7                @docker-ce-stable
docker-ce-cli.x86_64            1:18.09.1-3.el7                @docker-ce-stable

  
  Remove the installed Docker packages:

[root@c0 ~]# yum -y remove docker-ce.x86_64 docker-ce-cli.x86_64 containerd.io.x86_64

  
  Delete the Docker images/containers:

[root@c0 ~]# rm -rf /var/lib/docker

  

3.2. Set Up the Repository

  Install the required packages: yum-utils provides the yum-config-manager utility, and device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver.
  Do this on every machine; c0 is shown as an example:

[root@c0 ~]# sudo yum install -y yum-utils device-mapper-persistent-data lvm2
[root@c0 ~]# sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

  

3.3. Install Docker

[root@c0 ~]# sudo yum install docker-ce-18.06.1.ce-3.el7 -y

  

3.4. Start Docker

[root@c0 ~]# systemctl enable docker && systemctl start docker

  

4. Install ETCD 3.3.10

4.1. Create the ETCD Certificates

4.1.1. Generate the JSON Request File Used for the ETCD SERVER Certificate

[root@c0 ~]# mkdir -p /home/work/_src/ssl_etcd
[root@c0 ~]# cd /home/work/_src/ssl_etcd
[root@c0 ssl_etcd]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

The default policy specifies that certificates are valid for 10 years (87600h).
The etcd profile specifies what the certificate may be used for:
signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE.
server auth: a client can use this CA to verify certificates presented by a server.
client auth: a server can use this CA to verify certificates presented by a client.

  

4.1.2. Create the ETCD CA Certificate Configuration File

[root@c0 ssl_etcd]# cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

  

4.1.3. Create the ETCD SERVER Certificate Configuration File

[root@c0 ssl_etcd]# cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "10.0.0.100",
    "10.0.0.101",
    "10.0.0.102",
    "10.0.0.103"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

  

4.1.4. Generate the ETCD CA Certificate and Private Key

[root@c0 ssl_etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/02/14 18:44:37 [INFO] generating a new CA key and certificate from CSR
2019/02/14 18:44:37 [INFO] generate received request
2019/02/14 18:44:37 [INFO] received CSR
2019/02/14 18:44:37 [INFO] generating key: rsa-2048
2019/02/14 18:44:38 [INFO] encoded CSR
2019/02/14 18:44:38 [INFO] signed certificate with serial number 384346866475232855604658229421854651219342845660
[root@c0 ssl_etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server-csr.json
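
  The cfssl-certinfo tool installed in section 2.8 can be used to inspect a generated certificate, for example:

[root@c0 ssl_etcd]# cfssl-certinfo -cert ca.pem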

  

4.1.5. Generate the ETCD SERVER Certificate and Private Key

[root@c0 ssl_etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
2019/02/09 20:52:57 [INFO] generate received request
2019/02/09 20:52:57 [INFO] received CSR
2019/02/09 20:52:57 [INFO] generating key: rsa-2048
2019/02/09 20:52:57 [INFO] encoded CSR
2019/02/09 20:52:57 [INFO] signed certificate with serial number 373071566605311458179949133441319838683720611466
2019/02/09 20:52:57 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@c0 ssl_etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
[root@c0 ssl_etcd]# cp server.pem server-key.pem /home/work/_app/k8s/etcd/ssl/

  
  Copy the generated certificates to the directory used by etcd:

[root@c0 ssl_etcd]# cp *.pem /home/work/_app/k8s/etcd/ssl/

  

4.2. Install ETCD

4.2.1. Download ETCD

[root@c0 ssl_etcd]# cd /home/work/_src/
[root@c0 _src]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
[root@c0 _src]# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
[root@c0 _src]# cd etcd-v3.3.10-linux-amd64
[root@c0 etcd-v3.3.10-linux-amd64]# cp etcd etcdctl /home/work/_app/k8s/etcd/bin/

  

4.2.2. Create the ETCD systemd Unit File

  Create and save the file /usr/lib/systemd/system/etcd.service with the following content:

[root@c0 etcd-v3.3.10-linux-amd64]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/home/work/_app/k8s/etcd/cfg/etcd.conf
ExecStart=/home/work/_app/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/home/work/_app/k8s/etcd/ssl/server.pem \
--key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/home/work/_app/k8s/etcd/ssl/server.pem \
--peer-key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

  

4.2.3. Copy the ETCD Binaries, Certificates, and systemd Unit File to the Other Nodes

[root@c0 ~]# for N in $(seq 0 3); do scp -r /home/work/_app/k8s/etcd c$N:/home/work/_app/k8s/; done;
[root@c0 ~]# for N in $(seq 0 3); do scp -r /usr/lib/systemd/system/etcd.service c$N:/usr/lib/systemd/system/etcd.service; done;

  

4.2.4. ETCD Main Configuration File

  On c0, create the file /home/work/_app/k8s/etcd/cfg/etcd.conf with the following content:

[root@c0 _src]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
# Name of this ETCD node
ETCD_NAME="etcd00"
# ETCD data storage directory
ETCD_DATA_DIR="/home/work/_data/etcd"
# List of URLs this node listens on for peer traffic; multiple addresses are comma-separated, in the form scheme://IP:PORT, where scheme can be http or https
ETCD_LISTEN_PEER_URLS="https://10.0.0.100:2380"
# List of URLs this node listens on for client traffic
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.100:2379"
 
#[Clustering]
# List of this member's peer URLs advertised to the rest of the cluster; cluster data is transferred over these addresses, so they must be reachable by all other members
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.100:2380"
# List of this member's client URLs advertised to the rest of the cluster
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.100:2379"
# All cluster members, in the form ETCD_NAME=ETCD_INITIAL_ADVERTISE_PEER_URLS, comma-separated if there is more than one
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.100:2380,etcd01=https://10.0.0.101:2380,etcd02=https://10.0.0.102:2380,etcd03=https://10.0.0.103:2380"
# Initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# Initial cluster state; new means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

  
  On c1, create the file /home/work/_app/k8s/etcd/cfg/etcd.conf with the following content:

[root@c1 _src]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/home/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.101:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.101:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.100:2380,etcd01=https://10.0.0.101:2380,etcd02=https://10.0.0.102:2380,etcd03=https://10.0.0.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

  
  On c2, create the file /home/work/_app/k8s/etcd/cfg/etcd.conf with the following content:

[root@c2 _src]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/home/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.102:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.102:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.102:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.100:2380,etcd01=https://10.0.0.101:2380,etcd02=https://10.0.0.102:2380,etcd03=https://10.0.0.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

  
  On c3, create the file /home/work/_app/k8s/etcd/cfg/etcd.conf with the following content:

[root@c3 _src]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/home/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.103:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.103:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.103:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.103:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.100:2380,etcd01=https://10.0.0.101:2380,etcd02=https://10.0.0.102:2380,etcd03=https://10.0.0.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

  

4.2.5. Start the ETCD Service

  Run this separately on every node (the start command on the first node will block until enough of the other members are up, so start them on all nodes in quick succession):

[root@c0 _src]# systemctl daemon-reload && systemctl enable etcd && systemctl start etcd

  

4.2.6. Check the ETCD Service Status

[root@c0 _src]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem cluster-health
member 2cba54b8e3ba988a is healthy: got healthy result from https://10.0.0.103:2379
member 7c12135a398849e3 is healthy: got healthy result from https://10.0.0.102:2379
member 99c2fd4fe11e28d9 is healthy: got healthy result from https://10.0.0.100:2379
member f2fd0c12369e0d75 is healthy: got healthy result from https://10.0.0.101:2379
cluster is healthy

  

4.2.7. View the ETCD Cluster Member List

[root@c0 _src]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem  member list
2cba54b8e3ba988a: name=etcd03 peerURLs=https://10.0.0.103:2380 clientURLs=https://10.0.0.103:2379 isLeader=false
7c12135a398849e3: name=etcd02 peerURLs=https://10.0.0.102:2380 clientURLs=https://10.0.0.102:2379 isLeader=false
99c2fd4fe11e28d9: name=etcd00 peerURLs=https://10.0.0.100:2380 clientURLs=https://10.0.0.100:2379 isLeader=true
f2fd0c12369e0d75: name=etcd01 peerURLs=https://10.0.0.101:2380 clientURLs=https://10.0.0.101:2379 isLeader=false

  

5. Install Flannel v0.11.0

5.1. Flanneld Network Installation

  Flannel is essentially an overlay network: it wraps TCP packets inside another kind of network packet for routing, forwarding, and communication. It currently supports UDP, VxLAN, AWS VPC, GCE routes, and other forwarding back ends. Flannel is used in Kubernetes to set up the layer 3 (network layer) fabric.
  Flannel provides a layer 3 IPv4 network between the nodes of a cluster. It does not control how containers are networked to the host, only how traffic is transported between hosts. However, Flannel does provide a CNI plugin for Kubernetes and guidance for integrating with Docker.

Without the Flanneld network, pods on different Nodes cannot communicate with each other; only pods within the same Node can.
When the Flanneld service starts, it mainly does the following: it fetches the Network configuration from ETCD, carves out a Subnet, registers it in ETCD, and records the subnet information in /run/flannel/subnet.env.

  

5.2. Write the Network Range into the ETCD Cluster

[root@c0 _src]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379"  set /coreos.com/network/config  '{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}

The current version of Flanneld (v0.11.0) does not support the ETCD v3 API, so the configuration key and network range are written using the ETCD v2 API.
The Pod network range ${CLUSTER_CIDR} that is written must be a /16 block and must match the --cluster-cidr parameter of kube-controller-manager.
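
  The value just written can be read back (still via the v2 API) to confirm it, for example:

[root@c0 _src]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379" get /coreos.com/network/config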

  

5.3. Install Flannel

[root@c0 _src]# pwd
/home/work/_src
[root@c0 _src]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@c0 _src]# tar -xvf flannel-v0.11.0-linux-amd64.tar.gz
[root@c0 _src]# mv flanneld mk-docker-opts.sh /home/work/_app/k8s/kubernetes/bin/

  

5.4. Configure Flannel

  Create and save the file /home/work/_app/k8s/kubernetes/cfg/flanneld with the following content:

[root@c0 _src]# cat /home/work/_app/k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379 -etcd-cafile=/home/work/_app/k8s/etcd/ssl/ca.pem -etcd-certfile=/home/work/_app/k8s/etcd/ssl/server.pem -etcd-keyfile=/home/work/_app/k8s/etcd/ssl/server-key.pem"

  

5.5. Create the Flannel systemd Unit File

  Create and save the file /usr/lib/systemd/system/flanneld.service with the following content:

[root@c0 _src]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/home/work/_app/k8s/kubernetes/cfg/flanneld
ExecStart=/home/work/_app/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/home/work/_app/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

The mk-docker-opts.sh script writes the Pod subnet assigned to Flanneld into the Docker options file given by -d (here /run/flannel/subnet.env); when Docker starts later, it uses the environment variables in this file to configure the docker0 bridge.
Flanneld communicates with the other nodes over the interface that carries the system default route; on nodes with more than one network interface (e.g. an internal and a public one), the -iface parameter can be used to specify which interface to use.

  

5.6. Configure Docker to Start with the Assigned Subnet

  Edit the file /usr/lib/systemd/system/docker.service so its content is as follows:

[root@c0 _src]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# Load the environment file written by Flannel and append its variable to ExecStart
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

  

5.7. Copy the Flannel Files to the Other Machines

  This mainly copies the Flannel binaries, the Flannel configuration file, the Flannel systemd unit file, and the Docker systemd unit file:

[root@c0 _src]# for N in $(seq 0 3); do scp -r /home/work/_app/k8s/kubernetes/* c$N:/home/work/_app/k8s/kubernetes/; done;
[root@c0 _src]# for N in $(seq 0 3); do scp -r /usr/lib/systemd/system/docker.service c$N:/usr/lib/systemd/system/docker.service; done;
[root@c0 _src]# for N in $(seq 0 3); do scp -r /usr/lib/systemd/system/flanneld.service c$N:/usr/lib/systemd/system/flanneld.service; done;

  

5.8. Start the Services

  Run this separately on every machine; c0 is shown as an example:

[root@c0 _src]# systemctl daemon-reload && systemctl stop docker && systemctl enable flanneld && systemctl start flanneld && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

Before starting Flannel, stop Docker and any related kubelet, so that Flannel can take over the docker0 bridge.

  

5.9. Check the docker0 Bridge Configured by the Flannel Service

[root@c0 _src]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:1c:42:50:8c:6a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/8 brd 10.255.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::49d:e3e6:c623:9582/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 3e:80:5d:97:53:c4 brd ff:ff:ff:ff:ff:ff
    inet 10.172.46.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::3c80:5dff:fe97:53c4/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:9e:df:b9:87 brd ff:ff:ff:ff:ff:ff
    inet 10.172.46.1/24 brd 10.172.46.255 scope global docker0
       valid_lft forever preferred_lft forever

  

5.10. Verify the Flannel Service

[root@c0 _src]# for N in $(seq 0 3); do ssh c$N cat /run/flannel/subnet.env ; done;
DOCKER_OPT_BIP="--bip=10.172.46.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.46.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=10.172.90.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.90.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=10.172.5.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.5.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=10.172.72.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.72.1/24 --ip-masq=false --mtu=1450"

  

6. Install Kubernetes

6.1. Create the Certificates Needed by Kubernetes

6.1.1. Generate the JSON Request File for the Kubernetes Certificates

[root@c0 ~]# cd /home/work/_app/k8s/kubernetes/ssl/
[root@c0 ssl]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "server": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth"
        ],
        "expiry": "8760h"
      },
      "client": {
        "usages": [
          "signing",
          "key encipherment",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

  

6.1.2. Generate the Kubernetes CA Configuration File and Certificate

[root@c0 ssl]# cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

  
  Initialize a Kubernetes CA certificate:

[root@c0 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/02/15 00:33:49 [INFO] generating a new CA key and certificate from CSR
2019/02/15 00:33:49 [INFO] generate received request
2019/02/15 00:33:49 [INFO] received CSR
2019/02/15 00:33:49 [INFO] generating key: rsa-2048
2019/02/15 00:33:49 [INFO] encoded CSR
2019/02/15 00:33:49 [INFO] signed certificate with serial number 19178419085322799829088564182237651657158569707
[root@c0 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

  

6.1.3. Generate the Kube API Server Configuration File and Certificate

  Create the certificate configuration file:

[root@c0 ssl]# cat << EOF | tee kube-apiserver-server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "10.0.0.1",
      "10.0.0.100",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "API Server"
        }
    ]
}
EOF

  
  Generate the kube-apiserver certificate:

[root@c0 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kube-apiserver-server-csr.json | cfssljson -bare kube-apiserver-server
2019/02/15 00:40:17 [INFO] generate received request
2019/02/15 00:40:17 [INFO] received CSR
2019/02/15 00:40:17 [INFO] generating key: rsa-2048
2019/02/15 00:40:17 [INFO] encoded CSR
2019/02/15 00:40:17 [INFO] signed certificate with serial number 73791614256163825800646464302566039201359288928
2019/02/15 00:40:17 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@c0 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  kube-apiserver-server.csr  kube-apiserver-server-csr.json  kube-apiserver-server-key.pem  kube-apiserver-server.pem

  

6.1.4. Generate the kubelet Client Configuration File and Certificate

  Create the certificate configuration file:

[root@c0 ssl]# cat << EOF | tee kubelet-client-csr.json
{
  "CN": "kubelet",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "Kubelet",
      "ST": "Beijing"
    }
  ]
}
EOF

  
  Generate the kubelet client certificate:

[root@c0 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubelet-client-csr.json | cfssljson -bare kubelet-client
2019/02/15 00:44:43 [INFO] generate received request
2019/02/15 00:44:43 [INFO] received CSR
2019/02/15 00:44:43 [INFO] generating key: rsa-2048
2019/02/15 00:44:43 [INFO] encoded CSR
2019/02/15 00:44:43 [INFO] signed certificate with serial number 285651868701760571162897366975202301612567414209
2019/02/15 00:44:43 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@c0 ssl]# ls
ca-config.json  ca-csr.json  ca.pem                     kube-apiserver-server-csr.json  kube-apiserver-server.pem  kubelet-client-csr.json  kubelet-client.pem
ca.csr          ca-key.pem   kube-apiserver-server.csr  kube-apiserver-server-key.pem   kubelet-client.csr         kubelet-client-key.pem

  

6.1.5. Generate the Kube-Proxy Configuration File and Certificate

  Create the certificate configuration file:

[root@c0 ssl]# cat << EOF | tee kube-proxy-client-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "System",
      "ST": "Beijing"
    }
  ]
}
EOF

  
  Generate the Kube-Proxy certificate:

[root@c0 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-client-csr.json | cfssljson -bare kube-proxy-client
2019/02/15 01:14:39 [INFO] generate received request
2019/02/15 01:14:39 [INFO] received CSR
2019/02/15 01:14:39 [INFO] generating key: rsa-2048
2019/02/15 01:14:39 [INFO] encoded CSR
2019/02/15 01:14:39 [INFO] signed certificate with serial number 535503934939407075396917222976858989138817338004
2019/02/15 01:14:39 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@c0 ssl]# ls
ca-config.json  ca-csr.json  ca.pem                     kube-apiserver-server-csr.json  kube-apiserver-server.pem  kubelet-client-csr.json  kubelet-client.pem     kube-proxy-client-csr.json  kube-proxy-client.pem
ca.csr          ca-key.pem   kube-apiserver-server.csr  kube-apiserver-server-key.pem   kubelet-client.csr         kubelet-client-key.pem   kube-proxy-client.csr  kube-proxy-client-key.pem

  

6.1.6. Generate the kubectl Administrator Configuration File and Certificate

  Create the kubectl administrator certificate configuration file:

[root@c0 ssl]# cat << EOF | tee kubernetes-admin-user.csr.json
{
  "CN": "admin",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "Cluster Admins",
      "ST": "Beijing"
    }
  ]
}
EOF

  
  Generate the kubectl administrator certificate:

[root@c0 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubernetes-admin-user.csr.json | cfssljson -bare kubernetes-admin-user
2019/02/15 01:23:22 [INFO] generate received request
2019/02/15 01:23:22 [INFO] received CSR
2019/02/15 01:23:22 [INFO] generating key: rsa-2048
2019/02/15 01:23:22 [INFO] encoded CSR
2019/02/15 01:23:22 [INFO] signed certificate with serial number 724413523889121871668676123719532667068182658276
2019/02/15 01:23:22 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@c0 ssl]# ls
ca-config.json  ca-key.pem                 kube-apiserver-server-csr.json  kubelet-client.csr       kubelet-client.pem          kube-proxy-client-key.pem  kubernetes-admin-user.csr.json
ca.csr          ca.pem                     kube-apiserver-server-key.pem   kubelet-client-csr.json  kube-proxy-client.csr       kube-proxy-client.pem      kubernetes-admin-user-key.pem
ca-csr.json     kube-apiserver-server.csr  kube-apiserver-server.pem       kubelet-client-key.pem   kube-proxy-client-csr.json  kubernetes-admin-user.csr  kubernetes-admin-user.pem

  

6.1.7. Copy the Certificates to the Kubernetes Node Machines

[root@c0 ~]# for N in $(seq 0 3); do scp -r /home/work/_app/k8s/kubernetes/ssl/*.pem c$N:/home/work/_app/k8s/kubernetes/ssl/; done;

  

6.2. Deploy the Kubernetes Master Node and Join It to the Cluster

  The Kubernetes Master node runs the following components:

  • APIServer
      The APIServer exposes the RESTful Kubernetes API to the outside world. It is the unified entry point for administrative commands: every create, delete, update, or query of a resource goes through the APIServer before being handed to etcd. kubectl (the client tool shipped with Kubernetes, which internally just calls the Kubernetes API) talks directly to the APIServer.
  • Scheduler
      The scheduler assigns Pods to suitable Nodes. If you treat the scheduler as a black box, its input is a Pod plus a list of Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with a scheduling algorithm but also leaves the interface open, so users can define their own scheduling algorithm to suit their needs.
  • Controller manager
      If the APIServer handles the front-of-house work, the controller manager takes care of the back office. Each resource type has a corresponding controller, and the controller manager is responsible for managing these controllers. For example, when we create a Pod through the APIServer, the APIServer's job is done once the Pod object has been created; the controllers then drive it to its desired state.
  • ETCD
      etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what backs the RESTful API.
  • Flannel
      By default there is no flanneld network, so pods on different Nodes cannot communicate and only pods within the same Node can. Flannel fetches the network configuration from etcd, carves out a subnet, registers it in etcd, and records the subnet information.

kube-scheduler and kube-controller-manager can run in clustered mode: leader election picks one working process while the other processes stay blocked on standby.
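
  Once the control plane is up, you can see which instance currently holds the leader lease by inspecting the corresponding Endpoints objects, for example:

[root@c0 ~]# kubectl -n kube-system get endpoints kube-scheduler -o yaml
[root@c0 ~]# kubectl -n kube-system get endpoints kube-controller-manager -o yaml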

  

6.2.1. Download and Install the Kubernetes Server Binaries

[root@c0 ~]# cd /home/work/_src/
[root@c0 _src]# wget https://dl.k8s.io/v1.13.0/kubernetes-server-linux-amd64.tar.gz
[root@c0 _src]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@c0 _src]# cd kubernetes/server/bin/
[root@c0 bin]# cp kube-scheduler kube-apiserver kube-controller-manager kubectl kubelet kube-proxy /home/work/_app/k8s/kubernetes/bin/

  
  From c0, copy kubelet, kubectl, and kube-proxy to the other nodes:

[root@c0 ~]# cd /home/work/_src/kubernetes/server/bin/
[root@c0 bin]# for N in $(seq 1 3); do scp -r kubelet kubectl kube-proxy c$N:/home/work/_app/k8s/kubernetes/bin/; done;
kubelet                       100%  108MB 120.3MB/s   00:00
kubectl                       100%   37MB 120.0MB/s   00:00
kube-proxy                    100%   33MB 113.7MB/s   00:00
kubelet                       100%  108MB 108.0MB/s   00:00
kubectl                       100%   37MB 108.6MB/s   00:00
kube-proxy                    100%   33MB 106.1MB/s   00:00
kubelet                       100%  108MB 117.8MB/s   00:00
kubectl                       100%   37MB 116.6MB/s   00:00
kube-proxy                    100%   33MB 119.0MB/s   00:00
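
  As a quick sanity check that every node received the intended release of the binaries, something like this can be run:

[root@c0 bin]# for N in $(seq 0 3); do ssh c$N /home/work/_app/k8s/kubernetes/bin/kubelet --version; done;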

  

6.2.2. Deploy the Apiserver

  Create the TLS Bootstrapping Token:

[root@c0 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
4470210dbf9d9c57f8543bce4683c3ce

The random token generated here is 4470210dbf9d9c57f8543bce4683c3ce; write it down, as it will be needed later.

  Create and save the file /home/work/_app/k8s/kubernetes/cfg/token-auth-file with the following content:

[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/token-auth-file
4470210dbf9d9c57f8543bce4683c3ce,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

  

6.2.2.1. Create the Apiserver Configuration File

  Create and save the file /home/work/_app/k8s/kubernetes/cfg/kube-apiserver with the following content:

[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379 \
--bind-address=10.0.0.100 \
--secure-port=6443 \
--advertise-address=10.0.0.100 \
--allow-privileged=true \
--service-cluster-ip-range=10.244.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/home/work/_app/k8s/kubernetes/cfg/token-auth-file \
--service-node-port-range=30000-50000 \
--tls-cert-file=/home/work/_app/k8s/kubernetes/ssl/kube-apiserver-server.pem  \
--tls-private-key-file=/home/work/_app/k8s/kubernetes/ssl/kube-apiserver-server-key.pem \
--client-ca-file=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/home/work/_app/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/home/work/_app/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/home/work/_app/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/home/work/_app/k8s/etcd/ssl/server-key.pem"

  

6.2.2.2. Create the Apiserver systemd Unit File

  Create and save the file /usr/lib/systemd/system/kube-apiserver.service with the following content:

[root@c0 ~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

  

6.2.2.3. Start the Kube Apiserver Service

[root@c0 ~]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

  

6.2.2.4. Check Whether the Apiserver Service Is Running

[root@c0 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-19 22:28:03 CST; 19s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4708 (kube-apiserver)
    Tasks: 10
   Memory: 370.9M
   CGroup: /system.slice/kube-apiserver.service
           └─4708 /home/work/_app/k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379 --bind-address=10.0.0.100 ...

Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.510271    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.032168ms) 200 [kube-api...10.0.0.100:59408]
Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.513149    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.1516...10.0.0.100:59408]
Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.515603    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.88011ms) 200 ...10.0.0.100:59408]
Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.518209    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.980109ms) 200 [k...10.0.0.100:59408]
Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.520474    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.890751ms) 200 [kub...10.0.0.100:59408]
Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.522918    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.80026ms) 200 [kube-...10.0.0.100:59408]
Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.525952    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (2.148966ms) 200 [k...10.0.0.100:59408]
Feb 19 22:28:20 c0 kube-apiserver[4708]: I0219 22:28:20.403713    4708 wrap.go:47] GET /api/v1/namespaces/default: (2.463889ms) 200 [kube-apiserver/v1.13.0 (linux/amd64) kubernetes/ddf47ac 10.0.0.100:59408]
Feb 19 22:28:20 c0 kube-apiserver[4708]: I0219 22:28:20.406610    4708 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (2.080766ms) 200 [kube-apiserver/v1.13.0 (linux/amd64) kubernetes/ddf47ac 10.0.0.100:59408]
Feb 19 22:28:20 c0 kube-apiserver[4708]: I0219 22:28:20.417019    4708 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.134397ms) 200 [kube-apiserver/v1.13.0 (linux/amd64) kubernetes/ddf47ac 10.0.0.100:59408]

  

6.2.3. Deploy the Scheduler

  Create and save the file /home/work/_app/k8s/kubernetes/cfg/kube-scheduler with the following content:

[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

  

6.2.3.1. Create the Kube-scheduler systemd Unit File

  Create and save the file /usr/lib/systemd/system/kube-scheduler.service with the following content:

[root@c0 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

  

6.2.3.2. Start the Kube-scheduler Service

[root@c0 ~]# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

  

6.2.3.3. Check Whether the Kube-scheduler Service Is Running

[root@c0 ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-19 22:29:07 CST; 7s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4839 (kube-scheduler)
    Tasks: 9
   Memory: 47.0M
   CGroup: /system.slice/kube-scheduler.service
           └─4839 /home/work/_app/k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.679756    4839 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.779894    4839 shared_informer.go:123] caches populated
Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.779928    4839 controller_utils.go:1034] Caches are synced for scheduler controller
Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.779990    4839 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.784100    4839 leaderelection.go:289] lock is held by c0_ca2f3489-3316-11e9-87c6-001c42508c6a and has not yet expired
Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.784135    4839 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
Feb 19 22:29:12 c0 kube-scheduler[4839]: I0219 22:29:12.829896    4839 leaderelection.go:289] lock is held by c0_ca2f3489-3316-11e9-87c6-001c42508c6a and has not yet expired
Feb 19 22:29:12 c0 kube-scheduler[4839]: I0219 22:29:12.829921    4839 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
Feb 19 22:29:14 c0 kube-scheduler[4839]: I0219 22:29:14.941554    4839 leaderelection.go:289] lock is held by c0_ca2f3489-3316-11e9-87c6-001c42508c6a and has not yet expired
Feb 19 22:29:14 c0 kube-scheduler[4839]: I0219 22:29:14.941573    4839 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler

  

6.2.4. Deploy the Kube-Controller-Manager Component

6.2.4.1. Create the kube-controller-manager Configuration File

  Create and save the file /home/work/_app/k8s/kubernetes/cfg/kube-controller-manager with the following content:

[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.244.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/home/work/_app/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/home/work/_app/k8s/kubernetes/ssl/ca-key.pem"

  

6.2.4.2. Create the kube-controller-manager systemd Unit File

  Create and save the file /usr/lib/systemd/system/kube-controller-manager.service with the following content:

[root@c0 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

  

6.2.4.3. Start the kube-controller-manager Service

[root@c0 ~]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

  

6.2.4.4. Check Whether the kube-controller-manager Service Is Running

[root@c0 ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-19 22:29:40 CST; 12s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4933 (kube-controller)
    Tasks: 7
   Memory: 106.7M
   CGroup: /system.slice/kube-controller-manager.service
           └─4933 /home/work/_app/k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.244.0.0/16 --cluster-name=kubernet...

Feb 19 22:29:41 c0 kube-controller-manager[4933]: I0219 22:29:41.276841    4933 deprecated_insecure_serving.go:51] Serving insecurely on 127.0.0.1:10252
Feb 19 22:29:41 c0 kube-controller-manager[4933]: I0219 22:29:41.278183    4933 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...
Feb 19 22:29:41 c0 kube-controller-manager[4933]: I0219 22:29:41.301326    4933 leaderelection.go:289] lock is held by c0_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired
Feb 19 22:29:41 c0 kube-controller-manager[4933]: I0219 22:29:41.301451    4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
Feb 19 22:29:44 c0 kube-controller-manager[4933]: I0219 22:29:44.679518    4933 leaderelection.go:289] lock is held by c0_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired
Feb 19 22:29:44 c0 kube-controller-manager[4933]: I0219 22:29:44.679550    4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
Feb 19 22:29:47 c0 kube-controller-manager[4933]: I0219 22:29:47.078743    4933 leaderelection.go:289] lock is held by c0_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired
Feb 19 22:29:47 c0 kube-controller-manager[4933]: I0219 22:29:47.078762    4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
Feb 19 22:29:49 c0 kube-controller-manager[4933]: I0219 22:29:49.529247    4933 leaderelection.go:289] lock is held by c0_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired
Feb 19 22:29:49 c0 kube-controller-manager[4933]: I0219 22:29:49.529266    4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager

  

6.2.5. Verify the API Server Service

  Add kubectl to the $PATH variable:

[root@c0 ~]# echo "PATH=/home/work/_app/k8s/kubernetes/bin:$PATH:$HOME/bin" >> /etc/profile
[root@c0 ~]# source /etc/profile

  
  Check the component statuses and nodes:

[root@c0 ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-3               Healthy   {"health":"true"}

  

6.2.6. Deploy the Kubelet

6.2.6.1. Create the bootstrap.kubeconfig and kube-proxy.kubeconfig Configuration Files

  Create and save the file /home/work/_app/k8s/kubernetes/cfg/env.sh with the following content:

[root@c0 cfg]# pwd
/home/work/_app/k8s/kubernetes/cfg
[root@c0 cfg]# cat env.sh
#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=4470210dbf9d9c57f8543bce4683c3ce
KUBE_APISERVER="https://10.0.0.100:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
 
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
 
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
 
# Switch to the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
 
#----------------------
 
# Create the kube-proxy kubeconfig file
 
kubectl config set-cluster kubernetes \
  --certificate-authority=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-credentials kube-proxy \
  --client-certificate=/home/work/_app/k8s/kubernetes/ssl/kube-proxy-client.pem \
  --client-key=/home/work/_app/k8s/kubernetes/ssl/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

BOOTSTRAP_TOKEN uses the value 4470210dbf9d9c57f8543bce4683c3ce generated in the "Create the TLS Bootstrapping Token" step.

  
  Run the script:

[root@c0 cfg]# pwd
/home/work/_app/k8s/kubernetes/cfg
[root@c0 cfg]# sh env.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" modified.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@c0 cfg]# ls
bootstrap.kubeconfig  env.sh  flanneld  kube-apiserver  kube-controller-manager  kube-proxy.kubeconfig  kube-scheduler  token.csv

  
  Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to the other nodes:

[root@c0 cfg]# for N in $(seq 1 3); do scp -r kube-proxy.kubeconfig bootstrap.kubeconfig c$N:/home/work/_app/k8s/kubernetes/cfg/; done;
kube-proxy.kubeconfig                                100% 6294    10.2MB/s   00:00
bootstrap.kubeconfig                                 100% 2176     4.2MB/s   00:00
kube-proxy.kubeconfig                                100% 6294    10.8MB/s   00:00
bootstrap.kubeconfig                                 100% 2176     3.3MB/s   00:00
kube-proxy.kubeconfig                                100% 6294     9.6MB/s   00:00
bootstrap.kubeconfig                                 100% 2176     2.3MB/s   00:00

  

6.2.6.2. Create the kubelet Configuration Files

  Create and save the parameter configuration file /home/work/_app/k8s/kubernetes/cfg/kubelet.config with the following content:

[root@c0 cfg]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.100
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

  
  Create and save the startup parameter file /home/work/_app/k8s/kubernetes/cfg/kubelet with the following content:

[root@c0 cfg]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.100 \
--kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/home/work/_app/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/home/work/_app/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/home/work/_app/k8s/kubernetes/ssl_cert \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

When kubelet starts, if the file specified by --kubeconfig does not exist, the bootstrap kubeconfig specified by --bootstrap-kubeconfig is used to request a client certificate from the API server.
Once the kubelet's certificate request has been approved, the generated key and certificate are placed in the directory specified by --cert-dir.
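
  As an optional sanity check, once the CSR has been approved in section 6.2.7 the kubelet should have written its client certificate and kubeconfig to the paths configured above:

# These files only appear after the node's CSR has been approved.
ls -l /home/work/_app/k8s/kubernetes/ssl_cert/
ls -l /home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig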

  

6.2.6.3、Bind the kubelet-bootstrap user to the system cluster role

[root@c0 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
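
  To double-check the binding (optional), it can be described with kubectl:

# Optional verification of the cluster role binding created above.
kubectl describe clusterrolebinding kubelet-bootstrap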

  

6.2.6.4、Create the kubelet systemd unit file

  Create /usr/lib/systemd/system/kubelet.service and save it with the following content:

[root@c0 cfg]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/home/work/_app/k8s/kubernetes/cfg/kubelet
ExecStart=/home/work/_app/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target

  

6.2.6.5、Start the kubelet service

[root@c0 cfg]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

  

6.2.6.6、Check the kubelet service status

[root@c0 cfg]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-19 22:31:23 CST; 14s ago
 Main PID: 5137 (kubelet)
    Tasks: 13
   Memory: 128.7M
   CGroup: /system.slice/kubelet.service
           └─5137 /home/work/_app/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.0.0.100 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/home/work/_app/k8s/kub...

Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.488086    5137 eviction_manager.go:226] eviction manager: synchronize housekeeping
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502001    5137 helpers.go:836] eviction manager: observations: signal=imagefs.inodesFree, available: 107287687, capacity: 107374144, time: 2019-02-19 22:31:34.48876...T m=+10.738964114
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502103    5137 helpers.go:836] eviction manager: observations: signal=pid.available, available: 32554, capacity: 32Ki, time: 2019-02-19 22:31:34.50073593 +0800 CST m=+10.750931769
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502132    5137 helpers.go:836] eviction manager: observations: signal=memory.available, available: 2179016Ki, capacity: 2819280Ki, time: 2019-02-19 22:31:34.4887683...T m=+10.738964114
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502143    5137 helpers.go:836] eviction manager: observations: signal=allocatableMemory.available, available: 2819280Ki, capacity: 2819280Ki, time: 2019-02-19 22:31...T m=+10.751961068
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502151    5137 helpers.go:836] eviction manager: observations: signal=nodefs.available, available: 1068393320Ki, capacity: 1048064Mi, time: 2019-02-19 22:31:34.4887...T m=+10.738964114
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502161    5137 helpers.go:836] eviction manager: observations: signal=nodefs.inodesFree, available: 107287687, capacity: 107374144, time: 2019-02-19 22:31:34.488768...T m=+10.738964114
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502170    5137 helpers.go:836] eviction manager: observations: signal=imagefs.available, available: 1068393320Ki, capacity: 1048064Mi, time: 2019-02-19 22:31:34.488...T m=+10.738964114
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502191    5137 eviction_manager.go:317] eviction manager: no resources are starved
Feb 19 22:31:36 c0 kubelet[5137]: I0219 22:31:36.104200    5137 kubelet.go:1995] SyncLoop (housekeeping)

  

6.2.7、Approve the Master joining the cluster

  A CSR can be approved manually, outside the built-in approval flow, to let a node join the cluster.
  An administrator can approve certificate requests by hand with kubectl.
  Use kubectl get csr to list the CSR requests and kubectl describe csr <name> to show the details of a single request.
  Use kubectl certificate approve <name> or kubectl certificate deny <name> to approve or deny a CSR request (a bulk-approval sketch follows below).
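
  When several nodes bootstrap at the same time, the pending requests can also be approved in one go. A minimal sketch, to be used only when every pending request is expected and trusted:

# Approve all CSRs that are still in the Pending state.
kubectl get csr --no-headers | awk '$NF == "Pending" {print $1}' | xargs -r kubectl certificate approve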
  

6.2.7.1、View the CSR list

[root@c0 cfg]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   14m   kubelet-bootstrap   Pending

  

6.2.7.2、Approve joining the cluster

[root@c0 cfg]# kubectl certificate approve node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k
certificatesigningrequest.certificates.k8s.io/node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k approved

  

6.2.7.3、Verify the Master has joined the cluster

  Check the CSR list again

[root@c0 cfg]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   15m   kubelet-bootstrap   Approved,Issued

  

6.3、Deploy the kube-proxy component

  kube-proxy runs on every Node. It watches the apiserver for changes to service and Endpoint objects and creates routing rules to load-balance traffic across services. The following uses c0 as an example.
  

6.3.1、Create the kube-proxy parameter file

  Create and save the /home/work/_app/k8s/kubernetes/cfg/kube-proxy configuration file with the following content:

[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.100 \
--cluster-cidr=10.244.0.0/16 \
--kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kube-proxy.kubeconfig"

--hostname-override must be changed to the node's own IP on each node.
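
  A small, hypothetical helper for pushing this file to the other nodes while rewriting --hostname-override, assuming the 10.0.0.101-103 addressing used in this article; you can of course also edit the file by hand on each node:

# Copy the kube-proxy config from c0 to c1..c3, substituting each node's own IP.
for N in $(seq 1 3); do
  sed "s/--hostname-override=10.0.0.100/--hostname-override=10.0.0.10${N}/" \
    /home/work/_app/k8s/kubernetes/cfg/kube-proxy \
    | ssh c${N} "cat > /home/work/_app/k8s/kubernetes/cfg/kube-proxy"
done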

  

6.3.2、Create the kube-proxy systemd unit file

  Create the /usr/lib/systemd/system/kube-proxy.service file and save it with the following content:

[root@c0 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
 
[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-proxy
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target 

  

6.3.3、Start the kube-proxy service

[root@c0 ~]# systemctl daemon-reload && systemctl enable kube-proxy &&  systemctl start kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

  

6.3.4、Check the kube-proxy service status

[root@c0 cfg]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-18 06:08:51 CST; 3h 49min ago
 Main PID: 12660 (kube-proxy)
    Tasks: 0
   Memory: 1.9M
   CGroup: /system.slice/kube-proxy.service
           ‣ 12660 /home/work/_app/k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=10.0.0.100 --cluster-cidr=10.244.0.0/16 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/...

Feb 18 09:58:38 c0 kube-proxy[12660]: I0218 09:58:38.205387   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:38 c0 kube-proxy[12660]: I0218 09:58:38.250931   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:40 c0 kube-proxy[12660]: I0218 09:58:40.249487   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:40 c0 kube-proxy[12660]: I0218 09:58:40.290336   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:42 c0 kube-proxy[12660]: I0218 09:58:42.264320   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:42 c0 kube-proxy[12660]: I0218 09:58:42.318954   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:44 c0 kube-proxy[12660]: I0218 09:58:44.273290   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:44 c0 kube-proxy[12660]: I0218 09:58:44.359236   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:46 c0 kube-proxy[12660]: I0218 09:58:46.287980   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:46 c0 kube-proxy[12660]: I0218 09:58:46.377475   12660 config.go:141] Calling handler.OnEndpointsUpdate
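
  Besides the systemd status, the load-balancing rules that kube-proxy programs can be inspected directly. This is only an optional sanity check, assuming iptables proxy mode (the default when --proxy-mode is not set, as in the config above):

# kube-proxy creates NAT chains and rules prefixed with KUBE-.
iptables-save -t nat | grep 'KUBE-' | head -n 20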

  

6.4、Verify the Server components

  Check the Master status

[root@c0 cfg]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/controller-manager   Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-3               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}

NAME              STATUS   ROLES    AGE   VERSION
node/10.0.0.100   Ready    <none>   51m   v1.13.0
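
  For more detail on the newly registered node (addresses, capacity, conditions), kubectl describe can be used; this step is optional:

kubectl describe node 10.0.0.100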

  

6.5、Join Kubernetes Nodes to the cluster

  A Kubernetes Node runs the following components:

  • Proxy:
      This module implements service discovery and reverse proxying in Kubernetes. kube-proxy supports TCP and UDP connection forwarding and, by default, distributes client traffic across the backend pods of a service using a Round Robin algorithm. For service discovery, kube-proxy uses etcd's watch mechanism to track changes to the cluster's service and endpoint objects and maintains a service-to-endpoint mapping, so that changes to backend pod IPs do not affect clients. kube-proxy also supports session affinity.
  • Kubelet
      The kubelet is the Master's agent on each Node and the most important module on a Node. It maintains and manages all containers on that Node, except containers that were not created through Kubernetes. In essence, it is responsible for making each Pod's actual state match its desired state.
    On startup, the kubelet registers the node with kube-apiserver, and its built-in cadvisor collects and monitors the node's resource usage. For security, only the secure HTTPS port is opened; requests (e.g. from the apiserver or heapster) are authenticated and authorized, and unauthorized access is rejected.
  • Flannel
      Without a flanneld network, pods on different Nodes cannot communicate; only pods on the same Node can. Flannel reads the network configuration from etcd, allocates a subnet for the node, and registers the subnet information back in etcd.
  • ETCD
      etcd is a highly available key-value store. Kubernetes uses it to store the state of every resource, which is what backs the RESTful API.

  

6.5.1、Create the kubelet configuration file

  This must be done on every Node; c1 is used as the example below.
  Create and save the /home/work/_app/k8s/kubernetes/cfg/kubelet.config parameter file with the following content:

[root@c1 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.101
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

address must be changed to the node's own IP on each node.

  
  Create and save the /home/work/_app/k8s/kubernetes/cfg/kubelet startup parameter file with the following content:

[root@c1 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.101 \
--kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/home/work/_app/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/home/work/_app/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/home/work/_app/k8s/kubernetes/ssl_cert \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

  

6.5.2、Create the kubelet systemd unit file

  Create /usr/lib/systemd/system/kubelet.service and save it with the following content:

[root@c1 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/home/work/_app/k8s/kubernetes/cfg/kubelet
ExecStart=/home/work/_app/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target

  

6.5.3、Start the kubelet service

[root@c1 ~]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

  

6.5.4、Check the kubelet service status

[root@c1 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-18 06:27:54 CST; 6s ago
 Main PID: 19123 (kubelet)
    Tasks: 12
   Memory: 18.3M
   CGroup: /system.slice/kubelet.service
           └─19123 /home/work/_app/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.0.0.101 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-k...

Feb 18 06:27:54 c1 kubelet[19123]: I0218 06:27:54.784286   19123 mount_linux.go:179] Detected OS with systemd
Feb 18 06:27:54 c1 kubelet[19123]: I0218 06:27:54.784416   19123 server.go:407] Version: v1.13.0

  

6.5.5、Approve the Node joining the cluster

  Check the CSR list; the new node has a Pending request

[root@c0 cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   84m     kubelet-bootstrap   Approved,Issued
node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA   2m45s   kubelet-bootstrap   Pending

  
  The following command shows the details of the request; it was sent from c1, whose IP address is 10.0.0.101

[root@c0 cfg]# kubectl describe csr node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA
Name:               node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Mon, 18 Feb 2019 06:26:08 +0800
Requesting User:    kubelet-bootstrap
Status:             Pending
Subject:
         Common Name:    system:node:10.0.0.101
         Serial Number:
         Organization:   system:nodes
Events:  <none>

  
  Approve the join request

[root@c0 cfg]# kubectl certificate approve node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA
certificatesigningrequest.certificates.k8s.io/node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA approved

  
  Check the CSR list again; the node's join request has been approved

[root@c0 cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   88m     kubelet-bootstrap   Approved,Issued
node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA   6m57s   kubelet-bootstrap   Approved,Issued

  

6.5.6、Remove a Node from the cluster

  Before removing a node, first drain the pods running on it.
  Then run the following commands to delete the node

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

  
  For the deleted node to be able to rejoin cleanly, sending a new CSR request to the cluster when kubelet starts again, the cached CSR certificate data on the deleted node must also be removed

[root@c1 ~]# ls /home/work/_app/k8s/kubernetes/ssl_cert/
kubelet-client-2019-02-19-23-20-05.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key
[root@c1 ~]# rm -rf /home/work/_app/k8s/kubernetes/ssl_cert/*

  
  After the cached CSR data has been deleted, restart kubelet on that node and the Master will receive a new CSR request.
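
  A minimal sketch of that re-registration, assuming the removed node was c1:

# On the removed node, after clearing the cached certificates:
systemctl restart kubelet

# Back on the Master, a new Pending CSR for the node should appear:
kubectl get csr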
  

6.5.7、Label the Nodes

  
  Check the status of all nodes

[root@c0 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
10.0.0.100   Ready    <none>   96m   v1.13.0
10.0.0.101   Ready    <none>   24m   v1.13.0

  
  Label c0 as Master

[root@c0 ~]# kubectl label node 10.0.0.100 node-role.kubernetes.io/master='master'
node/10.0.0.100 labeled

  
  Label c1 as Node

[root@c0 ~]# kubectl label node 10.0.0.101 node-role.kubernetes.io/master='node-c1'
node/10.0.0.101 labeled
[root@c0 ~]# kubectl label node 10.0.0.101 node-role.kubernetes.io/node='node-c1'
node/10.0.0.101 labeled
[root@c0 ~]# kubectl get node
NAME         STATUS   ROLES         AGE    VERSION
10.0.0.100   Ready    master        106m   v1.13.0
10.0.0.101   Ready    master,node   33m    v1.13.0

  
  Remove the master label from c1

[root@c0 ~]# kubectl label node 10.0.0.101 node-role.kubernetes.io/master-
node/10.0.0.101 labeled
[root@c0 cfg]# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
10.0.0.100   Ready    master   108m   v1.13.0
10.0.0.101   Ready    node     35m    v1.13.0

  

7、References

  Linux7/Centos7 Selinux介紹
  Kubernetes網絡原理及方案
  Installing a Kubernetes Cluster on CentOS 7
  How to install Kubernetes(k8) in RHEL or Centos in just 7 steps
  docker-kubernetes-tls-guide
  kubernetes1.13.1+etcd3.3.10+flanneld0.10集羣部署
  

8、FAQ

How do I generate a new NIC UUID for a virtual machine?

  For example, I installed c1 on Parallels and cloned it as c2. The IP can be changed as described earlier in this article; if the UUID should be changed as well, the following command generates a new UUID for the NIC:

[root@c2 ~]# uuidgen eth0
6ea1a665-0126-456c-80c7-1f69f32e83b7
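
  If the new UUID should also be written into the clone's network configuration, one possible approach (assuming the interface file is ifcfg-eth0, as elsewhere in this article) is:

# Replace the UUID line in ifcfg-eth0 with a freshly generated one, then restart networking.
sed -i "s/^UUID=.*/UUID=$(uuidgen)/" /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart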

  


Author: 迦壹
Original post: Centos7 二進制安裝 Kubernetes 1.13
Reprint notice: This article may be reposted, but the original source, author information and this copyright notice must be clearly indicated with a hyperlink. Thank you!
  
If you found this article helpful, you can donate via the addresses below. Thank you!

比特幣地址:1KdgydfKMcFVpicj5w4vyn3T88dwjBst6Y
以太坊地址:0xbB0a92d634D7b9Ac69079ed0e521CC2e0a97c420

