Installing Kubernetes 1.13.3 on CentOS 7 with kubeadm


What is kubeadm?

  Most engineers who work with Kubernetes will use kubeadm. It is an important tool for managing the cluster lifecycle, from creation through configuration to upgrades; kubeadm handles bootstrapping a production cluster on existing hardware and configures the core Kubernetes components following best practices, so that new nodes can join through a secure and simple flow and upgrades stay easy.

  The Kubernetes document Creating a single master cluster with kubeadm notes that the main features of kubeadm are already in Beta and were expected to graduate to General Availability (GA) during 2018, which means kubeadm is getting ever closer to being usable in production.
  

What is the Container Storage Interface (CSI)?

  The Container Storage Interface was first introduced as an alpha feature in version 1.9, went to beta in 1.10, and has now finally reached GA (general availability). With CSI, the Kubernetes volume layer becomes truly extensible: third-party storage vendors can write code that interoperates with Kubernetes without touching any Kubernetes core code. The CSI specification itself has likewise reached 1.0.
  

What is CoreDNS?

  In 1.11, CoreDNS was announced as generally available for DNS-based service discovery. In 1.13, CoreDNS replaces kube-dns as the default DNS server for Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides a backward-compatible yet extensible integration with Kubernetes. It has fewer moving parts than the previous DNS server, since it is a single executable and a single process, and it supports flexible use cases through custom DNS entries. It is also written in Go, which makes it memory-safe.
  

1. Environment Preparation

  The examples in this article use four machines, with the following hostnames and IP addresses

IP address Hostname
10.0.0.100 c0(master)
10.0.0.101 c1
10.0.0.102 c2
10.0.0.103 c3

  
  The /etc/hosts file is the same on every machine; c0 is shown as an example:

[root@c0 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.100 c0
10.0.0.101 c1
10.0.0.102 c2
10.0.0.103 c3

  

1.1 Network Configuration

  Perform this on every machine; c0 is used as the example below

[root@c0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=6d8d9ad6-37b5-431a-ab16-47d0aa00d01f
DEVICE=eth0
ONBOOT=yes
IPADDR0=10.0.0.100
PREFIX0=24
GATEWAY0=10.0.0.1
DNS1=10.0.0.1
DNS2=8.8.8.8

  
  Restart the network:

[root@c0 ~]# service network restart

  
  Switch the yum source to the Aliyun mirror

[root@c0 ~]# yum install -y wget
[root@c0 ~]# cd /etc/yum.repos.d/
[root@c0 yum.repos.d]# mv CentOS-Base.repo CentOS-Base.repo.bak
[root@c0 yum.repos.d]# wget http://mirrors.aliyun.com/repo/Centos-7.repo
[root@c0 yum.repos.d]# wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
[root@c0 yum.repos.d]# yum clean all
[root@c0 yum.repos.d]# yum makecache

  
  Install the network tools and basic utility packages

[root@c0 ~]# yum install net-tools checkpolicy gcc dkms foomatic openssh-server bash-completion -y

  

1.2 Set the hostname

  Set the hostname on each machine in turn; c0 is used as the example below

[root@c0 ~]# hostnamectl --static set-hostname c0
[root@c0 ~]# hostnamectl status
   Static hostname: c0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 04c3f6d56e788345859875d9f49bd4bd
           Boot ID: ba02919abe4245aba673aaf5f778ad10
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.el7.x86_64
      Architecture: x86-64

  

1.3 Configure passwordless SSH login

  Generate a key pair on each machine separately; c0 is used as the example

[root@c0 ~]# ssh-keygen
# Press Enter at every prompt until it finishes

  
  Copy the key generated by ssh-keygen to all four machines (including c0 itself); the following is run from c0

[root@c0 ~]# ssh-copy-id c0
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c0 (10.0.0.100)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c0's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c0'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c1 (10.0.0.101)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c1'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c2 (10.0.0.102)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c2'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c3 (10.0.0.103)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c3's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c3'"
and check to make sure that only the key(s) you wanted were added.

  
  Verify that the keys were set up correctly

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N hostname; done;
c0
c1
c2
c3

  

1.4 Disable the firewall

  
  Run the following command on every machine; c0 is used as the example:

[root@c0 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

  

1.5 Disable swap

  Run the following command on every machine; c0 is used as the example

[root@c0 ~]# swapoff -a

You can check the swap status with free -h before and after; after disabling it, the Swap total should be 0.
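
  As a quick check, a hedged one-liner (reusing the passwordless SSH set up in section 1.3) can turn swap off on all four machines and confirm that the Swap totals reported by free -h are 0:

# swapoff on every node, then show the Swap line of free -h (all values should be 0)
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N "swapoff -a && free -h | grep -i swap"; done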

  
  Edit /etc/fstab and comment out the last entry, /dev/mapper/centos-swap swap, so that swap stays off after a reboot; c0 is used as the example

[root@c0 ~]# sed -i "s/\/dev\/mapper\/centos-swap/# \/dev\/mapper\/centos-swap/" /etc/fstab
[root@c0 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Jan 28 11:49:11 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=93572ab6-90da-4cfe-83a4-93be7ad8597c /boot                   xfs     defaults        0 0
# /dev/mapper/centos-swap swap                    swap    defaults        0 0

  

1.6 Disable SELinux

  Disable SELinux on every machine; c0 is used as the example

[root@c0 ~]# setenforce 0
setenforce: SELinux is disabled
[root@c0 ~]# sed -i "s/SELINUX=enforcing/SELINUX=permissive/" /etc/selinux/config
[root@c0 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

SELinux is Security-Enhanced Linux. The setenforce 0 and sed ... commands above put SELinux into permissive mode (effectively disabling it). This is required so that containers can access the host filesystem, which the pod network needs; you have to do this until SELinux support is improved in the kubelet.
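
  To double-check the change on every machine at once, a small hedged verification (the loop again assumes the SSH keys from section 1.3):

# getenforce should report Permissive (or Disabled), and the config file should no longer say enforcing
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N "getenforce; grep ^SELINUX= /etc/selinux/config"; done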

  

1.7 Configure iptables

  Perform this on every machine; c0 is used as the example

[root@c0 ~]# cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@c0 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...

Some RHEL/CentOS 7 users have seen network requests being routed incorrectly because iptables was bypassed. You must make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration.
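
  One hedged troubleshooting note: the net.bridge.* keys only exist while the br_netfilter kernel module is loaded, so if sysctl --system does not print the two values from k8s.conf, load the module and read the keys again:

# load the bridge netfilter module, then re-read the two keys
[root@c0 ~]# modprobe br_netfilter
[root@c0 ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables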

  

1.8 Install NTP

  Install the NTP time-synchronization tool on every machine and start it

[root@c0 ~]# yum install ntp -y

  
  Enable NTP to start on boot and start it right away

[root@c0 ~]# systemctl enable ntpd && systemctl start ntpd

  
  Check the time on each machine in turn:

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N date; done;
Sat Feb  9 18:11:48 CST 2019
Sat Feb  9 18:11:48 CST 2019
Sat Feb  9 18:11:49 CST 2019
Sat Feb  9 18:11:49 CST 2019

  

1.9 Upgrade the kernel

  The stock 3.10 kernel lacks the ip_vs_fo.ko module, which prevents kube-proxy from enabling IPVS mode. ip_vs_fo.ko first appeared in kernel 3.19, and that kernel version is not available in the common RPM repositories of the Red Hat family distributions.

  Perform this on every machine; c0 is used as the example

[root@c0 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
[root@c0 ~]# yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y

  
  After rebooting, manually select the new kernel in the boot menu, then run the following command to check the state of the new kernel:

[root@c0 ~]# hostnamectl
   Static hostname: c0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 04c3f6d56e788345859875d9f49bd4bd
           Boot ID: 40a19388698f4907bd233a8cff76f36e
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 4.20.7-1.el7.elrepo.x86_64
      Architecture: x86-64
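
  To confirm the new kernel actually carries the IPVS module that motivated the upgrade, a hedged check across all machines (assuming they have all been rebooted into the elrepo kernel):

# load ip_vs_fo and verify it shows up; on the old 3.10 kernel the modprobe would fail here
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N "uname -r; modprobe ip_vs_fo && lsmod | grep ip_vs_fo"; done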

  

2. Install Docker 18.06.1-ce

2.1 Remove old versions of Docker

  The removal method given in the official documentation

$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

  
  Another way to remove an old Docker is to first list the Docker packages that were installed

[root@c0 ~]# yum list installed | grep docker
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
containerd.io.x86_64            1.2.2-3.el7                    @docker-ce-stable
docker-ce.x86_64                3:18.09.1-3.el7                @docker-ce-stable
docker-ce-cli.x86_64            1:18.09.1-3.el7                @docker-ce-stable

  
  Remove the installed Docker packages

[root@c0 ~]# yum -y remove docker-ce.x86_64 docker-ce-cli.x86_64 containerd.io.x86_64

  
  Remove the Docker images and containers

[root@c0 ~]# rm -rf /var/lib/docker

  

2.2 Set up the repository

  Install the required packages: yum-utils provides the yum-config-manager utility, and device-mapper-persistent-data and lvm2 are needed by the devicemapper storage driver.

  Perform this on every machine; c0 is used as the example

[root@c0 ~]# sudo yum install -y yum-utils device-mapper-persistent-data lvm2
[root@c0 ~]# sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

  

2.3 Install Docker

[root@c0 ~]# sudo yum install docker-ce-18.06.1.ce-3.el7 -y

  

2.4 Start Docker

[root@c0 ~]# systemctl enable docker && systemctl start docker

  

3. Ensure the MAC address and product_uuid are unique on every node

  • You can get the MAC address of a network interface with ip link or ifconfig -a

  • The product_uuid can be checked with sudo cat /sys/class/dmi/id/product_uuid, or with dmidecode -s system-uuid

  Hardware devices generally have unique addresses, but some virtual machines may end up with identical ones. Kubernetes uses these values to uniquely identify the nodes in the cluster. If they are not unique within the cluster, the installation may fail.
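
  A hedged way to compare these values across the four machines in one pass (eth0 matches the interface configured in section 1.1; adjust the name if yours differs):

# print hostname, product_uuid and MAC address for every node side by side
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N "hostname; cat /sys/class/dmi/id/product_uuid; cat /sys/class/net/eth0/address"; done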

  

4. Install Kubernetes 1.13.3

  
  Master node

Protocol Direction Port Range Purpose Used By
TCP Inbound 6443* Kubernetes API server All
TCP Inbound 2379-2380 etcd server client API kube-apiserver, etcd
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 10251 kube-scheduler Self
TCP Inbound 10252 kube-controller-manager Self

  
  Worker node

Protocol Direction Port Range Purpose Used By
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 30000-32767 NodePort Services** All

  

4.1 Install kubeadm, kubelet, and kubectl

  需要在每臺機器上都安裝以下的軟件包:

  • kubeadm: the command used to bootstrap the cluster.
  • kubelet: runs on every node in the cluster and starts pods and containers.
  • kubectl: the command-line tool used to talk to the cluster.

  

4.1.1 Add the kubernetes.repo file using the Aliyun mirror

[root@c0 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

  

4.1.2 Install kubeadm 1.13.3, kubelet 1.13.3, and kubectl 1.13.3

  List the available versions

[root@c0 ~]# yum list --showduplicates | grep 'kubeadm\|kubectl\|kubelet'

  
  Install kubeadm 1.13.3, kubelet 1.13.3, and kubectl 1.13.3

[root@c0 ~]# yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 --disableexcludes=kubernetes

  
  The kubelet cannot be started yet at this point; just enable it to start on boot:

[root@c0 ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
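
  To make sure every machine ended up with the same 1.13.3 tooling, a hedged check reusing the SSH loop from section 1.3:

# each node should report v1.13.3 for kubeadm, kubelet and the kubectl client
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N "kubeadm version -o short; kubelet --version; kubectl version --client --short"; done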

  

4.1.3 Modify the kubelet configuration file

  Which files did the kubelet package install?

[root@c0 ~]# rpm -ql kubelet
/etc/kubernetes/manifests               # manifests directory
/etc/sysconfig/kubelet                  # configuration file
/etc/systemd/system/kubelet.service     # systemd unit file
/usr/bin/kubelet                        # main binary

  
  Modify the kubelet configuration file

[root@c0 ~]# sed -i "s/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS=\"--fail-swap-on=false\"/" /etc/sysconfig/kubelet
[root@c0 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

  

4.2 Initialize the Master node

  If this is the first run, kubeadm init will take a while because the Docker images have to be downloaded first; you can also pull the images in advance with kubeadm config images pull.
  kubeadm init first runs a series of pre-flight checks to make sure the machine is able to run Kubernetes.
  These checks raise warnings and abort the whole initialization when an error is found. kubeadm init then downloads and installs the cluster's control-plane components, which can take a few minutes.
Once the command finishes, the kubelet and the control-plane Docker containers are started automatically
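
  If you want to pre-pull the images, a hedged sketch of the two kubeadm config subcommands mentioned above (the --kubernetes-version flag simply pins them to the release used in this article):

# list the control-plane images kubeadm will use for this release
[root@c0 ~]# kubeadm config images list --kubernetes-version v1.13.3
# pull them before running kubeadm init
[root@c0 ~]# kubeadm config images pull --kubernetes-version v1.13.3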

[root@c0 ~]# kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [c0 localhost] and IPs [10.0.0.100 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [c0 localhost] and IPs [10.0.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [c0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.504487 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "c0" as an annotation
[mark-control-plane] Marking the node c0 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node c0 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: m4f2wo.ich4mi5dj85z24pz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.0.0.100:6443 --token m4f2wo.ich4mi5dj85z24pz --discovery-token-ca-cert-hash sha256:dd7a5193aeabee6fe723984f557d121a074aa4e40cdd3d701741d585a3a2f43c

Keep a copy of the kubeadm join command from the kubeadm init output, because you will need it to add nodes to the cluster.

  
  To let a regular user run kubectl, run the following commands, which are also part of the kubeadm init output:

[root@c0 ~]# mkdir -p $HOME/.kube
[root@c0 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@c0 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

  
  Use docker images to list the images that have been downloaded

[root@c0 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-apiserver            v1.13.3             fe242e556a99        9 days ago          181MB
k8s.gcr.io/kube-controller-manager   v1.13.3             0482f6400933        9 days ago          146MB
k8s.gcr.io/kube-proxy                v1.13.3             98db19758ad4        9 days ago          80.3MB
k8s.gcr.io/kube-scheduler            v1.13.3             3a6f709e97a0        9 days ago          79.6MB
k8s.gcr.io/coredns                   1.2.6               f59dcacceff4        3 months ago        40MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        4 months ago        220MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        13 months ago       742kB

  
  The docker ps command shows the Docker containers that are running

[root@c0 ~]# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
a3807d518520        98db19758ad4           "/usr/local/bin/kube…"   3 minutes ago       Up 3 minutes                            k8s_kube-proxy_kube-proxy-gg5xd_kube-system_81300c8f-2e0b-11e9-acd0-001c42508c6a_0
49af1ad74d31        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-proxy-gg5xd_kube-system_81300c8f-2e0b-11e9-acd0-001c42508c6a_0
8b4a7e0e0e9e        3a6f709e97a0           "kube-scheduler --ad…"   3 minutes ago       Up 3 minutes                            k8s_kube-scheduler_kube-scheduler-c0_kube-system_b734fcc86501dde5579ce80285c0bf0c_0
099c14b0ea76        3cab8e1b9802           "etcd --advertise-cl…"   3 minutes ago       Up 3 minutes                            k8s_etcd_etcd-c0_kube-system_bb7da2b04eb464afdde00da66617b2fc_0
425196638f87        fe242e556a99           "kube-apiserver --au…"   3 minutes ago       Up 3 minutes                            k8s_kube-apiserver_kube-apiserver-c0_kube-system_a6ec524e7fe1ac12a93850d3faff1d19_0
86e53f9cd1b0        0482f6400933           "kube-controller-man…"   3 minutes ago       Up 3 minutes                            k8s_kube-controller-manager_kube-controller-manager-c0_kube-system_844e381a44322ac23d6f33196cc0751c_0
d0c5544ec9c3        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-scheduler-c0_kube-system_b734fcc86501dde5579ce80285c0bf0c_0
31161f991a5f        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-controller-manager-c0_kube-system_844e381a44322ac23d6f33196cc0751c_0
11246ac9c5c4        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-apiserver-c0_kube-system_a6ec524e7fe1ac12a93850d3faff1d19_0
320b61f9d9c4        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_etcd-c0_kube-system_bb7da2b04eb464afdde00da66617b2fc_0

  
  Check the component and node status

[root@c0 ~]# kubectl get cs,node
NAME                                 STATUS    MESSAGE              ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-0               Healthy   {"health": "true"}

NAME      STATUS     ROLES    AGE   VERSION
node/c0   NotReady   master   75m   v1.13.3

At this point the node status is NotReady; after Flannel is deployed it will change to Ready

  

4.2.1 Deploy Flannel

  Create and save the file /home/work/_src/kube-flannel.yml with the following content:

[root@c0 ~]# cat /home/work/_src/kube-flannel.yml
---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

  
  Start the Flannel service

[root@c0 ~]# kubectl apply -f /home/work/_src/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

  
  Check the node status again

[root@c0 ~]# kubectl get cs,node
NAME                                 STATUS    MESSAGE              ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-0               Healthy   {"health": "true"}

NAME      STATUS   ROLES    AGE   VERSION
node/c0   Ready    master   80m   v1.13.3

The STATUS of c0 is now Ready

  

4.3 Join the Node machines to the cluster

  To add new nodes to the cluster, run the following on each node machine:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

  
  If you have forgotten the Master's token, you can look it up on the Master with the following command:

[root@c0 ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
m4f2wo.ich4mi5dj85z24pz   22h       2019-02-12T22:44:01+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

  
  By default a token expires after 24 hours. If the token has expired, you can generate a new one with the following command

kubeadm token create
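
  kubeadm can also print a ready-to-use join command (new token plus CA certificate hash) in one step, which avoids computing the hash by hand as described next; a hedged example:

[root@c0 ~]# kubeadm token create --print-join-command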

  
  To obtain the value for --discovery-token-ca-cert-hash, run the following command on the Master

[root@c0 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
dd7a5193aeabee6fe723984f557d121a074aa4e40cdd3d701741d585a3a2f43c

  
  Next we actually join the Node machines to the Master by running the following command

[root@c1 ~]# kubeadm join 10.0.0.100:6443 --token m4f2wo.ich4mi5dj85z24pz --discovery-token-ca-cert-hash sha256:dd7a5193aeabee6fe723984f557d121a074aa4e40cdd3d701741d585a3a2f43c
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "10.0.0.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.0.100:6443"
[discovery] Requesting info from "https://10.0.0.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.0.0.100:6443"
[discovery] Successfully established connection with API Server "10.0.0.100:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "c1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

  
  Check the result on the Master; after the other nodes have joined:

[root@c0 ~]# kubectl get nodes
NAME   STATUS   ROLES    AGE     VERSION
c0     Ready    master   3h51m   v1.13.3
c1     Ready    <none>   3h48m   v1.13.3
c2     Ready    <none>   2m20s   v1.13.3
c3     Ready    <none>   83s     v1.13.3

  
  Check the running Docker containers on a Node

[root@c1 ~]# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED              STATUS              PORTS               NAMES
15536bfa9396        ff281650a721           "/opt/bin/flanneld -…"   About a minute ago   Up About a minute                       k8s_kube-flannel_kube-flannel-ds-amd64-ql2p2_kube-system_93dcecd5-2e1c-11e9-bd82-001c42508c6a_0
668e864b541f        98db19758ad4           "/usr/local/bin/kube…"   About a minute ago   Up About a minute                       k8s_kube-proxy_kube-proxy-fz9xp_kube-system_93dd3109-2e1c-11e9-bd82-001c42508c6a_0
34465abc64c7        k8s.gcr.io/pause:3.1   "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-flannel-ds-amd64-ql2p2_kube-system_93dcecd5-2e1c-11e9-bd82-001c42508c6a_0
38e8facd94ad        k8s.gcr.io/pause:3.1   "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-proxy-fz9xp_kube-system_93dd3109-2e1c-11e9-bd82-001c42508c6a_0

  
  Finally, check the pod status on the Master node; you can see that kube-flannel and kube-proxy are running on every node

[root@c0 ~]# kubectl get pods -n kube-system -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP           NODE   NOMINATED NODE   READINESS GATES
coredns-86c58d9df4-cl8bd      1/1     Running   0          3h51m   10.172.0.6   c0     <none>           <none>
coredns-86c58d9df4-ctgpv      1/1     Running   0          3h51m   10.172.0.7   c0     <none>           <none>
etcd-c0                       1/1     Running   0          3h50m   10.0.0.100   c0     <none>           <none>
kube-apiserver-c0             1/1     Running   0          3h50m   10.0.0.100   c0     <none>           <none>
kube-controller-manager-c0    1/1     Running   0          3h50m   10.0.0.100   c0     <none>           <none>
kube-flannel-ds-amd64-6m2sx   1/1     Running   0          107s    10.0.0.103   c3     <none>           <none>
kube-flannel-ds-amd64-78vsg   1/1     Running   0          2m44s   10.0.0.102   c2     <none>           <none>
kube-flannel-ds-amd64-8df6l   1/1     Running   0          3h49m   10.0.0.100   c0     <none>           <none>
kube-flannel-ds-amd64-ql2p2   1/1     Running   0          3h49m   10.0.0.101   c1     <none>           <none>
kube-proxy-6wmf7              1/1     Running   0          2m44s   10.0.0.102   c2     <none>           <none>
kube-proxy-7ggm8              1/1     Running   0          107s    10.0.0.103   c3     <none>           <none>
kube-proxy-b247j              1/1     Running   0          3h51m   10.0.0.100   c0     <none>           <none>
kube-proxy-fz9xp              1/1     Running   0          3h49m   10.0.0.101   c1     <none>           <none>
kube-scheduler-c0             1/1     Running   0          3h50m   10.0.0.100   c0     <none>           <none>

  

4.4 Remove a Node from the cluster

  You can remove a Node with the following commands

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

  
  After the Node has been deleted, reset all of the state kubeadm installed on it:

kubeadm reset
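
  Putting the steps together for this cluster, removing node c3 would look roughly like this (a sketch only; it was not actually run in this article):

# on the master: evict the workloads, then remove the node object
[root@c0 ~]# kubectl drain c3 --delete-local-data --force --ignore-daemonsets
[root@c0 ~]# kubectl delete node c3
# on the removed node itself: wipe the state that kubeadm installed
[root@c3 ~]# kubeadm reset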

  

5. Deploy a Whoami service on Kubernetes

  whoami is a simple HTTP Docker service that prints the container ID
  

5.1 Deploy Whoami from the Master

[root@c0 _src]# kubectl create deployment whoami --image=idoall/whoami
deployment.apps/whoami created

  

5.2 Check the Whoami deployment status

  Use the following command to view all deployments

[root@c0 ~]# kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
whoami   1/1     1            1           2m56s

  
  View the deployment details of Whoami

[root@c0 ~]# kubectl describe deployment whoami

  
  View the details of the Whoami pod

[root@c0 ~]# kubectl describe po whoami
Name:               whoami-7c846b698d-8qdrp
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               c1/10.0.0.101
Start Time:         Tue, 12 Feb 2019 00:18:06 +0800
Labels:             app=whoami
                    pod-template-hash=7c846b698d
Annotations:        <none>
Status:             Running
IP:                 10.244.1.2
Controlled By:      ReplicaSet/whoami-7c846b698d
Containers:
  whoami:
    Container ID:   docker://89836e848175edb747bf590acc51c1cf8825640a7c212b6dfd22a77ab805829a
    Image:          idoall/whoami
    Image ID:       docker-pullable://idoall/whoami@sha256:6e79f7182eab032c812f6dafdaf55095409acd64d98a825c8e4b95e173e198f2
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 12 Feb 2019 00:18:18 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xxx7l (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-xxx7l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xxx7l
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  3m59s      default-scheduler  Successfully assigned default/whoami-7c846b698d-8qdrp to c1
  Normal  Pulling    <invalid>  kubelet, c1        pulling image "idoall/whoami"
  Normal  Pulled     <invalid>  kubelet, c1        Successfully pulled image "idoall/whoami"
  Normal  Created    <invalid>  kubelet, c1        Created container
  Normal  Started    <invalid>  kubelet, c1        Started container

  

5.3 Expose a port for Whoami

  Create a service so that the Whoami container can be reached from outside the cluster

[root@c0 ~]# kubectl create service nodeport whoami --tcp=80:80
service/whoami created

The command above creates a publicly reachable service for the Whoami deployment on the hosts.
Because this is a NodePort service, Kubernetes assigns it a port on the hosts from the 30000-32767 range.
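
  The assigned port can also be read straight from the Service object instead of picking it out of the table below; a hedged one-liner:

# prints only the NodePort number assigned to the whoami service
[root@c0 ~]# kubectl get service whoami -o jsonpath='{.spec.ports[0].nodePort}'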

  
  Check the current service status

[root@c0 ~]# kubectl get svc,pods -o wide
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        18m   <none>
service/whoami       NodePort    10.102.196.38   <none>        80:32707/TCP   36s   app=whoami

NAME                          READY   STATUS    RESTARTS   AGE     IP           NODE   NOMINATED NODE   READINESS GATES
pod/whoami-7c846b698d-8qdrp   1/1     Running   0          5m25s   10.244.1.2   c1     <none>           <none>

The output above shows that Whoami is exposed on port 32707 and can be reached at http://10.0.0.101:32707

  

5.4 Test that the Whoami service works

[root@c0 ~]# curl c1:32707
[mshk.top]I'm whoami-7c846b698d-8qdrp

  

5.5 Scale the deployment

[root@c0 ~]# kubectl scale --replicas=5 deployment/whoami
deployment.extensions/whoami scaled

  
  Check the result after scaling; you can see that Whoami pods are now running on c1, c2, and c3

[root@c0 ~]# kubectl get svc,pods -o wide
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        25m     <none>
service/whoami       NodePort    10.102.196.38   <none>        80:32707/TCP   7m26s   app=whoami

NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
pod/whoami-7c846b698d-8qdrp   1/1     Running   0          12m   10.244.1.2   c1     <none>           <none>
pod/whoami-7c846b698d-9rzlh   1/1     Running   0          58s   10.244.2.2   c2     <none>           <none>
pod/whoami-7c846b698d-b6h9p   1/1     Running   0          58s   10.244.1.3   c1     <none>           <none>
pod/whoami-7c846b698d-lphdg   1/1     Running   0          58s   10.244.2.3   c2     <none>           <none>
pod/whoami-7c846b698d-t7nsk   1/1     Running   0          58s   10.244.3.2   c3     <none>           <none>

  
  Test the scaled deployment

[root@c0 ~]# curl c0:32707
[mshk.top]I'm whoami-7c846b698d-8qdrp
[root@c0 ~]# curl c0:32707
[mshk.top]I'm whoami-7c846b698d-8qdrp
[root@c0 ~]# curl c0:32707
[mshk.top]I'm whoami-7c846b698d-t7nsk
[root@c0 ~]# curl c0:32707
[mshk.top]I'm whoami-7c846b698d-8qdrp
[root@c0 ~]# curl c0:32707
[mshk.top]I'm whoami-7c846b698d-lphdg
[root@c0 ~]# curl c0:32707
[mshk.top]I'm whoami-7c846b698d-b6h9p

ClusterIP mode provides a cluster-internal virtual IP (on a different subnet from the Pods) for communication between Pods inside the cluster.
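
  To see that cluster-internal path in action, a hedged test: start a throwaway pod and call the service through its cluster DNS name (whoami.default.svc.cluster.local is resolved by CoreDNS; busybox is just a convenient client image):

# the pod is removed again as soon as the request completes
[root@c0 ~]# kubectl run -it --rm busybox --image=busybox --restart=Never -- wget -qO- http://whoami.default.svc.cluster.local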

  

5.6 Delete the Whoami deployment

[root@c0 ~]# kubectl delete deployment whoami
deployment.extensions "whoami" deleted
[root@c0 ~]# kubectl get deployments
No resources found.

  

6. Deploy the Kubernetes Web UI (Dashboard)

  Starting with version 1.7, the Dashboard no longer has full admin privileges granted by default. All privileges are revoked, and only the minimal privileges needed for the Dashboard to work are granted.
  

6.1 Deploy from a configuration file

  We use the official v1.10.1 configuration file.
  Create and save the file /home/work/_src/kubernetes-dashboard.yaml with the following content:

[root@c0 _src]# cat /home/work/_src/kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

  Create the Dashboard service

[root@c0 _src]# kubectl apply -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

  

6.2 Change the service type to NodePort

  Run the following command to open the service's YAML, replace type: ClusterIP with type: NodePort, and save.

[root@c0 _src]# kubectl -n kube-system edit service kubernetes-dashboard
service/kubernetes-dashboard edited

  
  The YAML being edited looks similar to the following:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
...
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "343478"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard-head
  uid: 8e48f478-993d-11e7-87e0-901b0e532516
spec:
  clusterIP: 10.100.124.90
  externalTrafficPolicy: Cluster
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
#  type: ClusterIP
# changed to NodePort so the service is exposed externally
  type: NodePort
status:
  loadBalancer: {}

In NodePort mode, Kubernetes opens the same port on every Node, and programs outside the Kubernetes cluster can reach the Service via <NodeIP>:NodePort.
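
  If you prefer not to open an interactive editor, the same change can be made non-interactively with kubectl patch; a hedged equivalent of the edit above:

[root@c0 ~]# kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'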

  
  The following command shows that the service has been exposed on port 30779 (HTTPS) on the nodes. You can now open it in a browser at https://10.0.0.100:30779

[root@c0 ~]# kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.101.41.130   <none>        443:30779/TCP   44s

  
  Check the Dashboard status

[root@c0 ~]# kubectl get pods --all-namespaces | grep kubernetes-dashboard
kube-system   kubernetes-dashboard-57df4db6b-6scvx   1/1     Running   0          4m9s

  
  View the Dashboard logs

[root@c0 ~]# kubectl logs -f kubernetes-dashboard-57df4db6b-6scvx -n kube-system
2019/02/11 16:10:15 Starting overwatch
2019/02/11 16:10:15 Using in-cluster config to connect to apiserver
2019/02/11 16:10:15 Using service account token for csrf signing
2019/02/11 16:10:15 Successful initial request to the apiserver, version: v1.13.3
2019/02/11 16:10:15 Generating JWE encryption key
2019/02/11 16:10:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/02/11 16:10:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/02/11 16:10:15 Storing encryption key in a secret
2019/02/11 16:10:15 Creating in-cluster Heapster client
2019/02/11 16:10:15 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/02/11 16:10:15 Auto-generating certificates
2019/02/11 16:10:15 Successfully created certificates
2019/02/11 16:10:15 Serving securely on HTTPS port: 8443
.....

  

6.3 Create a token to access the Dashboard

  We need to create an admin user and grant it an admin role binding. The yaml file below creates the admin binding and gives it administrator privileges, after which Kubernetes can be accessed with its token.
  You can grant the Dashboard service account full admin privileges by creating the following ClusterRoleBinding. Save the content shown below as /home/work/_src/kubernetes-dashboard-admin.yaml
  and deploy it with kubectl create -f /home/work/_src/kubernetes-dashboard-admin.yaml.

[root@c0 ~]# cat /home/work/_src/kubernetes-dashboard-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
[root@c0 _src]# kubectl create -f kubernetes-dashboard-admin.yaml
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

  
  After the binding is created, retrieve the token value stored in the Dashboard ServiceAccount's secret.

[root@c0 _src]# kubectl get secret -o wide --all-namespaces | grep kubernetes-dashboard-token
kube-system   kubernetes-dashboard-token-fbl6l                 kubernetes.io/service-account-token   3      3h20m
[root@c0 _src]# kubectl -n kube-system describe secret kubernetes-dashboard-token-fbl6l
Name:         kubernetes-dashboard-token-fbl6l
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 091b4de4-2e05-11e9-8e1f-001c42508c6a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1mYmw2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA5MWI0ZGU0LTJlMDUtMTFlOS04ZTFmLTAwMWM0MjUwOGM2YSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.LUjBR3xdsB0foba63228UEZiG2DoYmk5s84fQt1FXRkC4PoEMAkVW0hrrCIGeSlwLGFujY4w9SkYyex4shMFZaZgKKvu_lrx2qHXZSmGGq7sqH7h0K-3ZrCgXSc4_eEIz2VyNE6SBV6VxU0F-sYzv6WR6v2Z8uudszD5GULsHsNK3xcSjaoyf468_wD9Es0lzpZUXWAl87o-L-a4SehU47xNQ2cCWQyinQl5NdDaySCprQ4QUn5xYa71JK7ZTwWD3qiNAQWH4F64f5xI1RaG854J-ycjZ3xJcWsVCeMiZrjATGi9Y0jaZu356uQ-AkVWGWZ2ERm_zOfPElZd0SssFg

  The token above is the password used to log in.

  
  You can also fetch the token directly with jsonpath:

[root@c0 _src]# kubectl -n kube-system get secret kubernetes-dashboard-token-fbl6l -o jsonpath={.data.token}|base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1mYmw2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA5MWI0ZGU0LTJlMDUtMTFlOS04ZTFmLTAwMWM0MjUwOGM2YSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.LUjBR3xdsB0foba63228UEZiG2DoYmk5s84fQt1FXRkC4PoEMAkVW0hrrCIGeSlwLGFujY4w9SkYyex4shMFZaZgKKvu_lrx2qHXZSmGGq7sqH7h0K-3ZrCgXSc4_eEIz2VyNE6SBV6VxU0F-sYzv6WR6v2Z8uudszD5GULsHsNK3xcSjaoyf468_wD9Es0lzpZUXWAl87o-L-a4SehU47xNQ2cCWQyinQl5NdDaySCprQ4QUn5xYa71JK7ZTwWD3qiNAQWH4F64f5xI1RaG854J-ycjZ3xJcWsVCeMiZrjATGi9Y0jaZu356uQ-AkVWGWZ2ERm_zOfPElZd0SssFg

  
  Alternatively, the following one-liner looks up the kubernetes-dashboard-token secret and prints its token value directly:

[root@c0 _src]# k8tokenvalue=`kubectl get secret -o wide --all-namespaces | grep kubernetes-dashboard-token | awk '{print $2}'`;kubectl -n kube-system get secret $k8tokenvalue -o jsonpath={.data.token}|base64 -d | awk '{print $1}'
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1mYmw2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA5MWI0ZGU0LTJlMDUtMTFlOS04ZTFmLTAwMWM0MjUwOGM2YSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.LUjBR3xdsB0foba63228UEZiG2DoYmk5s84fQt1FXRkC4PoEMAkVW0hrrCIGeSlwLGFujY4w9SkYyex4shMFZaZgKKvu_lrx2qHXZSmGGq7sqH7h0K-3ZrCgXSc4_eEIz2VyNE6SBV6VxU0F-sYzv6WR6v2Z8uudszD5GULsHsNK3xcSjaoyf468_wD9Es0lzpZUXWAl87o-L-a4SehU47xNQ2cCWQyinQl5NdDaySCprQ4QUn5xYa71JK7ZTwWD3qiNAQWH4F64f5xI1RaG854J-ycjZ3xJcWsVCeMiZrjATGi9Y0jaZu356uQ-AkVWGWZ2ERm_zOfPElZd0SssFg

  

6.4. Access the Kubernetes Web UI (Dashboard) with the Token

  As shown in the figure below, select Token, paste the token obtained above, and click Sign in; after logging in you will see the following interface:

  

6.5. Delete the Kubernetes Web UI (Dashboard) service

[root@c0 ~]# kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" deleted
serviceaccount "kubernetes-dashboard" deleted
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" deleted
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" deleted
deployment.apps "kubernetes-dashboard" deleted
service "kubernetes-dashboard" deleted

  

7. Deploy the Heapster components

  Heapster collects and analyzes cluster resource utilization and monitors the cluster's containers.
  

7.1. Download the official yml files

[root@c0 _src]# pwd
/home/work/_src
[root@c0 _src]# wget https://github.com/kubernetes-retired/heapster/archive/v1.5.3.tar.gz
--2019-02-11 23:46:53--  https://github.com/kubernetes-retired/heapster/archive/v1.5.3.tar.gz
Resolving github.com (github.com)... 192.30.253.113, 192.30.253.112
Connecting to github.com (github.com)|192.30.253.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/kubernetes-retired/heapster/tar.gz/v1.5.3 [following]
--2019-02-11 23:46:55--  https://codeload.github.com/kubernetes-retired/heapster/tar.gz/v1.5.3
Resolving codeload.github.com (codeload.github.com)... 192.30.255.121, 192.30.255.120
Connecting to codeload.github.com (codeload.github.com)|192.30.255.121|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: ‘v1.5.3.tar.gz’

    [         <=>                                                                                                                                                                  ] 4,898,117   2.52MB/s   in 1.9s

2019-02-11 23:47:00 (2.52 MB/s) - ‘v1.5.3.tar.gz’ saved [4898117]
[root@c0 _src]# tar -xvf v1.5.3.tar.gz

  
  Replace the image registry in the manifests with the Aliyun mirror

[root@c0 _src]# cd heapster-1.5.3/deploy/kube-config/influxdb/
[root@c0 influxdb]# sed -i "s/k8s.gcr.io/registry.cn-hangzhou.aliyuncs.com\/google_containers/" grafana.yaml
[root@c0 influxdb]# sed -i "s/k8s.gcr.io/registry.cn-hangzhou.aliyuncs.com\/google_containers/" heapster.yaml
[root@c0 influxdb]# sed -i "s/k8s.gcr.io/registry.cn-hangzhou.aliyuncs.com\/google_containers/" influxdb.yaml

  

7.2. Deploy Heapster

[root@c0 influxdb]# ls
grafana.yaml  heapster.yaml  influxdb.yaml
[root@c0 influxdb]# kubectl create -f .
deployment.extensions/monitoring-grafana created
service/monitoring-grafana created
serviceaccount/heapster created
deployment.extensions/heapster created
service/heapster created
deployment.extensions/monitoring-influxdb created
service/monitoring-influxdb created

  After a few minutes, revisit the Kubernetes Web UI (Dashboard); the Pods view now shows CPU and memory usage.
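
  In the meantime, you can check that the Heapster, Grafana, and InfluxDB pods have come up; a sketch (all of them should eventually reach the Running state):

[root@c0 influxdb]# kubectl -n kube-system get pods | grep -E 'heapster|monitoring'

  Once metrics are flowing, kubectl top node may also start returning data, depending on the kubectl version in use.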
  

8. Common problems

8.1. How to generate a new NIC UUID for a cloned virtual machine?

  For example, I installed c1 on Parallels and cloned it to create c2. The IP address can be changed as described earlier in this article; if the NIC also needs a new UUID, one can be generated with the following command:

[root@c2 ~]# uuidgen eth0
6ea1a665-0126-456c-80c7-1f69f32e83b7
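
  A hedged sketch of applying a freshly generated UUID to the cloned machine's interface configuration (the file path and interface name follow the earlier sections of this article; adjust them to your environment):

[root@c2 ~]# sed -i "s/^UUID=.*/UUID=$(uuidgen)/" /etc/sysconfig/network-scripts/ifcfg-eth0
[root@c2 ~]# service network restart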

  

8.2. kubeadm is a work in progress and still has some rough edges

  The cluster created here has only one Master running on a single etcd database, which means the cluster state is lost if that Master goes down. HA support with multiple etcd servers can be added later.
  A temporary workaround is to back up etcd regularly; its data directory is /var/lib/etcd (a backup sketch follows below).
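
  A hedged sketch of such a backup on a kubeadm cluster, assuming an etcdctl v3 binary is available on the master (kubeadm does not install it); the certificate paths are the kubeadm defaults and the snapshot path is only illustrative:

[root@c0 ~]# ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  snapshot save /home/work/_src/etcd-snapshot-$(date +%F).db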
  

8.3. After kubeadm init, the Master does not take part in workload scheduling

  For security reasons, the cluster does not schedule pods on the master. If you do want pods scheduled on the master, for example in a single-machine Kubernetes cluster used for development, run one of the following commands:

# allow pods to be scheduled on all master nodes
kubectl taint nodes --all node-role.kubernetes.io/master-
# allow pods to be scheduled only on the node c0
kubectl taint nodes c0 node-role.kubernetes.io/master-
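
  If you later want to restore the default behaviour, the master taint can be re-applied; a sketch:

# stop scheduling ordinary pods on c0 again
kubectl taint nodes c0 node-role.kubernetes.io/master=:NoSchedule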

  

9. References

  Creating a single master cluster with kubeadm
  Scheduling Pods on the Master node
  dashboard
  Access control


Author: 迦壹
Original post: Centos7 使用 kubeadm 安裝Kubernetes 1.13.3
Reprint notice: this article may be reposted, but the original source and author information must be credited with a hyperlink and this copyright notice kept. Thank you!
  
If you found this article helpful, you can donate through the addresses below. Thank you!

Bitcoin address: 1KdgydfKMcFVpicj5w4vyn3T88dwjBst6Y
Ethereum address: 0xbB0a92d634D7b9Ac69079ed0e521CC2e0a97c420

