Kubernetes:
Kubernetes was created and open-sourced by Google in 2014; it is the open-source version of Borg, the large-scale container management technology Google had run internally for over a decade.
Kubernetes is an open-source container cluster management system and platform, providing automated deployment, dynamic scaling, and maintenance of containerized applications.
Advantages:
- Automated deployment and replication of containers.
- Seamless rollout of new application features.
- Resource savings through better utilization of hardware.
- Grouping of containers, with load balancing across the group.
Features:
- Portable: supports public, private, hybrid, and multi-cloud environments.
- Extensible: modular, pluggable, mountable, composable.
- Automated: automatic deployment, restart, replication, and scaling/expansion.
Installation:
repodata: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
Host | Network info |
---|---|
k8smaster | ens33:192.168.43.45 |
k8snode1 | ens33:192.168.43.136 |
k8snode2 | ens33:192.168.43.176 |
k8smaster:
[root@localhost ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.43.45 k8smaster
192.168.43.136 k8snode1
192.168.43.176 k8snode2
[root@localhost ~]# hostname k8smaster
[root@localhost ~]# bash
[root@k8smaster ~]# scp /etc/hosts root@k8snode1:/etc/
The authenticity of host 'k8snode1 (192.168.43.136)' can't be established.
ECDSA key fingerprint is 47:04:b9:ed:39:e9:58:4d:2b:b3:39:76:ad:7a:c2:d3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'k8snode1,192.168.43.136' (ECDSA) to the list of known hosts.
hosts 100% 230 0.2KB/s 00:00
[root@k8smaster ~]# scp /etc/hosts root@k8snode2:/etc/
The authenticity of host 'k8snode2 (192.168.43.176)' can't be established.
ECDSA key fingerprint is fc:4d:22:b7:b2:79:9f:da:21:1a:57:c5:59:3d:27:63.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'k8snode2,192.168.43.176' (ECDSA) to the list of known hosts.
hosts 100% 230 0.2KB/s 00:00
[root@k8smaster ~]# systemctl disable docker.service
k8snode1:
[root@localhost ~]# hostname k8snode1
[root@localhost ~]# bash
[root@k8snode1 ~]# systemctl disable docker.service
k8snode2:
[root@localhost ~]# hostname k8snode2
[root@localhost ~]# bash
[root@k8snode2 ~]# systemctl disable docker.service
k8smaster:
[root@k8smaster ~]# cd /etc/yum.repos.d/
[root@k8smaster yum.repos.d]# ls
Ali-docker.repo CentOS-Debuginfo.repo CentOS-Sources.repo
CentOS-Base.repo CentOS-fasttrack.repo CentOS-Vault.repo
CentOS-CR.repo CentOS-Media.repo
[root@k8smaster yum.repos.d]# vim Ali-k8s.repo
[aliyun.k8s]
name=aliyun.k8s
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
[root@k8smaster yum.repos.d]# scp /etc/yum.repos.d/Ali-k8s.repo root@k8snode1:/etc/yum.repos.d/
Ali-k8s.repo 100% 129 0.1KB/s 00:00
[root@k8smaster yum.repos.d]# scp /etc/yum.repos.d/Ali-k8s.repo root@k8snode2:/etc/yum.repos.d/
Ali-k8s.repo 100% 129 0.1KB/s 00:00
[root@k8smaster yum.repos.d]# yum -y install kubelet kubeadm kubectl --disableexcludes=kubernetes
[root@k8smaster yum.repos.d]# uname -r
3.10.0-514.el7.x86_64
[root@k8smaster yum.repos.d]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@k8smaster yum.repos.d]# yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
[root@k8smaster yum.repos.d]# yum --enablerepo="elrepo-kernel" -y install kernel-ml
k8snode1:
[root@k8snode1 ~]# yum -y install kubelet kubeadm kubectl --disableexcludes=kubernetes
[root@k8snode1 ~]# uname -r
3.10.0-514.el7.x86_64
[root@k8snode1 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@k8snode1 ~]# yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
[root@k8snode1 ~]# yum --enablerepo="elrepo-kernel" -y install kernel-ml
k8snode2:
[root@k8snode2 ~]# yum -y install kubelet kubeadm kubectl --disableexcludes=kubernetes
[root@k8snode2 ~]# uname -r
3.10.0-514.el7.x86_64
[root@k8snode2 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@k8snode2 ~]# yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
[root@k8snode2 ~]# yum --enablerepo="elrepo-kernel" -y install kernel-ml
k8smaster:
[root@k8smaster ~]# grub2-set-default 0
[root@k8smaster ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.3.7-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.3.7-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-514.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-514.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-9b70c754b94d41b1835e5ce4329a0aa9
Found initrd image: /boot/initramfs-0-rescue-9b70c754b94d41b1835e5ce4329a0aa9.img
done
k8snode1:
[root@k8snode1 ~]# grub2-set-default 0
[root@k8snode1 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.3.7-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.3.7-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-514.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-514.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-9b70c754b94d41b1835e5ce4329a0aa9
Found initrd image: /boot/initramfs-0-rescue-9b70c754b94d41b1835e5ce4329a0aa9.img
done
k8snode2:
[root@k8snode2 ~]# grub2-set-default 0
[root@k8snode2 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.3.7-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.3.7-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-514.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-514.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-9b70c754b94d41b1835e5ce4329a0aa9
Found initrd image: /boot/initramfs-0-rescue-9b70c754b94d41b1835e5ce4329a0aa9.img
done
k8smaster, k8snode1, k8snode2:
reboot
k8smaster, k8snode1, k8snode2:
uname -r
5.3.7-1.el7.elrepo.x86_64
Requirements:
- The MAC address of each host must be unique.
- The product UUID of each host must be unique.
- The swap partition must be disabled.
- The required firewall ports must be reachable.
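The two uniqueness requirements can be checked mechanically: collect the value from every node (the per-node commands are shown below) and look for duplicates. A minimal sketch using the three MAC addresses gathered in this walkthrough:

```shell
# One value per line; `uniq -d` prints only duplicated entries,
# so empty output means every address is unique.
printf '%s\n' 00:0c:29:77:db:15 00:0c:29:ce:43:5d 00:0c:29:05:69:2d \
  | sort | uniq -d
# (no output: all unique)
```

The same pipeline works for the UUIDs; any line it prints identifies a value shared by two or more hosts.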
Check the MAC addresses:
[root@k8smaster ~]# cat /sys/class/net/ens33/address
00:0c:29:77:db:15
[root@k8snode1 ~]# cat /sys/class/net/ens33/address
00:0c:29:ce:43:5d
[root@k8snode2 ~]# cat /sys/class/net/ens33/address
00:0c:29:05:69:2d
Check the UUIDs:
[root@k8smaster ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 | grep UUID
UUID="a2238b75-8325-4d64-b2a6-5dc4482269d2"
[root@k8snode1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 | grep UUID
UUID="2375d46b-4058-4976-9152-1732b81cd838"
[root@k8snode2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 | grep UUID
UUID="fec7ba34-c85a-4f92-9072-f605f0085f69"
Disable the swap partition:
[root@k8smaster ~]# swapoff -a
[root@k8smaster ~]# cp /etc/fstab /etc/fstab_bak
[root@k8smaster ~]# vim /etc/fstab
# /dev/mapper/cl-swap swap swap defaults 0 0 # comment out this line.
[root@k8smaster ~]# vim /etc/sysctl.conf
Add:
vm.swappiness = 0
[root@k8smaster ~]# sysctl -p
vm.swappiness = 0
[root@k8snode1 ~]# swapoff -a
[root@k8snode1 ~]# cp /etc/fstab /etc/fstab_bak
[root@k8snode1 ~]# vim /etc/fstab
# /dev/mapper/cl-swap swap swap defaults 0 0 # comment out this line.
[root@k8snode1 ~]# vim /etc/sysctl.conf
Add:
vm.swappiness = 0
[root@k8snode1 ~]# sysctl -p
vm.swappiness = 0
[root@k8snode2 ~]# swapoff -a
[root@k8snode2 ~]# cp /etc/fstab /etc/fstab_bak
[root@k8snode2 ~]# vim /etc/fstab
# /dev/mapper/cl-swap swap swap defaults 0 0 # comment out this line.
[root@k8snode2 ~]# vim /etc/sysctl.conf
Add:
vm.swappiness = 0
[root@k8snode2 ~]# sysctl -p
vm.swappiness = 0
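The vim edit of /etc/fstab repeats on every node; the same change can be scripted with sed. A sketch, demonstrated on a scratch copy so nothing on the build host is touched (on the real nodes, point it at /etc/fstab instead):

```shell
# Demo on a temporary file mimicking the fstab above:
fstab=$(mktemp)
printf '%s\n' \
  '/dev/mapper/cl-root /    xfs  defaults 0 0' \
  '/dev/mapper/cl-swap swap swap defaults 0 0' > "$fstab"
# Comment out any non-comment line whose filesystem type is swap
# (`&` in the replacement stands for the whole matched line):
sed -i 's/^[^#].*[[:space:]]swap[[:space:]].*/# &/' "$fstab"
grep swap "$fstab"   # -> # /dev/mapper/cl-swap swap swap defaults 0 0
```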
Firewall (disabled outright here, instead of opening the individual ports):
[root@k8smaster ~]# systemctl disable firewalld.service && systemctl stop firewalld.service
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k8snode1 ~]# systemctl disable firewalld.service && systemctl stop firewalld.service
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k8snode2 ~]# systemctl disable firewalld.service && systemctl stop firewalld.service
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k8smaster ~]# vim /etc/sysctl.conf
Add:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@k8smaster ~]# sysctl -p
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@k8snode1 ~]# vim /etc/sysctl.conf
Add:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@k8snode1 ~]# sysctl -p
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@k8snode2 ~]# vim /etc/sysctl.conf
Add:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@k8snode2 ~]# sysctl -p
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
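If `sysctl -p` complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is not loaded; these keys only appear once it is. A config fragment to load it now and on every boot (the k8s.conf filename is an assumption, following the modules-load.d convention):

```shell
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf   # load on every boot
```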
Enable Docker to start at boot:
[root@k8smaster ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8snode1 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8snode2 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Start Docker and disable SELinux (temporarily):
[root@k8smaster ~]# systemctl start docker.service
[root@k8smaster ~]# setenforce 0
[root@k8snode1 ~]# systemctl start docker.service
[root@k8snode1 ~]# setenforce 0
[root@k8snode2 ~]# systemctl start docker.service
[root@k8snode2 ~]# setenforce 0
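Note that `setenforce 0` only lasts until the next reboot. For SELinux to stay permissive, /etc/selinux/config must be edited as well; a sketch, demonstrated on a scratch copy (edit the real file on each node):

```shell
# Demo on a temporary file standing in for /etc/selinux/config:
cfg=$(mktemp)
echo 'SELINUX=enforcing' > "$cfg"
# Switch the persistent mode to permissive:
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$cfg"
cat "$cfg"   # -> SELINUX=permissive
```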
Enable and start kubelet:
[root@k8smaster ~]# systemctl enable kubelet.service && systemctl start kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8snode1 ~]# systemctl enable kubelet.service && systemctl start kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8snode2 ~]# systemctl enable kubelet.service && systemctl start kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
List the component images required by this kubeadm version:
[root@k8smaster ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:15:39Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
[root@k8smaster ~]# kubeadm config images list --kubernetes-version=v1.16.2
k8s.gcr.io/kube-apiserver:v1.16.2
k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
Search Docker Hub for the mirrored images:
[root@k8smaster ~]# docker search mirrorgooglecontainers
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
mirrorgooglecontainers/kube-apiserver-amd64 30
mirrorgooglecontainers/kubernetes-dashboard-amd64 20
mirrorgooglecontainers/pause-amd64 17
mirrorgooglecontainers/metrics-server-amd64 14
mirrorgooglecontainers/kube-proxy 14
mirrorgooglecontainers/kube-apiserver 14
mirrorgooglecontainers/kube-scheduler 11
mirrorgooglecontainers/kube-controller-manager 10
mirrorgooglecontainers/kube-proxy-amd64 10
mirrorgooglecontainers/kube-controller-manager-amd64 9
mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64 7
mirrorgooglecontainers/k8s-dns-kube-dns-amd64 6
mirrorgooglecontainers/kube-scheduler-amd64 6
mirrorgooglecontainers/etcd-amd64 5
mirrorgooglecontainers/k8s-dns-sidecar-amd64 5
mirrorgooglecontainers/etcd 5
mirrorgooglecontainers/kube-apiserver-arm64 3
mirrorgooglecontainers/pause 3
mirrorgooglecontainers/kube-proxy-arm 3
mirrorgooglecontainers/kube-controller-manager-arm 3
mirrorgooglecontainers/heapster-amd64 2
mirrorgooglecontainers/kube-scheduler-arm 1
mirrorgooglecontainers/hyperkube 0
mirrorgooglecontainers/busybox 0
mirrorgooglecontainers/kube-scheduler-arm64 0
Pull the component images (k8smaster, k8snode1, k8snode2):
docker pull mirrorgooglecontainers/kube-apiserver:v1.16.2 && docker pull mirrorgooglecontainers/kube-controller-manager:v1.16.2 && docker pull mirrorgooglecontainers/kube-scheduler:v1.16.2 && docker pull mirrorgooglecontainers/kube-proxy:v1.16.2 && docker pull mirrorgooglecontainers/pause:3.1 && docker pull mirrorgooglecontainers/etcd:3.3.15-0 && docker pull coredns/coredns:1.6.2
Fallback: pull the older v1.15.3 components (k8smaster, k8snode1, k8snode2) if the pulls above fail:
docker pull mirrorgooglecontainers/kube-controller-manager:v1.15.3 && docker pull mirrorgooglecontainers/kube-scheduler:v1.15.3 && docker pull mirrorgooglecontainers/kube-proxy:v1.15.3 && docker pull mirrorgooglecontainers/pause:3.1 && docker pull mirrorgooglecontainers/etcd:3.3.15-0 && docker pull coredns/coredns:1.6.2 && docker pull 243662875/kube-apiserver:v1.15.3
Retag the images (k8smaster, k8snode1, k8snode2; note that retagging v1.15.3 binaries as v1.16.2 below is a lab workaround, not a real upgrade):
[root@k8smaster ~]# docker images | grep mirrorgooglecontainers | awk '{print "docker tag ",$1":"$2,$1":"$2}' | sed -e 's#mirrorgooglecontainers#k8s.gcr.io#2' | sh -x
+ docker tag mirrorgooglecontainers/etcd:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0
+ docker tag mirrorgooglecontainers/kube-proxy:v1.15.3 k8s.gcr.io/kube-proxy:v1.15.3
+ docker tag mirrorgooglecontainers/kube-scheduler:v1.15.3 k8s.gcr.io/kube-scheduler:v1.15.3
+ docker tag mirrorgooglecontainers/kube-controller-manager:v1.15.3 k8s.gcr.io/kube-controller-manager:v1.15.3
+ docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
[root@k8smaster ~]# docker images | grep k8s.gcr.io
k8s.gcr.io/etcd 3.3.15-0 b2756210eeab 7 weeks ago 247MB
k8s.gcr.io/kube-proxy v1.15.3 232b5c793146 2 months ago 82.4MB
k8s.gcr.io/kube-controller-manager v1.15.3 e77c31de5547 2 months ago 159MB
[root@k8smaster ~]# docker tag k8s.gcr.io/kube-proxy:v1.15.3 k8s.gcr.io/kube-proxy:v1.16.2
[root@k8smaster ~]# docker tag k8s.gcr.io/kube-controller-manager:v1.15.3 k8s.gcr.io/kube-controller-manager:v1.16.2
[root@k8smaster ~]# docker tag k8s.gcr.io/kube-scheduler:v1.15.3 k8s.gcr.io/kube-scheduler:v1.16.2
[root@k8smaster ~]# docker tag 243662875/kube-apiserver:v1.15.3 k8s.gcr.io/kube-apiserver:v1.16.2
[root@k8smaster ~]# docker tag coredns/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
[root@k8smaster ~]# docker images | grep mirrorgooglecontainers | awk '{print "docker rmi ",$1":"$2}' | sh -x
+ docker rmi mirrorgooglecontainers/etcd:3.3.15-0
Untagged: mirrorgooglecontainers/etcd:3.3.15-0
Untagged: mirrorgooglecontainers/etcd@sha256:343b00e99aa52b8c715c3d460e0adea387fcb487549d949ecbf53347ab0d7191
+ docker rmi mirrorgooglecontainers/kube-proxy:v1.15.3
Untagged: mirrorgooglecontainers/kube-proxy:v1.15.3
Untagged: mirrorgooglecontainers/kube-proxy@sha256:a2568776eb7c104c2ad903eec6027fabd22aa8af1a1179d263760051b5ec5d68
+ docker rmi mirrorgooglecontainers/kube-scheduler:v1.15.3
Untagged: mirrorgooglecontainers/kube-scheduler:v1.15.3
Untagged: mirrorgooglecontainers/kube-scheduler@sha256:55b5e145b2ba29a8f5ca6d91248e6f35fe5dd24af70947edc7772377692e73f6
+ docker rmi mirrorgooglecontainers/kube-controller-manager:v1.15.3
Untagged: mirrorgooglecontainers/kube-controller-manager:v1.15.3
Untagged: mirrorgooglecontainers/kube-controller-manager@sha256:9d58961dbae0d4f38517e7d35a9d00278808e847c63262249611930aaf21bf18
+ docker rmi mirrorgooglecontainers/pause:3.1
Untagged: mirrorgooglecontainers/pause:3.1
Untagged: mirrorgooglecontainers/pause@sha256:2b59283ddaa8a7256ae724ab16971fdd48d75ae8203c677cd55400efb69539a1
[root@k8smaster ~]# docker images | grep v1.15.3 | awk '{print "docker rmi ",$1":"$2}' | sh -x
+ docker rmi 243662875/kube-apiserver:v1.15.3
Untagged: 243662875/kube-apiserver:v1.15.3
Untagged: 243662875/kube-apiserver@sha256:10e9069e75e9a8c72dee85663cdc11b3e13580ca2636bfbe496177118bf1ab5a
+ docker rmi k8s.gcr.io/kube-proxy:v1.15.3
Untagged: k8s.gcr.io/kube-proxy:v1.15.3
+ docker rmi k8s.gcr.io/kube-controller-manager:v1.15.3
Untagged: k8s.gcr.io/kube-controller-manager:v1.15.3
+ docker rmi k8s.gcr.io/kube-scheduler:v1.15.3
Untagged: k8s.gcr.io/kube-scheduler:v1.15.3
[root@k8smaster ~]# docker images | grep k8s.gcr.io
k8s.gcr.io/kube-apiserver v1.16.2 e9bb983939ca 4 weeks ago 207MB
k8s.gcr.io/etcd 3.3.15-0 b2756210eeab 7 weeks ago 247MB
k8s.gcr.io/kube-proxy v1.16.2 232b5c793146 2 months ago 82.4MB
k8s.gcr.io/kube-scheduler v1.16.2 703f9c69a5d5 2 months ago 81.1MB
k8s.gcr.io/kube-controller-manager v1.16.2 e77c31de5547 2 months ago 159MB
k8s.gcr.io/coredns 1.6.2 bf261d157914 2 months ago 44.1MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 22 months ago 742kB
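The retag one-liner above is worth unpacking: awk builds a `docker tag OLD NEW` command from each `docker images` row, and the `#2` flag on sed rewrites only the second occurrence of the repository prefix, leaving the source name intact. Its mechanics can be dry-run on a canned row without Docker installed:

```shell
# A fake `docker images` row (repo tag id created size):
echo 'mirrorgooglecontainers/pause 3.1 da86e6ba6ca1 22months 742kB' \
  | awk '{print "docker tag ",$1":"$2,$1":"$2}' \
  | sed -e 's#mirrorgooglecontainers#k8s.gcr.io#2'
# -> docker tag  mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
```

Piping that output to `sh -x`, as the transcript does, then executes each generated command.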
Configure the flannel network (k8smaster, k8snode1, k8snode2):
[root@k8smaster ~]# docker pull quay.io/coreos/flannel:v0.11.0-amd64
[root@k8smaster ~]# mkdir -p /etc/cni/net.d
[root@k8smaster ~]# vim /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate":{"isDefaultGateway":true}}
[root@k8smaster ~]# mkdir /usr/share/oci-umount/oci-umount.d -p
[root@k8smaster ~]# mkdir /run/flannel
[root@k8smaster ~]# vim /run/flannel/subnet.env
FLANNEL_NETWORK=172.100.0.0/16
FLANNEL_SUBNET=172.100.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
[root@k8smaster ~]# systemctl daemon-reload
[root@k8smaster ~]# systemctl restart kubelet.service
[root@k8smaster ~]# systemctl restart docker.service
k8smaster:
[root@k8smaster ~]# kubeadm init --apiserver-advertise-address 192.168.43.45 --pod-network-cidr=10.244.0.0/16
... ...
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
# run the commands below as a regular (non-root) user to start using the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster. # i.e. install a pod-network add-on such as flannel (done below)
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.43.45:6443 --token 0fis6i.w2d9ztnl3rr2d4r9 \
--discovery-token-ca-cert-hash sha256:f6c54c79d350a008365e6059dd0da8e3171120ac4126397e3d61923d7a066351
[root@k8smaster ~]# mkdir -p $HOME/.kube
[root@k8smaster ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8smaster ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8smaster ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8smaster ~]# kubectl apply -f ./kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
k8snode1:
kubeadm join 192.168.43.45:6443 --token 0fis6i.w2d9ztnl3rr2d4r9 \
--discovery-token-ca-cert-hash sha256:f6c54c79d350a008365e6059dd0da8e3171120ac4126397e3d61923d7a066351
k8snode2:
kubeadm join 192.168.43.45:6443 --token 0fis6i.w2d9ztnl3rr2d4r9 \
--discovery-token-ca-cert-hash sha256:f6c54c79d350a008365e6059dd0da8e3171120ac4126397e3d61923d7a066351
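One practical note: the bootstrap token embedded in the join command expires (24 hours by default), so a node joined later needs a fresh command. It can be regenerated on the master (requires a live control plane, so it is not reproducible here):

```shell
# Print a complete, currently valid join command (a new token is created):
kubeadm token create --print-join-command
```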
k8smaster:
[root@k8smaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 42m v1.16.2
k8snode1 NotReady <none> 111s v1.16.2
k8snode2 NotReady <none> 23s v1.16.2
Fix the Docker cgroup-driver warning (k8smaster, k8snode1, k8snode2):
[root@k8smaster ~]# vim /etc/docker/daemon.json
Add:
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
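daemon.json is read only at daemon start, so the change above takes effect only after restarting Docker (and kubelet, whose cgroup driver must match). A config fragment to run on each node:

```shell
systemctl daemon-reload
systemctl restart docker.service
systemctl restart kubelet.service
```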
Enable kubectl tab completion (k8smaster, k8snode1, k8snode2):
[root@k8smaster ~]# source /usr/share/bash-completion/bash_completion
[root@k8smaster ~]# source <(kubectl completion bash)
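Both `source` commands affect only the current shell. To keep completion across logins, the same line can be appended to the shell profile (a config fragment; assumes bash):

```shell
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```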