Basic Environment Preparation
Next we walk through the installation process.
1. Install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Add the package repository:
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Update the cache and install Docker CE:
sudo yum makecache fast
sudo yum -y install docker-ce
Start the Docker service:
sudo service docker start
After installation, run docker version to check that Docker was installed successfully:
[root@k8s-master ~]# docker version
Client: Docker Engine - Community
Version: 19.03.9
API version: 1.40
Go version: go1.13.10
Git commit: 9d988398e7
Built: Fri May 15 00:25:27 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.9
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 9d988398e7
Built: Fri May 15 00:24:05 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
Docker must be installed on every node. In my environment the remaining nodes will later be created by cloning this VM, so the installation steps are not repeated for them.
Configure a registry mirror so image pulls go through a domestic accelerator (the XXX subdomain is a per-account placeholder):
[root@ken ~]# vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://XXX.mirror.aliyuncs.com"]
}
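Since a syntax error in daemon.json prevents dockerd from starting, it can be worth validating the file before restarting Docker. A minimal sketch, checking a copy under /tmp (the mirror URL is the same per-account placeholder as above):

```shell
# Validate daemon.json before restarting Docker -- a malformed file
# stops dockerd from starting. The mirror URL is a placeholder.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://XXX.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json OK"
```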
Restart Docker to pick up the mirror configuration, and enable it to start on boot on every node:
[root@ken ~]# systemctl restart docker
[root@ken ~]# systemctl enable docker
2. Install the Kubernetes packages
[root@k8s-master yum.repos.d]# vi kubernetes.repo
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
yum install kubelet kubeadm kubectl -y
Enable kubelet so it starts on boot:
systemctl enable kubelet
Edit the hosts file
You can add raw.githubusercontent.com here in advance; it is needed later to fetch the flannel manifest. Probably because of the firewall, DNS resolution for this domain returns nothing, so bind an IP address in the hosts file to reach it.
[root@k8s-master etc]# cat hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.237.201 k8s-master
192.168.237.202 k8s-node-1
192.168.237.203 k8s-node-2
151.101.76.133 raw.githubusercontent.com
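If you script this setup across nodes, appending hosts entries idempotently avoids duplicate lines on re-runs. A sketch against a temp copy (on a real node the target would be /etc/hosts):

```shell
# Append a hosts entry only if the hostname is not already present,
# so re-running the setup script does not duplicate lines.
HOSTS=/tmp/hosts.demo            # stand-in for /etc/hosts
printf '127.0.0.1 localhost\n' > "$HOSTS"
add_host() {
  grep -qw "$2" "$HOSTS" || echo "$1 $2" >> "$HOSTS"
}
add_host 192.168.237.201 k8s-master
add_host 151.101.76.133 raw.githubusercontent.com
add_host 192.168.237.201 k8s-master   # no-op on the second call
grep -c 'k8s-master' "$HOSTS"         # prints 1
```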
Allow bridged traffic to be processed by iptables:
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
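Note that writing to /proc only lasts until reboot. One way to persist the setting (an addition of mine, not part of the original steps) is a drop-in file under /etc/sysctl.d, sketched here against /tmp:

```shell
# echo into /proc does not survive a reboot; a sysctl drop-in does.
# Written to /tmp here; on a real node place the file in /etc/sysctl.d/
# and run `sysctl --system` to load it.
cat > /tmp/99-k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
EOF
grep -c '^net.bridge' /tmp/99-k8s.conf   # prints 1
```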
Disable swap
If swap is left enabled, kubeadm will report an error during initialization.
swapoff -a && sysctl -w vm.swappiness=0
free -m
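swapoff -a likewise only disables swap for the current boot. To keep swap off after a reboot, the swap entry in /etc/fstab can be commented out; a sketch on a copied file (the device path is a typical CentOS default, assumed here):

```shell
# Comment out the swap line in fstab so swap stays off after reboot.
# Demonstrated on a copy; on a real node edit /etc/fstab itself.
FSTAB=/tmp/fstab.demo
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
sed -i '/ swap /s/^/#/' "$FSTAB"   # prefix swap lines with '#'
grep '^#' "$FSTAB"                 # the swap entry is now commented
```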
Disable the firewall
[root@k8s-master etc]# systemctl stop firewalld
[root@k8s-master etc]# systemctl disable firewalld
Disable SELinux
[root@k8s-master etc]# setenforce 0
[root@k8s-master etc]# getenforce
Permissive
Editing /etc/selinux/config keeps SELinux disabled across reboots.
[root@k8s-master etc]# cd /etc/selinux/
[root@k8s-master selinux]# ls
config final semanage.conf targeted tmp
[root@k8s-master selinux]# cat config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three two values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Change the value from SELINUX=enforcing to SELINUX=permissive or SELINUX=disabled.
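The edit can also be scripted, e.g. with sed. Shown on a stand-in file here; on a real node the target would be /etc/selinux/config:

```shell
# Flip SELINUX=enforcing to permissive non-interactively.
CONF=/tmp/selinux.config
echo 'SELINUX=enforcing' > "$CONF"   # stand-in for /etc/selinux/config
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$CONF"
cat "$CONF"   # SELINUX=permissive
```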
Initialize the master
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.2 --apiserver-advertise-address 192.168.237.201 --pod-network-cidr=10.244.0.0/16
--apiserver-advertise-address specifies which interface on the master communicates with the other nodes in the cluster. If the master has multiple interfaces, it is best to specify one explicitly; otherwise kubeadm automatically picks the interface that holds the default gateway.
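To see which interface kubeadm would pick, check which one holds the default route (`ip route show default`). Extracting the `dev` field from that line looks roughly like this; the sample route line and the interface name ens33 are assumptions:

```shell
# Extract the interface name after "dev" from a default-route line --
# roughly what kubeadm's auto-selection keys on. The sample line is
# hypothetical; on a real host use: route_line=$(ip route show default)
route_line='default via 192.168.237.2 dev ens33 proto static metric 100'
iface=$(echo "$route_line" | awk '{for (i=1; i<NF; i++) if ($i=="dev") print $(i+1)}')
echo "$iface"   # ens33
```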
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.237.201:6443 --token dxdsdg.n9w1raqsv1pfu025 \
--discovery-token-ca-cert-hash sha256:4c60937f7add85d164e23ca927bcb3b35cd6820331e02bf9b310d3f727a8cde6
Note that the token and CA cert hash above are important; the other nodes need them to join the cluster.
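If the join command is lost later, the token can be listed with `kubeadm token list` (or a fresh one printed with `kubeadm token create --print-join-command`), and the CA cert hash recomputed from /etc/kubernetes/pki/ca.crt with openssl. A sketch of the hash computation, run here against a throwaway self-signed cert since this machine may not be the master:

```shell
# Recompute the --discovery-token-ca-cert-hash value from a CA cert.
# A throwaway self-signed cert stands in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.crt -days 1 -subj "/CN=kubernetes" 2>/dev/null
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"   # 64 hex chars, the format kubeadm join expects
```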
For easier day-to-day use, set up the kubeconfig and enable tab completion for the kubectl command:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
Now try a few kubectl commands:
[root@k8s-master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
3. Install the pod network: flannel
As mentioned earlier, raw.githubusercontent.com cannot be resolved by DNS; with the IP address pinned in the hosts file, the URL becomes reachable.
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
With that, the master node has finished initializing:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 14m v1.18.3
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-lvbb7 1/1 Running 0 13m
coredns-7ff77c879f-vpn7w 1/1 Running 0 13m
etcd-k8s-master 1/1 Running 0 14m
kube-apiserver-k8s-master 1/1 Running 0 14m
kube-controller-manager-k8s-master 1/1 Running 0 14m
kube-flannel-ds-amd64-4wwsh 1/1 Running 0 4m49s
kube-proxy-ddpx6 1/1 Running 0 13m
kube-scheduler-k8s-master 1/1 Running 0 14m
Add the other nodes to the cluster
1. Apply the same kernel and swap settings on each node first:
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
Disable swap:
swapoff -a && sysctl -w vm.swappiness=0
free -m
2. Join the nodes
Then, using the output from the master's kubeadm init, run the command below on each node to join the cluster.
The --token value comes from the earlier kubeadm init output; if you did not record it, you can retrieve it with kubeadm token list.
kubeadm join 192.168.237.201:6443 --token dxdsdg.n9w1raqsv1pfu025 --discovery-token-ca-cert-hash sha256:4c60937f7add85d164e23ca927bcb3b35cd6820331e02bf9b310d3f727a8cde6
At this point, running kubectl get nodes on the master shows the new node is not yet Ready:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 33m v1.18.3
k8s-node-1 NotReady <none> 86s v1.18.3
[root@k8s-master ~]#
It simply takes a while before node1 transitions to the Ready state.
Other tutorials online say a node needs to pull four images: flannel, coredns, kube-proxy, and pause.
In practice only three show up:
[root@k8s-node-1 yum.repos.d]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.18.2 0d40868643c6 5 weeks ago 117MB
quay.io/coreos/flannel v0.12.0-amd64 4e9f801d2217 2 months ago 52.8MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 3 months ago
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 53m v1.18.3
k8s-node-1 Ready <none> 6m56s v1.18.3
Repeat the same steps on node2 to join it to the cluster.
3. Some tips
To wipe a failed init or join and start over on a node, run:
kubeadm reset
Sometimes a node fails to join the cluster with an error about /proc/sys/net/ipv4/ip_forward; fix it by running:
echo 1 > /proc/sys/net/ipv4/ip_forward
If a previously joined node never becomes Ready, delete it from the cluster:
kubectl delete node host1
Then re-run the join command on that node:
kubeadm join 192.168.237.201:6443 ...
4. Final result after adding the nodes
[root@k8s-node-2 ~]# kubeadm join 192.168.237.201:6443 --token dxdsdg.n9w1raqsv1pfu025 --discovery-token-ca-cert-hash sha256:4c60937f7add85d164e23ca927bcb3b35cd6820331e02bf9b310d3f727a8cde6
W0522 22:56:06.696722 4820 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h25m v1.18.3
k8s-node-1 Ready <none> 132m v1.18.3
k8s-node-2 Ready <none> 137m v1.18.3
The initial Kubernetes cluster setup is complete.