k8s Deployment (kubeadm), Part 1

1. Overall layout: this environment is one master with multiple workers; Docker, kubeadm, kubectl, and kubelet are installed on every server.
master: 192.168.17.39   (kube-apiserver, kube-scheduler, kube-controller-manager)
node:   192.168.17.41
node:   192.168.17.42

2. Environment initialization: firewall, hostname resolution, swap partition, SELinux, time sync (run on all three machines)

[root@39 ~]# systemctl stop firewalld               #stop the firewall
[root@39 ~]# systemctl disable firewalld
[root@39 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config       #disable selinux
[root@39 ~]# setenforce 0
[root@39 ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab         #disable swap
[root@39 ~]# cat /etc/sysctl.d/k8s.conf        #pass bridged IPv4 traffic to the iptables chains
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
[root@39 ~]# modprobe br_netfilter            #load the module
[root@39 ~]# lsmod | grep br_netfilter
[root@39 ~]# sysctl --system                  #apply the settings
[root@39 ~]# yum install ntpdate -y
[root@39 ~]# ntpdate ntp1.aliyun.com            #sync the system time
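The fstab sed above can look cryptic: it comments out any line containing " swap " by prefixing it with `#`. A quick sketch of its effect on a sample line (the device path below is hypothetical):

```shell
# Run the same sed on one sample fstab line; the swap line comes back
# commented out, so the swap partition is not mounted after the next boot.
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' \
  | sed '/ swap / s/^\(.*\)$/#\1/g'
```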

3. Set the hostnames and add hosts entries so the machines can resolve each other by name; in production an internal DNS is recommended instead. Run on all three machines.

Set the hostname and hosts entries on 192.168.17.39:
[root@39 ~]# hostnamectl set-hostname k8s-master
[root@39 ~]# cat >> /etc/hosts << EOF
> 192.168.17.39 k8s-master
> 192.168.17.41 k8s-node1
> 192.168.17.42 k8s-node2
> EOF
Set the hostname and hosts entries on 192.168.17.41:
[root@39 ~]# hostnamectl set-hostname k8s-node1
[root@39 ~]# cat >> /etc/hosts << EOF
> 192.168.17.39 k8s-master
> 192.168.17.41 k8s-node1
> 192.168.17.42 k8s-node2
> EOF
Set the hostname and hosts entries on 192.168.17.42:
[root@39 ~]# hostnamectl set-hostname k8s-node2
[root@39 ~]# cat >> /etc/hosts << EOF
> 192.168.17.39 k8s-master
> 192.168.17.41 k8s-node1
> 192.168.17.42 k8s-node2
> EOF
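Since the same three entries are appended on every machine, they can also be generated once and reused (the IPs and hostnames are this lab's values):

```shell
# Print the hosts entries shared by all three machines; on each machine the
# output would be appended with `>> /etc/hosts` instead of typing the heredoc
# three times.
for entry in \
  "192.168.17.39 k8s-master" \
  "192.168.17.41 k8s-node1" \
  "192.168.17.42 k8s-node2"; do
  echo "$entry"
done
```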

4. Configure ipvs. kube-proxy supports two proxy modes, one based on iptables and one based on ipvs; ipvs performs better than iptables, but using it requires loading the ipvs kernel modules manually. Run on all three machines.

[root@39 ~]# yum -y install ipset ipvsadm
[root@39 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF
#Make the script executable, run it, and check that the modules are loaded:
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules 
[root@k8s-master ~]# sh /etc/sysconfig/modules/ipvs.modules 
[root@k8s-master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
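Rather than eyeballing the grep output, each required module can be checked individually. A small sketch (`check_modules` is a hypothetical helper; the sample input below is fabricated, on a real host call it as `check_modules "$(lsmod)"`):

```shell
# Report, for each module the ipvs.modules script loads, whether it appears
# in the given lsmod output.
check_modules() {
  for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    if printf '%s\n' "$1" | grep -q "^${mod} "; then
      echo "${mod}: loaded"
    else
      echo "${mod}: missing"
    fi
  done
}
# Fabricated two-module lsmod output for demonstration:
check_modules "$(printf 'ip_vs 145497 2 ip_vs_rr\nip_vs_rr 12600 1\n')"
```

Note that on kernels 4.19 and newer the conntrack module is named `nf_conntrack` rather than `nf_conntrack_ipv4`.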

Note: reboot all three machines at this point: reboot

2. Start the installation: run on all nodes (all three machines)

1. Install docker-ce

[root@k8s-master ~]# curl https://download.docker.com/linux/centos/docker-ce.repo -o  /etc/yum.repos.d/docker.repo
[root@k8s-master ~]#  yum install -y docker-ce
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl daemon-reload

2. Install kubeadm, kubelet, and kubectl

Note: version 1.20.5 is used here, because with the latest v1.21.0 the Aliyun mirror did not yet have the coredns:1.8.0 image, so the image download failed.

Add the Aliyun yum repo: the upstream kubernetes repo is hosted abroad, so the Aliyun mirror is used for the install instead.

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl. (Enable kubelet to start on boot, but do not start it manually yet.)

[root@k8s-master ~]# yum install -y kubelet-1.20.5 kubeadm-1.20.5 kubectl-1.20.5
[root@k8s-master ~]# systemctl enable kubelet

To make the cgroup driver used by Docker and the cgroup driver used by kubelet consistent, edit "/etc/sysconfig/kubelet":

vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
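The matching change on the Docker side is to set Docker's own cgroup driver to systemd. A hedged sketch of the daemon config (the `exec-opts` key is standard Docker daemon.json syntax); on each machine this JSON would be saved as /etc/docker/daemon.json, followed by `systemctl restart docker`:

```shell
# Print the daemon.json content; redirect it to /etc/docker/daemon.json and
# restart Docker for it to take effect.
cat <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```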

List the images k8s needs:       kubeadm config images list

[root@k8s-master ~]# kubeadm config images list
I0413 13:50:04.126129   27890 version.go:254] remote version is much newer: v1.21.0; falling back to: stable-1.20
k8s.gcr.io/kube-apiserver:v1.20.5
k8s.gcr.io/kube-controller-manager:v1.20.5
k8s.gcr.io/kube-scheduler:v1.20.5
k8s.gcr.io/kube-proxy:v1.20.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

1. Deploy the k8s master node: run only on 192.168.17.39, the master.

#Initialize the master node:
kubeadm init \
--kubernetes-version v1.20.5 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.17.39 \
--ignore-preflight-errors=Swap \
--image-repository registry.aliyuncs.com/google_containers 
#apiserver-advertise-address is set to the master's own address; by default the apiserver listens on port 6443 on all addresses
#if preflight fails on swap, add --ignore-preflight-errors=Swap (note the capital S in Swap)
#the default image registry k8s.gcr.io is not reachable from mainland China, so the Aliyun mirror registry is specified here
#the kubernetes version must match the version installed earlier
#pod-network-cidr is the network range pods use; it usually has to match the flannel or calico deployment, and 10.244.0.0/16 is the common default
#service-cidr is the range kubernetes allocates Service addresses from; the default is 10.96.0.0/12
----------------------------------------------------
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.17.39:6443 --token orc9od.fmr986btt3ubqvzb \
    --discovery-token-ca-cert-hash sha256:6f8506dc12245404bafc9e5c9a858a594a0ef5b51f241d91e95179d237a3a134 
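If the join command is lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA certificate: it is the sha256 of the CA's DER-encoded public key. A sketch of the derivation, demonstrated on a throwaway self-signed certificate (on the master the input would be /etc/kubernetes/pki/ca.crt, and the paths below are hypothetical):

```shell
# Generate a throwaway cert to stand in for the cluster CA.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
# Extract the public key, convert to DER, and take its sha256 digest --
# the resulting 64-char hex string is the discovery hash.
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```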

Following the init output, set up kubectl on the master node:

[root@k8s-master ~]#  mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Test: kubectl get cs (short for componentstatuses) and kubectl get nodes

[root@k8s-master ~]# kubectl  get  cs         #check component status
[root@k8s-master ~]# kubectl get nodes        #list the cluster nodes
NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   NotReady   control-plane,master   123m   v1.20.5

Dealing with token expiry:

A token is valid for 24 hours by default and cannot be used after it expires. When more nodes need to join later, generate a new token with: kubeadm token create

kubeadm token create --ttl 0 --print-join-command             creates a token that never expires

[root@k8s-master ~]# kubeadm token create
a59dx0.e6zzg6qoqke6y6er
[root@k8s-master ~]# kubeadm token create --ttl 0 --print-join-command

2. Deploy the k8s worker nodes (run on 192.168.17.41 and 192.168.17.42):

[root@k8s-node1 kubernetes]# kubeadm join 192.168.17.39:6443 --token orc9od.fmr986btt3ubqvzb     --discovery-token-ca-cert-hash sha256:6f8506dc12245404bafc9e5c9a858a594a0ef5b51f241d91e95179d237a3a134 
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

3. Check node status on the master: kubectl get nodes

[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   6h30m   v1.20.5
k8s-node1    NotReady   <none>                 43m     v1.20.5
k8s-node2    NotReady   <none>                 42m     v1.20.5

4. Install a CNI network plugin (k8s supports several, such as flannel, calico, and canal). As the output above shows, nodes stay NotReady until a network plugin is installed. Install it on the master only; nodes that join the cluster will get it automatically.
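A hedged sketch using flannel, which matches the 10.244.0.0/16 pod CIDR passed to kubeadm init above (the manifest URL is the upstream flannel repo's and may have moved since this was written):

```shell
# Apply the flannel manifest on the master, then watch the cluster converge.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods -n kube-system    # wait until the flannel and coredns pods are Running
kubectl get nodes                  # all three nodes should then report Ready
```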
