k8s Deployment (kubeadm Method), Part 1

1. Overall deployment: this build uses one master and multiple nodes; docker, kubeadm, kubectl, and kubelet are installed on every server.
master: 192.168.17.39      kube-apiserver  kube-scheduler  kube-controller-manager
node:   192.168.17.41
node:   192.168.17.42

2. Environment initialization: firewall, hostname resolution, swap partition, selinux, and time sync (run on all three machines)

[root@39 ~]# systemctl stop firewalld               #stop the firewall
[root@39 ~]# systemctl disable firewalld
[root@39 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config       #disable selinux
[root@39 ~]# setenforce 0
[root@39 ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab         #disable swap in fstab (takes effect after reboot)
[root@39 ~]# cat /etc/sysctl.d/k8s.conf        #pass bridged IPv4 traffic to the iptables chains
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
[root@39 ~]# modprobe br_netfilter            #load the module
[root@39 ~]# lsmod | grep br_netfilter
[root@39 ~]# sysctl --system                  #apply the kernel settings
[root@39 ~]# yum install ntpdate -y
[root@39 ~]# ntpdate ntp1.aliyun.com            #sync the system time
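A quick sanity check (not part of the original steps) that the kernel parameters are now in effect:

[root@39 ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1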

3. Set hostnames and add hosts entries so the machines can resolve each other by name; for production an internal DNS is recommended. Run on all three machines.

Set the hostname and resolution for 192.168.17.39:
[root@39 ~]# hostnamectl set-hostname k8s-master
[root@39 ~]# cat >> /etc/hosts << EOF
> 192.168.17.39 k8s-master
> 192.168.17.41 k8s-node1
> 192.168.17.42 k8s-node2
> EOF
Set the hostname and resolution for 192.168.17.41:
[root@39 ~]# hostnamectl set-hostname k8s-node1
[root@39 ~]# cat >> /etc/hosts << EOF
> 192.168.17.39 k8s-master
> 192.168.17.41 k8s-node1
> 192.168.17.42 k8s-node2
> EOF
Set the hostname and resolution for 192.168.17.42:
[root@39 ~]# hostnamectl set-hostname k8s-node2
[root@39 ~]# cat >> /etc/hosts << EOF
> 192.168.17.39 k8s-master
> 192.168.17.41 k8s-node1
> 192.168.17.42 k8s-node2
> EOF
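As a quick check (assuming the hosts entries above are in place on each machine), confirm the names resolve:

[root@k8s-master ~]# ping -c 1 k8s-node1
[root@k8s-master ~]# ping -c 1 k8s-node2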

4. Configure ipvs. kube-proxy supports two proxy modes, one based on iptables and one based on ipvs; ipvs performs better than iptables, but using it requires loading the ipvs kernel modules by hand. Run on all three machines. (A note on newer kernels and on actually enabling ipvs mode follows the module check below.)

[root@39 ~]# yum -y install ipset ipvsadm
[root@39 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF
#Make the script executable, run it, and check that the modules are loaded:
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules 
[root@k8s-master ~]# sh /etc/sysconfig/modules/ipvs.modules 
[root@k8s-master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
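Note: on kernel 4.19 and later, nf_conntrack_ipv4 was merged into nf_conntrack, so the last modprobe line becomes: modprobe -- nf_conntrack. Also, loading the modules alone does not switch kube-proxy to ipvs; once the cluster is up, the mode is set in the kube-proxy ConfigMap (a sketch, assuming kubeadm's default labels):

[root@k8s-master ~]# kubectl -n kube-system edit configmap kube-proxy          #set mode: "ipvs" (empty string means iptables)
[root@k8s-master ~]# kubectl -n kube-system delete pod -l k8s-app=kube-proxy   #recreate kube-proxy pods so the change takes effect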

Note: reboot all three machines at this point: reboot

2. Start the installation; install on all nodes (all three machines).

1. Install docker-ce

[root@k8s-master ~]# curl https://download.docker.com/linux/centos/docker-ce.repo -o  /etc/yum.repos.d/docker.repo
[root@k8s-master ~]#  yum install -y docker-ce
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl daemon-reload
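kubeadm expects Docker and the kubelet to use the same cgroup driver (see the kubelet step below). A minimal sketch of pointing Docker at systemd via the standard daemon.json exec-opts key; skip it if you prefer to leave both sides on cgroupfs:

[root@k8s-master ~]# cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s-master ~]# systemctl restart docker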

2. Install kubeadm, kubelet, and kubectl

Note: this walkthrough uses version 1.20.5, because with the latest v1.21.0 the Aliyun mirror did not yet carry the coredns:1.8.0 image, so the download failed.

Add the Aliyun yum repo; the upstream kubernetes repo is hosted abroad, so install from the Aliyun mirror instead.

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
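Optionally, verify the repo is usable and see which versions it offers (a quick check, not in the original notes):

yum list kubelet --showduplicates | sort -r | head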

Install kubeadm, kubelet, and kubectl (enable kubelet so it starts on boot, but do not start it manually yet; it will not run properly until kubeadm generates its config).

[root@k8s-master ~]# yum install -y kubelet-1.20.5 kubeadm-1.20.5 kubectl-1.20.5
[root@k8s-master ~]# systemctl enable kubelet

To keep the cgroup driver used by the kubelet consistent with the one used by Docker, edit the contents of the "/etc/sysconfig/kubelet" file:

vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
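To verify both sides now agree, check Docker's active driver (the expected output assumes the daemon.json change suggested earlier):

[root@k8s-master ~]# docker info | grep -i 'cgroup driver'
 Cgroup Driver: systemd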

View the images k8s requires: kubeadm config images list

[root@k8s-master ~]# kubeadm config images list
I0413 13:50:04.126129   27890 version.go:254] remote version is much newer: v1.21.0; falling back to: stable-1.20
k8s.gcr.io/kube-apiserver:v1.20.5
k8s.gcr.io/kube-controller-manager:v1.20.5
k8s.gcr.io/kube-scheduler:v1.20.5
k8s.gcr.io/kube-proxy:v1.20.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
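Optionally, the images can be pre-pulled on every node before init, which speeds up the init/join steps and surfaces registry problems early; a sketch using the same Aliyun mirror passed to kubeadm init below:

[root@k8s-master ~]# kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.20.5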

1. Deploy the k8s master node; run only on the master, 192.168.17.39.

#Initialize the master node:
kubeadm init \
--kubernetes-version v1.20.5 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.17.39 \
--ignore-preflight-errors=Swap \
--image-repository registry.aliyuncs.com/google_containers 
#--apiserver-advertise-address: set to the master node's address; by default the apiserver listens on port 6443 on all addresses
#if preflight fails on swap, add --ignore-preflight-errors=Swap (note the capital S in Swap)
#the default image registry k8s.gcr.io is unreachable from China, so point --image-repository at the Aliyun mirror
#the kubernetes version must match the version installed earlier
#--pod-network-cidr sets the pod network range; keep it consistent with the flannel/calico deployment to come (10.244.0.0/16 is the flannel default)
#--service-cidr sets the range Services are allocated from, managed by kubernetes; 10.96.0.0/12 is the default
----------------------------------------------------
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.17.39:6443 --token orc9od.fmr986btt3ubqvzb \
    --discovery-token-ca-cert-hash sha256:6f8506dc12245404bafc9e5c9a858a594a0ef5b51f241d91e95179d237a3a134 

Following the output's instructions, set up the kubectl tool on the master node:

[root@k8s-master ~]#  mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Test: kubectl get cs (short for componentstatus) and kubectl get nodes

[root@k8s-master ~]# kubectl get cs           #check component status
[root@k8s-master ~]# kubectl get nodes        #list the cluster's nodes
NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   NotReady   control-plane,master   123m   v1.20.5

Handling token expiry:

The default token is valid for 24 hours; once it expires it can no longer be used. When more nodes need to join later, generate a fresh token: kubeadm token create

kubeadm token create --ttl 0 --print-join-command             creates a never-expiring token and prints the full join command

[root@k8s-master ~]# kubeadm token create
a59dx0.e6zzg6qoqke6y6er
[root@k8s-master ~]# kubeadm token create --ttl 0 --print-join-command
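If only a bare token was created, the --discovery-token-ca-cert-hash value can be recomputed on the master; this is the standard snippet from the kubeadm documentation:

[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'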

2. Deploy the k8s worker nodes (run on 192.168.17.41 and 192.168.17.42):

[root@k8s-node1 kubernetes]# kubeadm join 192.168.17.39:6443 --token orc9od.fmr986btt3ubqvzb     --discovery-token-ca-cert-hash sha256:6f8506dc12245404bafc9e5c9a858a594a0ef5b51f241d91e95179d237a3a134 
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

3. Check node status from the master: kubectl get nodes

[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   6h30m   v1.20.5
k8s-node1    NotReady   <none>                 43m     v1.20.5
k8s-node2    NotReady   <none>                 42m     v1.20.5

4. Install a CNI network plugin (k8s supports several, such as flannel, calico, and canal). As the output above shows, nodes stay NotReady until a network plugin is installed. It only needs to be applied on the master node; nodes that join the k8s cluster afterwards pick it up automatically. A flannel sketch follows below.
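A minimal sketch of installing flannel, whose default pod CIDR matches the 10.244.0.0/16 passed to kubeadm init above; the manifest URL is the one commonly used around this k8s version and may have moved since:

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# kubectl get nodes          #nodes turn Ready once the flannel pods are running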
