Deploying a Kubernetes Cluster on CentOS 7.4 with kubeadm

1. Environment Preparation

IP              hostname
10.180.249.215  master215.k8s
10.180.249.216  node216.k8s
10.180.249.217  node217.k8s

1.1 Disable the Firewall (all nodes)

systemctl stop firewalld && systemctl disable firewalld

1.2 Disable SELinux (all nodes)

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

1.3 Disable Swap (all nodes)

swapoff -a
sed -i 's/^.*swap/#&/g' /etc/fstab

1.4 Configure hosts (all nodes)

cat >> /etc/hosts <<EOF
10.180.249.215 master215.k8s
10.180.249.216 node216.k8s
10.180.249.217 node217.k8s
EOF

1.5 Install the ntpd Service (all nodes)

yum install -y ntp ntpdate

On master215.k8s:
vim /etc/ntp.conf
restrict 10.180.249.254 mask 255.255.254.0 nomodify notrap
server 127.127.1.0
fudge 127.127.1.0 stratum 8
# Note: 10.180.249.254 and 255.255.254.0 are the gateway and subnet mask of the network segment the cluster is on

On node216.k8s:
vim /etc/ntp.conf
restrict 10.180.249.254 mask 255.255.254.0 nomodify notrap
server master215.k8s prefer
server 127.127.1.0
fudge 127.127.1.0 stratum 9

On node217.k8s:
vim /etc/ntp.conf
server master215.k8s prefer
server node216.k8s

Start ntpd on master215.k8s first. Before starting ntpd on the remaining nodes, run a one-off sync on node216.k8s and node217.k8s against master215.k8s.
Start the ntpd service on master215.k8s:
systemctl start ntpd && systemctl enable ntpd

Sync the time on node216.k8s and node217.k8s:
ntpdate master215.k8s

Start the ntpd service on node216.k8s and node217.k8s:
systemctl start ntpd && systemctl enable ntpd

Check the NTP status:
ntpq -p
'*' marks the clock source currently in use; '+' marks sources that are also usable as NTP sources.

1.6 Configure Passwordless SSH (master215.k8s)

Passwordless SSH is optional; the cluster itself does not require it.
ssh-keygen -t rsa
ssh-copy-id master215.k8s
ssh-copy-id node216.k8s
ssh-copy-id node217.k8s
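
To confirm passwordless login works, a quick check from master215.k8s (a minimal sketch, assuming the keys were copied as above):
for node in master215.k8s node216.k8s node217.k8s; do ssh $node hostname; done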

1.7 Tune Kernel Parameters (all nodes)

cat > /etc/sysctl.d/kubernetes.conf << EOF
# Pass bridged IPv4 traffic to iptables chains
net.bridge.bridge-nf-call-iptables=1
# Pass bridged IPv6 traffic to ip6tables chains
net.bridge.bridge-nf-call-ip6tables=1
# Enable IP forwarding
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# Avoid swap; only fall back to it when the system is about to OOM
vm.swappiness=0
# Do not check whether enough physical memory is available before allocating
vm.overcommit_memory=1
# Do not panic on OOM
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
#net.netfilter.nf_conntrack_max=2310720
EOF

# Apply the changes:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf
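
To make br_netfilter load on every boot as well, and to spot-check that the settings took effect, something like the following can be used (the .conf file name is arbitrary):
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward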

1.8 Configure rsyslogd and systemd-journald (all nodes)

# Directory for persistent log storage
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf << EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Cap total disk usage at 10G
SystemMaxUse=10G
# Cap a single log file at 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF

systemctl restart systemd-journald
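
To confirm journald is now persisting logs and to see how much space they use:
journalctl --disk-usage
ls /var/log/journal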

1.9 Load the IPVS Modules (all nodes)

Install the prerequisites:
yum install -y ipset ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
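
Note that loading these modules does not by itself switch kube-proxy to IPVS mode. After the cluster is up (section 4), kube-proxy can be switched over by editing its ConfigMap; a hedged sketch:
kubectl -n kube-system edit configmap kube-proxy          # set mode: "ipvs" in config.conf
kubectl -n kube-system delete pod -l k8s-app=kube-proxy   # recreate the kube-proxy pods to pick up the change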

2. Install Docker

2.1 Remove Old Docker Versions (all nodes)

sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

2.2 Configure the Alibaba Cloud Repositories (all nodes)

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast -y
yum install epel-release container-selinux -y

2.3 Install Docker (all nodes)

yum update -y && yum install -y docker-ce

Log in to https://www.aliyun.com/ and navigate: Products and Services -> Container Registry -> Mirror Acceleration -> Accelerator -> CentOS to obtain your accelerator address.
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://p3fsffkg.mirror.aliyuncs.com"]
}
EOF

systemctl daemon-reload && systemctl restart docker && systemctl enable docker
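
To verify that the mirror was picked up by the daemon:
docker info | grep -A 1 "Registry Mirrors"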

3. Install the Kubernetes (K8s) Cluster

3.1 Configure the K8s Repository (all nodes)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF

yum -y install kubeadm kubectl kubelet
systemctl start kubelet && systemctl enable kubelet
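
The command above installs the latest packages from the repository, while the images used below are v1.18.2. To keep everything aligned, the package versions can be pinned instead (optional; assumes the repository still carries 1.18.2):
yum -y install kubeadm-1.18.2 kubectl-1.18.2 kubelet-1.18.2

Also note that kubelet will restart in a loop until kubeadm init (or join) has run; this is expected.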

3.2 Initialize the Control-Plane Node (master215.k8s)

List the images kubeadm needs:
kubeadm config images list

Pulling the images directly fails, because k8s.gcr.io is unreachable:
kubeadm config images pull

3.3 Pull the Images Locally (master215.k8s)

In the Alibaba Cloud console, navigate: Products and Services -> Container Registry -> Image Center -> Image Search -> google_containers/kube-apiserver.

3.3.1 Log In to Your Alibaba Cloud Account (master215.k8s)

docker login --username=<your-aliyun-account> registry.cn-beijing.aliyuncs.com

3.3.2 Pull the Images (master215.k8s)

Pull each required image from the Alibaba Cloud public registry:
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7

3.4 Retag the Downloaded Images (master215.k8s)


docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2 k8s.gcr.io/kube-apiserver:v1.18.2
docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2 k8s.gcr.io/kube-controller-manager:v1.18.2
docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2 k8s.gcr.io/kube-scheduler:v1.18.2
docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2 k8s.gcr.io/kube-proxy:v1.18.2
docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7

3.5 Remove the Original Downloaded Images (master215.k8s)

docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2
docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2
docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2
docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2
docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
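
Sections 3.3.2 through 3.5 can also be collapsed into one loop. A minimal sketch, assuming the same registry prefix and image list as above:
#!/bin/bash
# Pull from the Alibaba Cloud mirror, retag to k8s.gcr.io, then remove the original reference.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for image in kube-apiserver:v1.18.2 kube-controller-manager:v1.18.2 kube-scheduler:v1.18.2 \
             kube-proxy:v1.18.2 pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
  docker image pull $MIRROR/$image
  docker image tag $MIRROR/$image k8s.gcr.io/$image
  docker image rm -f $MIRROR/$image
done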

4. Initialize the K8s Cluster (master215.k8s)

kubeadm init normally downloads the images itself; since they were pulled manually above, that step is skipped automatically.
With the --dry-run flag, the command prints everything kubeadm would do to standard output without actually deploying anything.

kubeadm init --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16 --dry-run   # does not actually deploy
kubeadm init --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16             # the actual deployment command

After initialization completes, follow the printed instructions:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
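
At this point kubectl should answer, although the node will report NotReady until the network plugin from section 5 is deployed:
kubectl get nodes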

4.1 Join the Worker Nodes (node216.k8s, node217.k8s)

Run the join command printed by kubeadm init on node216.k8s and node217.k8s:
kubeadm join 10.180.249.215:6443 --token czuixj.nr037kr9mjv6247l \
    --discovery-token-ca-cert-hash sha256:61ca18ced02af8764d6db8f7ec5dd0716b218e8590cc19b3f5384ed2207a6fdf
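
If the token has expired (the default lifetime is 24 hours), a fresh join command can be generated on master215.k8s:
kubeadm token create --print-join-command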


5. Deploy the Network Plugin (master215.k8s)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
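
The flannel pods should reach Running on every node shortly afterwards; a quick check (the app=flannel label comes from the manifest above):
kubectl -n kube-system get pods -l app=flannel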

6. Save the Master's Images and Copy Them to the Worker Nodes (master215.k8s)

docker image save k8s.gcr.io/kube-proxy:v1.18.2 k8s.gcr.io/pause:3.2 quay.io/coreos/flannel:v0.12.0-amd64 -o k8s-node-v1.18.2.tar

scp k8s-node-v1.18.2.tar node216.k8s:$PWD
scp k8s-node-v1.18.2.tar node217.k8s:$PWD

6.1 Load the Images on the Worker Nodes (node216.k8s, node217.k8s)

[root@node216 ~]# docker image load -i k8s-node-v1.18.2.tar
[root@node217 ~]# docker image load -i k8s-node-v1.18.2.tar

6.2 Check the Nodes (master215.k8s)

Check the cluster nodes:
kubectl get nodes

Check the pods:
kubectl get pods -n kube-system

7. Configure the Worker Nodes to Manage the API Server

7.1 Create the ~/.kube Directory on the Worker Nodes (node216.k8s, node217.k8s)

mkdir -p $HOME/.kube

7.2 Copy admin.conf from the Master to the Workers, Renamed to config (master215.k8s)

scp /etc/kubernetes/admin.conf node216.k8s:~/.kube/config
scp /etc/kubernetes/admin.conf node217.k8s:~/.kube/config
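
After the copy, each worker can query the API server directly; for example on node216.k8s:
kubectl get nodes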

8. Set Up a Harbor Registry (master215.k8s)

8.1 Download the Installation Packages (master215.k8s)

8.1.1 harbor-offline-installer-v1.9.4.tgz

https://github.com/goharbor/harbor/releases

8.1.2 docker-compose-Linux-x86_64


Download and install in one command:
curl -L "https://github.com/docker/compose/releases/download/1.26.0-rc4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/sbin/docker-compose && chmod +x /usr/sbin/docker-compose

Or download manually and install by hand:
wget https://github.com/docker/compose/releases/download/1.26.0-rc4/docker-compose-Linux-x86_64
cp docker-compose-Linux-x86_64 /usr/sbin/docker-compose
chmod +x /usr/sbin/docker-compose
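
Either way, a quick sanity check that the binary is on the PATH and executable:
docker-compose --version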

8.2 Install Harbor (master215.k8s)

tar -zxvf harbor-offline-installer-v1.9.4.tgz -C /opt/
cd /opt/harbor/
sed -i 's/reg.mydomain.com/master215.k8s/g' harbor.yml   # set the registry hostname
sed -i 's/80/88/' harbor.yml                             # change the HTTP port from 80 to 88
./prepare
./install.sh
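
If the installation succeeded, all Harbor containers should be up; this can be checked from the same directory:
docker-compose ps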

8.3 Update the Docker Client Configuration (all nodes)

vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://p3fsffkg.mirror.aliyuncs.com"],
  "insecure-registries": ["http://master215.k8s:88"]
}

systemctl restart docker

Restart Harbor with docker-compose (master215.k8s):
cd /opt/harbor/
docker-compose stop
docker-compose start

8.4 Access Harbor

http://master215.k8s:88/
username: admin
password: Harbor12345

9. Push Images to Harbor

Create a project named k8s in the Harbor registry.

9.1 Log In to Harbor

docker login master215.k8s:88

9.2 Tag and Push the Images

docker tag k8s.gcr.io/kube-proxy:v1.18.2 master215.k8s:88/k8s/k8s.gcr.io/kube-proxy:v1.18.2
docker push master215.k8s:88/k8s/k8s.gcr.io/kube-proxy:v1.18.2


docker tag k8s.gcr.io/kube-scheduler:v1.18.2 master215.k8s:88/k8s/k8s.gcr.io/kube-scheduler:v1.18.2
docker tag k8s.gcr.io/kube-apiserver:v1.18.2 master215.k8s:88/k8s/k8s.gcr.io/kube-apiserver:v1.18.2
docker tag k8s.gcr.io/kube-controller-manager:v1.18.2 master215.k8s:88/k8s/k8s.gcr.io/kube-controller-manager:v1.18.2
docker tag quay.io/coreos/flannel:v0.12.0-amd64 master215.k8s:88/k8s/quay.io/coreos/flannel:v0.12.0-amd64
docker tag k8s.gcr.io/pause:3.2 master215.k8s:88/k8s/k8s.gcr.io/pause:3.2
docker tag k8s.gcr.io/coredns:1.6.7 master215.k8s:88/k8s/k8s.gcr.io/coredns:1.6.7
docker tag goharbor/chartmuseum-photon:v0.9.0-v1.9.4 master215.k8s:88/k8s/goharbor/chartmuseum-photon:v0.9.0-v1.9.4
docker tag goharbor/harbor-migrator:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-migrator:v1.9.4
docker tag goharbor/redis-photon:v1.9.4 master215.k8s:88/k8s/goharbor/redis-photon:v1.9.4
docker tag goharbor/clair-photon:v2.1.0-v1.9.4 master215.k8s:88/k8s/goharbor/clair-photon:v2.1.0-v1.9.4
docker tag goharbor/notary-server-photon:v0.6.1-v1.9.4 master215.k8s:88/k8s/goharbor/notary-server-photon:v0.6.1-v1.9.4
docker tag goharbor/notary-signer-photon:v0.6.1-v1.9.4 master215.k8s:88/k8s/goharbor/notary-signer-photon:v0.6.1-v1.9.4
docker tag goharbor/harbor-registryctl:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-registryctl:v1.9.4
docker tag goharbor/registry-photon:v2.7.1-patch-2819-2553-v1.9.4 master215.k8s:88/k8s/goharbor/registry-photon:v2.7.1-patch-2819-2553-v1.9.4
docker tag goharbor/nginx-photon:v1.9.4 master215.k8s:88/k8s/goharbor/nginx-photon:v1.9.4
docker tag goharbor/harbor-log:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-log:v1.9.4
docker tag goharbor/harbor-jobservice:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-jobservice:v1.9.4
docker tag goharbor/harbor-core:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-core:v1.9.4
docker tag goharbor/harbor-portal:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-portal:v1.9.4
docker tag goharbor/harbor-db:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-db:v1.9.4
docker tag goharbor/prepare:v1.9.4 master215.k8s:88/k8s/goharbor/prepare:v1.9.4
docker tag k8s.gcr.io/etcd:3.4.3-0 master215.k8s:88/k8s/k8s.gcr.io/etcd:3.4.3-0



docker push master215.k8s:88/k8s/k8s.gcr.io/kube-scheduler:v1.18.2
docker push master215.k8s:88/k8s/k8s.gcr.io/kube-apiserver:v1.18.2
docker push master215.k8s:88/k8s/k8s.gcr.io/kube-controller-manager:v1.18.2
docker push master215.k8s:88/k8s/quay.io/coreos/flannel:v0.12.0-amd64
docker push master215.k8s:88/k8s/k8s.gcr.io/pause:3.2
docker push master215.k8s:88/k8s/k8s.gcr.io/coredns:1.6.7
docker push master215.k8s:88/k8s/goharbor/chartmuseum-photon:v0.9.0-v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-migrator:v1.9.4
docker push master215.k8s:88/k8s/goharbor/redis-photon:v1.9.4
docker push master215.k8s:88/k8s/goharbor/clair-photon:v2.1.0-v1.9.4
docker push master215.k8s:88/k8s/goharbor/notary-server-photon:v0.6.1-v1.9.4
docker push master215.k8s:88/k8s/goharbor/notary-signer-photon:v0.6.1-v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-registryctl:v1.9.4
docker push master215.k8s:88/k8s/goharbor/registry-photon:v2.7.1-patch-2819-2553-v1.9.4
docker push master215.k8s:88/k8s/goharbor/nginx-photon:v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-log:v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-jobservice:v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-core:v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-portal:v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-db:v1.9.4
docker push master215.k8s:88/k8s/goharbor/prepare:v1.9.4
docker push master215.k8s:88/k8s/k8s.gcr.io/etcd:3.4.3-0
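
The tag/push pairs above can also be generated in one loop. A minimal sketch, assuming the images are already present locally and the k8s project exists:
#!/bin/bash
# Retag every local k8s.gcr.io / goharbor / flannel image for Harbor and push it.
HARBOR=master215.k8s:88/k8s
docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep -E '^(k8s\.gcr\.io|goharbor|quay\.io/coreos)/' \
  | while read image; do
      docker tag "$image" "$HARBOR/$image"
      docker push "$HARBOR/$image"
    done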

