Deploying a Kubernetes Cluster on CentOS 7.4 with kubeadm


1. Environment Preparation

IP              Hostname
10.180.249.215  master215.k8s
10.180.249.216  node216.k8s
10.180.249.217  node217.k8s

1.1 Disable the Firewall (all nodes)

systemctl stop firewalld && systemctl disable firewalld
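An optional check that firewalld is stopped and will not come back after a reboot:

systemctl is-active firewalld   # should print "inactive"
systemctl is-enabled firewalld  # should print "disabled"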

1.2 Disable SELinux (all nodes)

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
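An optional check that SELinux is now permissive and disabled in the config:

getenforce                          # should print "Permissive"
grep ^SELINUX= /etc/selinux/config  # should print "SELINUX=disabled"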

1.3 Disable Swap (all nodes)

swapoff -a
sed -i 's/^.*swap/#&/g' /etc/fstab
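Optionally verify that no swap is active (the Swap row should be all zeros):

free -m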

1.4 Configure hosts (all nodes)

cat >> /etc/hosts <<EOF
10.180.249.215 master215.k8s
10.180.249.216 node216.k8s
10.180.249.217 node217.k8s
EOF
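A quick sanity check that the names resolve:

getent hosts master215.k8s node216.k8s node217.k8s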

1.5 Install the ntpd Service (all nodes)

yum install -y ntp ntpdate

On master215.k8s:
vim /etc/ntp.conf
restrict 10.180.249.254 mask 255.255.254.0 nomodify notrap
server 127.127.1.0
fudge 127.127.1.0 stratum 8
# Note: 10.180.249.254 and 255.255.254.0 are the gateway and netmask of the network segment the cluster lives on.

On node216.k8s:
vim /etc/ntp.conf
restrict 10.180.249.254 mask 255.255.254.0 nomodify notrap
server master215.k8s prefer
server 127.127.1.0
fudge 127.127.1.0 stratum 9

On node217.k8s:
vim /etc/ntp.conf
server master215.k8s prefer
server node216.k8s

After ntpd has been started on master215.k8s, and before starting ntpd on the remaining nodes, run a one-off sync against master215.k8s on node216.k8s and node217.k8s.
Start the ntpd service on master215.k8s:
systemctl start ntpd && systemctl enable ntpd

Sync the time on node216.k8s and node217.k8s:
ntpdate master215.k8s

Start the ntpd service on node216.k8s and node217.k8s:
systemctl start ntpd && systemctl enable ntpd

Check the NTP status:
ntpq -p
'*' marks the clock source currently in use; '+' marks sources that are also usable as NTP sources.
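ntpstat (shipped with the ntp package on CentOS 7) gives a one-line summary; it may take a few minutes after startup before it reports "synchronised":

ntpstat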

1.6 Configure Passwordless SSH (master215.k8s)

Passwordless SSH is optional; the cluster itself does not require it.
ssh-keygen -t rsa
ssh-copy-id master215.k8s
ssh-copy-id node216.k8s
ssh-copy-id node217.k8s
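To confirm the keys were copied, each of these should print the remote hostname without prompting for a password:

ssh node216.k8s hostname
ssh node217.k8s hostname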

1.7 Tune Kernel Parameters (all nodes)

cat > /etc/sysctl.d/kubernetes.conf << EOF
# Enable iptables filtering of bridged IPv4 traffic
net.bridge.bridge-nf-call-iptables=1
# Enable ip6tables filtering of bridged IPv6 traffic
net.bridge.bridge-nf-call-ip6tables=1
# Enable IP forwarding
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# Avoid using swap unless the system is out of memory
vm.swappiness=0
# Do not check whether enough physical memory is available
vm.overcommit_memory=1
# Do not panic on OOM
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
#net.netfilter.nf_conntrack_max=2310720
EOF

# Load the br_netfilter module and apply the changes:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf
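Optionally confirm the key parameters took effect, and persist the module load across reboots via systemd's modules-load.d mechanism (the file name below is arbitrary):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf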

1.8 Configure rsyslogd and systemd journald (all nodes)

# Directories for persistent log storage
mkdir -p /var/log/journal
mkdir -p /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf << EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress archived logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Cap total disk usage at 10G
SystemMaxUse=10G
# Cap each journal file at 200M
SystemMaxFileSize=200M
# Retain logs for two weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF

systemctl restart systemd-journald
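An optional check that journald restarted cleanly and is writing to the persistent directory:

journalctl --disk-usage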

1.9 Load the IPVS Modules (all nodes)

Install the dependencies:
yum install -y ipset ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

2. Install Docker

2.1 Remove Old Docker Versions (all nodes)

sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

2.2 Configure the Alibaba Cloud Repository (all nodes)

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast -y
yum install epel-release container-selinux -y

2.3 Install Docker (all nodes)

yum update -y && yum install -y docker-ce

Log in at https://www.aliyun.com/ and navigate: Products & Services -> Container Registry -> Image Accelerator -> CentOS to obtain your personal mirror accelerator address, then configure it:
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://p3fsffkg.mirror.aliyuncs.com"]
}
EOF

systemctl daemon-reload && systemctl restart docker && systemctl enable docker

3. Install the Kubernetes (K8s) Cluster

3.1 Configure the K8s Repository (all nodes)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF

yum -y install kubeadm kubectl kubelet
systemctl start kubelet && systemctl enable kubelet
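The command above installs the latest packages from the repo. Since the rest of this walkthrough uses v1.18.2 images, you may prefer to pin the packages to that version instead (assuming the repo still carries the 1.18.2-0 release). Also note that kubelet will crash-loop until kubeadm init or kubeadm join has run; that is expected.

yum -y install kubeadm-1.18.2-0 kubelet-1.18.2-0 kubectl-1.18.2-0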

3.2 Initialize the Master Node (master215.k8s)

List the images that will be needed:
kubeadm config images list

Pulling the images directly reports an error (k8s.gcr.io is not reachable from this environment):
kubeadm config images pull

3.3 Pull the Images Locally (master215.k8s)

Navigate: Products & Services -> Container Registry -> Image Hub -> Image Search -> google_containers/kube_apiserver

3.3.1 Log In to Your Alibaba Cloud Account (master215.k8s)

docker login --username=<your-aliyun-account> registry.cn-beijing.aliyuncs.com

3.3.2 Pull the Images (master215.k8s)

Pull each required image from the Alibaba Cloud public registry:
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7

3.4 Re-tag the Downloaded Images (master215.k8s)


docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2 k8s.gcr.io/kube-apiserver:v1.18.2
docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2 k8s.gcr.io/kube-controller-manager:v1.18.2
docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2 k8s.gcr.io/kube-scheduler:v1.18.2
docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2 k8s.gcr.io/kube-proxy:v1.18.2
docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7

3.5 Remove the Original Mirror Tags (master215.k8s)

docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2
docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2
docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2
docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2
docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
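The pull/tag/remove steps in 3.3.2 through 3.5 can also be collapsed into a single loop; a minimal sketch, assuming the same mirror prefix and image list as above:

MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.18.2 kube-controller-manager:v1.18.2 \
           kube-scheduler:v1.18.2 kube-proxy:v1.18.2 \
           pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
  docker image pull ${MIRROR}/${img}                   # pull from the Aliyun mirror
  docker image tag ${MIRROR}/${img} k8s.gcr.io/${img}  # re-tag to the name kubeadm expects
  docker image rm -f ${MIRROR}/${img}                  # drop the mirror tag
done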

4. Initialize the K8s Cluster (master215.k8s)

Initialization normally downloads the images automatically; since they have already been pulled locally, that step is skipped.
With the --dry-run flag, the command prints everything kubeadm would do to standard output without actually deploying anything.

kubeadm init --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16 --dry-run # nothing is deployed
kubeadm init --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16 # the actual deployment

After initialization completes, follow the printed instructions:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
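At this point kubectl should work on the master; the node will report NotReady until a network plugin is deployed in section 5:

kubectl get nodes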

4.1 Join the Worker Nodes (node216.k8s, node217.k8s)

Run the following on node216.k8s and node217.k8s:
kubeadm join 10.180.249.215:6443 --token czuixj.nr037kr9mjv6247l \
    --discovery-token-ca-cert-hash sha256:61ca18ced02af8764d6db8f7ec5dd0716b218e8590cc19b3f5384ed2207a6fdf
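The bootstrap token printed by kubeadm init expires after 24 hours by default; if it has expired, a fresh join command can be generated on the master with:

kubeadm token create --print-join-command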


5. Deploy the Network Plugin (master215.k8s)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
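You can watch the flannel and CoreDNS pods come up, after which the nodes should flip to Ready:

kubectl get pods -n kube-system -w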

6. Package the Master's Images and Send Them to the Nodes (master215.k8s)

docker image save k8s.gcr.io/kube-proxy:v1.18.2 k8s.gcr.io/pause:3.2 quay.io/coreos/flannel:v0.12.0-amd64 -o k8s-node-v1.18.2.tar

scp k8s-node-v1.18.2.tar node216.k8s:$PWD
scp k8s-node-v1.18.2.tar node217.k8s:$PWD

6.1 Load the Images on the Nodes (node216.k8s, node217.k8s)

[root@node216 ~]# docker image load -i k8s-node-v1.18.2.tar
[root@node217 ~]# docker image load -i k8s-node-v1.18.2.tar

6.2 Check the Nodes (master215.k8s)

Check the cluster nodes:
kubectl get nodes

Check the pods:
kubectl get pods -n kube-system

7. Configure the Nodes to Manage the API Server

7.1 Create the ~/.kube Directory on the Nodes (node216.k8s, node217.k8s)

mkdir -p $HOME/.kube

7.2 Copy admin.conf from the Master to Each Node, Renamed to config (master215.k8s)

scp /etc/kubernetes/admin.conf node216.k8s:~/.kube/config
scp /etc/kubernetes/admin.conf node217.k8s:~/.kube/config
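Afterwards kubectl on either node should be able to reach the API server:

kubectl get nodes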

8. Set Up a Harbor Registry (master215.k8s)

8.1 Download the Installation Packages (master215.k8s)

8.1.1 harbor-offline-installer-v1.9.4.tgz

https://github.com/goharbor/harbor/releases

8.1.2 docker-compose-Linux-x86_64


Download and install in one command:
curl -L "https://github.com/docker/compose/releases/download/1.26.0-rc4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/sbin/docker-compose && chmod +x /usr/sbin/docker-compose

Or download and install manually:
wget https://github.com/docker/compose/releases/download/1.26.0-rc4/docker-compose-Linux-x86_64
cp docker-compose-Linux-x86_64 /usr/sbin/docker-compose
chmod +x /usr/sbin/docker-compose
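Verify the binary is usable:

docker-compose --version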

8.2 Install Harbor (master215.k8s)

tar -zxvf harbor-offline-installer-v1.9.4.tgz -C /opt/
cd /opt/harbor/
sed -i 's/reg.mydomain.com/master215.k8s/g' harbor.yml
sed -i 's/80/88/' harbor.yml
./prepare
./install.sh
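Once install.sh finishes, all Harbor containers should show Up (run this from /opt/harbor, where the generated docker-compose.yml lives):

docker-compose ps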

8.3 Edit the Docker Client Configuration (all nodes)

vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://p3fsffkg.mirror.aliyuncs.com"],
  "insecure-registries": ["http://master215.k8s:88"]
}

systemctl restart docker

Restart the Harbor services with docker-compose (master215.k8s), since restarting Docker stops its containers:
cd /opt/harbor/
docker-compose stop
docker-compose start

8.4 Access the Web UI

http://master215.k8s:88/
username: admin
password: Harbor12345

9. Push Images to Harbor

Create a project named k8s in the Harbor registry.

9.1 Log In to Harbor

docker login master215.k8s:88

9.2 Tag and Push the Images

docker tag k8s.gcr.io/kube-proxy:v1.18.2 master215.k8s:88/k8s/k8s.gcr.io/kube-proxy:v1.18.2
docker push master215.k8s:88/k8s/k8s.gcr.io/kube-proxy:v1.18.2


docker tag k8s.gcr.io/kube-scheduler:v1.18.2 master215.k8s:88/k8s/k8s.gcr.io/kube-scheduler:v1.18.2
docker tag k8s.gcr.io/kube-apiserver:v1.18.2 master215.k8s:88/k8s/k8s.gcr.io/kube-apiserver:v1.18.2
docker tag k8s.gcr.io/kube-controller-manager:v1.18.2 master215.k8s:88/k8s/k8s.gcr.io/kube-controller-manager:v1.18.2
docker tag quay.io/coreos/flannel:v0.12.0-amd64 master215.k8s:88/k8s/quay.io/coreos/flannel:v0.12.0-amd64
docker tag k8s.gcr.io/pause:3.2 master215.k8s:88/k8s/k8s.gcr.io/pause:3.2
docker tag k8s.gcr.io/coredns:1.6.7 master215.k8s:88/k8s/k8s.gcr.io/coredns:1.6.7
docker tag goharbor/chartmuseum-photon:v0.9.0-v1.9.4 master215.k8s:88/k8s/goharbor/chartmuseum-photon:v0.9.0-v1.9.4
docker tag goharbor/harbor-migrator:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-migrator:v1.9.4
docker tag goharbor/redis-photon:v1.9.4 master215.k8s:88/k8s/goharbor/redis-photon:v1.9.4
docker tag goharbor/clair-photon:v2.1.0-v1.9.4 master215.k8s:88/k8s/goharbor/clair-photon:v2.1.0-v1.9.4
docker tag goharbor/notary-server-photon:v0.6.1-v1.9.4 master215.k8s:88/k8s/goharbor/notary-server-photon:v0.6.1-v1.9.4
docker tag goharbor/notary-signer-photon:v0.6.1-v1.9.4 master215.k8s:88/k8s/goharbor/notary-signer-photon:v0.6.1-v1.9.4
docker tag goharbor/harbor-registryctl:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-registryctl:v1.9.4
docker tag goharbor/registry-photon:v2.7.1-patch-2819-2553-v1.9.4 master215.k8s:88/k8s/goharbor/registry-photon:v2.7.1-patch-2819-2553-v1.9.4
docker tag goharbor/nginx-photon:v1.9.4 master215.k8s:88/k8s/goharbor/nginx-photon:v1.9.4
docker tag goharbor/harbor-log:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-log:v1.9.4
docker tag goharbor/harbor-jobservice:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-jobservice:v1.9.4
docker tag goharbor/harbor-core:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-core:v1.9.4
docker tag goharbor/harbor-portal:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-portal:v1.9.4
docker tag goharbor/harbor-db:v1.9.4 master215.k8s:88/k8s/goharbor/harbor-db:v1.9.4
docker tag goharbor/prepare:v1.9.4 master215.k8s:88/k8s/goharbor/prepare:v1.9.4
docker tag k8s.gcr.io/etcd:3.4.3-0 master215.k8s:88/k8s/k8s.gcr.io/etcd:3.4.3-0



docker push master215.k8s:88/k8s/k8s.gcr.io/kube-scheduler:v1.18.2
docker push master215.k8s:88/k8s/k8s.gcr.io/kube-apiserver:v1.18.2
docker push master215.k8s:88/k8s/k8s.gcr.io/kube-controller-manager:v1.18.2
docker push master215.k8s:88/k8s/quay.io/coreos/flannel:v0.12.0-amd64
docker push master215.k8s:88/k8s/k8s.gcr.io/pause:3.2
docker push master215.k8s:88/k8s/k8s.gcr.io/coredns:1.6.7
docker push master215.k8s:88/k8s/goharbor/chartmuseum-photon:v0.9.0-v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-migrator:v1.9.4
docker push master215.k8s:88/k8s/goharbor/redis-photon:v1.9.4
docker push master215.k8s:88/k8s/goharbor/clair-photon:v2.1.0-v1.9.4
docker push master215.k8s:88/k8s/goharbor/notary-server-photon:v0.6.1-v1.9.4
docker push master215.k8s:88/k8s/goharbor/notary-signer-photon:v0.6.1-v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-registryctl:v1.9.4
docker push master215.k8s:88/k8s/goharbor/registry-photon:v2.7.1-patch-2819-2553-v1.9.4
docker push master215.k8s:88/k8s/goharbor/nginx-photon:v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-log:v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-jobservice:v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-core:v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-portal:v1.9.4
docker push master215.k8s:88/k8s/goharbor/harbor-db:v1.9.4
docker push master215.k8s:88/k8s/goharbor/prepare:v1.9.4
docker push master215.k8s:88/k8s/k8s.gcr.io/etcd:3.4.3-0
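The long tag/push lists above can also be scripted; a rough sketch, assuming every locally tagged image should be mirrored into the k8s project:

REGISTRY=master215.k8s:88/k8s
docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep -v '<none>' | grep -v "^${REGISTRY}" \
  | while read img; do
      docker tag ${img} ${REGISTRY}/${img}   # prefix with the Harbor project
      docker push ${REGISTRY}/${img}
    done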

