CentOS 7 k8s cluster setup: 1 master + 2 nodes
0 Environment
Software | Version |
---|---|
centos | CentOS Linux release 7.8.2003 |
docker | Docker version 19.03.8, build afacb8b |
k8s | 1.18.2 |
IP | Role | Hostname |
---|---|---|
192.168.6.101 | master | master01 |
192.168.6.102 | node | node01 |
192.168.6.103 | node | node02 |
1 CentOS 7 preparation (all nodes)
1.1 Switch to the Aliyun yum mirror
If step [2.2 Install docker-ce] later fails, skip this step.
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum update -y
yum clean all
yum makecache
1.2 Disable the firewall
firewall-cmd --state #check firewall status
systemctl stop firewalld.service #stop firewalld
systemctl disable firewalld.service #keep firewalld from starting at boot
1.3 Disable SELinux
getenforce #check SELinux status
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled (also run setenforce 0 to switch SELinux off without a reboot).
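The manual edit can also be scripted with sed. A sketch, demonstrated against a scratch copy of the config; on a real node, target /etc/selinux/config instead:

```shell
# Demonstrate the edit on a scratch copy; on a real node use /etc/selinux/config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
# Flip enforcing -> disabled in place.
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # prints SELINUX=disabled
```

The same sed line against /etc/selinux/config makes the change persistent across reboots.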
1.4 Install bash-completion for command completion
yum install -y bash-completion
2 Install Docker (all nodes)
2.1 Remove any existing Docker packages
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
2.2 Install docker-ce
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker #start docker
systemctl enable docker #start docker at boot
docker -v #verify that docker installed successfully
2.3 Configure an Aliyun registry mirror to speed up Docker pulls
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://[your-accelerator-id].mirror.aliyuncs.com","https://registry.docker-cn.com"]
}
EOF
Restart Docker for the change to take effect:
systemctl daemon-reload
systemctl restart docker
To obtain your own accelerator address, see the Aliyun Container Registry console.
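A malformed daemon.json will prevent dockerd from starting at all, so it is worth validating the JSON before restarting. A sketch using python3's built-in json.tool against a scratch file; on a real node, point it at /etc/docker/daemon.json:

```shell
# Write a sample mirror config to a scratch file and validate it;
# on a real node check /etc/docker/daemon.json instead.
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
python3 -m json.tool "$f" > /dev/null && echo "daemon.json: valid JSON"
```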
2.4 Run docker without sudo (optional)
sudo groupadd docker #create the docker group (if it does not exist yet)
sudo gpasswd -a ${USER} docker #add the current user to the docker group
sudo systemctl daemon-reload #reload unit files
sudo systemctl restart docker #restart the docker service
Log out and back in (or run newgrp docker) for the group change to take effect.
3 Kubernetes cluster
3.1 Configure hosts and hostname (all nodes)
Edit /etc/hosts and add:
199.232.68.133 raw.githubusercontent.com #optional; try this line if downloads from GitHub fail
192.168.6.101 master01
192.168.6.102 node01
192.168.6.103 node02
On each host, set the matching name in /etc/hostname (or run hostnamectl set-hostname <name>).
3.2 Disable swap (all nodes)
Turn swap off immediately:
swapoff -a
Then edit /etc/fstab so it stays off after reboot:
vi /etc/fstab
Comment out the swap line:
/mnt/swap swap swap defaults 0 0
Set the vm.swappiness parameter in /etc/sysctl.conf and apply it:
echo vm.swappiness=0 >> /etc/sysctl.conf
sysctl -p
Use free -m to verify that swap is off: the Swap line should show 0 total.
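The same check can be done straight from the kernel; once swapoff -a has run and the fstab entry is commented out, SwapTotal in /proc/meminfo reads 0 kB:

```shell
# Read total swap directly from the kernel; 0 means swap is fully disabled.
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
echo "SwapTotal: ${swap_kb} kB"
```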
3.3 Kernel parameters (all nodes)
Load the br_netfilter module first so the bridge sysctls below exist, then set them:
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
3.4 Add the Kubernetes yum repo (all nodes)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Refresh the yum cache
yum clean all
yum -y makecache
Install kubelet, kubeadm, and kubectl (append a version, e.g. kubelet-1.18.2-0, to pin the release shown below):
yum install -y kubelet kubeadm kubectl
Installation result:
Installed:
kubeadm.x86_64 0:1.18.2-0 kubectl.x86_64 0:1.18.2-0 kubelet.x86_64 0:1.18.2-0
Dependency Installed:
conntrack-tools.x86_64 0:1.4.4-7.el7
cri-tools.x86_64 0:1.13.0-0
kubernetes-cni.x86_64 0:0.7.5-0
libnetfilter_cthelper.x86_64 0:1.0.0-11.el7
libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7
libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
socat.x86_64 0:1.7.3.2-2.el7
Enable kubelet as a system service and start it (it will restart in a loop until kubeadm init or join runs; that is expected):
systemctl enable kubelet && systemctl start kubelet
3.5 Master setup
Generate the default configuration file
cd ~
mkdir k8s
cd k8s
kubeadm config print init-defaults > kubeadm.conf
Edit kubeadm.conf as follows.
Change imageRepository to:
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
Change advertiseAddress to master01's IP:
localAPIEndpoint:
advertiseAddress: 192.168.6.101
Set kubernetesVersion to the actual Kubernetes version:
kubernetesVersion: v1.18.2
Set podSubnet (under the networking section; flannel expects this range):
podSubnet: 10.244.0.0/16
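Taken together, a sketch of the relevant parts of kubeadm.conf after the four edits; the surrounding fields and their order come from kubeadm config print init-defaults, and serviceSubnet and bindPort are the defaults:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.6.101   # master01's IP
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.18.2
networking:
  podSubnet: 10.244.0.0/16      # added; matches flannel's default
  serviceSubnet: 10.96.0.0/12   # default
```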
Pull the images listed in kubeadm.conf:
kubeadm config images pull --config ./kubeadm.conf
Initialize the cluster from kubeadm.conf:
sudo kubeadm init --config ./kubeadm.conf
On success the output ends as shown below. Record the kubeadm join 192.168.6.101:6443 ... command; it is needed later to join the nodes (if you lose it, regenerate it with kubeadm token create --print-join-command).
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.6.101:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:d60829461dcbfd149c00b41e2baa3c500d9a459840578faf2c0a6e33635fc9fd
Create the kubeconfig, set its ownership, and restart the kubelet service:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo systemctl enable kubelet
sudo systemctl restart kubelet
Deploy the flannel overlay network:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
Verify that master01 is configured correctly:
kubectl get nodes
On success, master01 is listed with STATUS Ready (readiness can take a minute after flannel starts).
Copy the config files from master01 to node01 and node02 via scp (lovewinner is the example user account on each node):
sudo scp /etc/kubernetes/admin.conf lovewinner@node01:/home/lovewinner
sudo scp /home/lovewinner/k8s/kube-flannel.yml lovewinner@node01:/home/lovewinner
sudo scp /etc/kubernetes/admin.conf lovewinner@node02:/home/lovewinner
sudo scp /home/lovewinner/k8s/kube-flannel.yml lovewinner@node02:/home/lovewinner
3.6 Node setup (on node01 and node02)
Create the kubeconfig and set its ownership
mkdir -p $HOME/.kube
sudo cp -i $HOME/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join the cluster using the command recorded from kubeadm init:
sudo kubeadm join 192.168.6.101:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:d60829461dcbfd149c00b41e2baa3c500d9a459840578faf2c0a6e33635fc9fd
Apply the flannel manifest:
cd /home/lovewinner
kubectl apply -f kube-flannel.yml
3.7 Install the dashboard (on master01)
Download the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
Edit the kubernetes-dashboard Service in the manifest to expose it on a NodePort:
spec:
  type: NodePort #add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443 #add this line
Apply the manifest
kubectl apply -f recommended.yaml
Open https://master01:30443 in a browser to reach the dashboard login page (accept the self-signed-certificate warning).
Create dashboard-adminuser.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Create the login user
kubectl apply -f dashboard-adminuser.yaml
Retrieve the login token
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
The output looks like this:
Name: admin-user-token-6hw6v
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 5b5af84f-0266-47f3-9b9a-5d51117be850
Type: kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6Inp5NmRITFR4SjNGaWN2b1pQd2RyRnlqcjZDSmJOZmV3VnhCcFhYS2RSeWsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZodzZ2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1YjVhZjg0Zi0wMjY2LTQ3ZjMtOWI5YS01ZDUxMTE3YmU4NTAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.v2-KCiTnQcK60Sw_Ey_0p3Fvln8U1HbzlVV9JBcPu3eXXzpMliRPDQEJJ9V0wewyZk5zo6pub2g7Gv8xla2H3krosnAfucVnPKpRgyxVtKdhdst2SdQr0TZZ0tTd-wGq0Gjoti1UQcZsvQaNeE6NALrDdeEcuziMGOZVnW1qpwx8sBceK4h0GVas3LE7j1vQPBH3w_qaNA_JY_NeQNZe0UCZXCNkBjtTnaWfQh2lXcal5UuzUWqsEcd42t9qh63GyX04wwG85jJbP0zerwpB0M8LN3axzM-hojxvOwYSvUbR7Ws4igKYDbdFez6_oViqE3SIvQJU1Xs-twOm9zzQ9A
ca.crt: 1025 bytes
namespace: 20 bytes
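The long token above is a JSON Web Token: three base64url segments joined by dots, whose middle segment is plain JSON naming the service account. A sketch that round-trips a sample payload (the payload string here is illustrative, not the real secret):

```shell
# Encode a sample JWT payload the way the real token does (base64url, no padding)...
payload=$(printf '{"sub":"system:serviceaccount:kubernetes-dashboard:admin-user"}' \
  | base64 -w0 | tr -d '=' | tr '+/' '-_')
# ...then decode it: restore the padding and the standard base64 alphabet.
pad=$(( (4 - ${#payload} % 4) % 4 ))
padding=$(printf '%*s' "$pad" '' | tr ' ' '=')
printf '%s%s' "$payload" "$padding" | tr '_-' '/+' | base64 -d
# prints {"sub":"system:serviceaccount:kubernetes-dashboard:admin-user"}
```

Decoding the middle segment of the real token the same way is a quick check that you copied the right secret for the right service account.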