1 Machine plan

| Machine | IP | Role |
| ------- | --------------- | ------ |
| wyl01 | 192.168.190.130 | master |
| wyl02 | 192.168.190.131 | node |
| wyl03 | 192.168.190.132 | node |
2 Requirements
Before starting, the machines used to deploy the Kubernetes cluster must meet the following requirements:
- Hardware: at least 2 GB RAM, 2 CPUs, and 30 GB of disk
- Full network connectivity between all machines in the cluster
- Internet access (needed to pull images)
- Swap disabled
3 Prepare the environment
3.1 Disable the firewall (all nodes)
systemctl stop firewalld
systemctl disable firewalld
3.2 Disable SELinux (all nodes)
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent, takes effect after reboot
setenforce 0   # temporary, effective immediately
3.3 Disable swap (all nodes)
swapoff -a   # temporary
vim /etc/fstab   # permanent: delete or comment out the swap line
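The fstab edit can also be scripted rather than done in vim. The sketch below demonstrates the substitution on a sample file at a hypothetical path /tmp/fstab.sample; on a real node the target would be /etc/fstab (GNU sed, run as root):

```shell
# Illustrative fstab content: one root mount and one swap entry.
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > /tmp/fstab.sample

# Comment out every non-comment line that mounts swap.
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]].*)|#\1|' /tmp/fstab.sample

cat /tmp/fstab.sample   # the swap line now starts with '#'
```

Commenting the line out instead of deleting it keeps the original entry around in case swap ever needs to be re-enabled.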
3.4 Add hostname-to-IP mappings (all nodes)
vim /etc/hosts
192.168.190.130 wyl01
192.168.190.131 wyl02
192.168.190.132 wyl03
3.5 Pass bridged IPv4 traffic to the iptables chains (all nodes)
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings; if the net.bridge keys are reported as missing, load the module first with: modprobe br_netfilter
sysctl --system
4 Install Docker (all nodes)
# Fetch the yum repo definition
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# Install docker
yum -y install docker-ce-18.06.1.ce-3.el7
# Enable and start docker
systemctl enable docker && systemctl start docker
# Check the installed version
docker --version
5 Install kubeadm/kubelet/kubectl (all nodes)
5.1 Add the Alibaba Cloud yum repository (all nodes)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
5.2 Install kubeadm, kubelet and kubectl (all nodes)
# For v1.15 (the kubeadm init in section 6 uses this version)
yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
systemctl enable kubelet
# Or, for v1.16
yum install -y kubelet-1.16.0 kubeadm-1.16.0 kubectl-1.16.0
systemctl enable kubelet
6 Deploy the Kubernetes Master
6.1 Initialize
Run on the master. --apiserver-advertise-address must be the machine's internal address (relevant when there are multiple NICs). Because the default image registry k8s.gcr.io is unreachable from mainland China, the Alibaba Cloud mirror registry is specified instead.
kubeadm init \
--apiserver-advertise-address=192.168.190.130 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.15.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
If this fails with:
invalid or incomplete external CA: failure loading certificate for API server: failed to load certificate: couldn't load the certificate file /etc/kubernetes/pki/apiserver.crt: open /etc/kubernetes/pki/apiserver.crt: no such file or directory
the cause is leftover state from a previous deployment on this machine; clear it and re-run the init:
kubeadm reset
6.2 Create the kubeconfig directory and set ownership (master)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
6.3 Install a Pod network add-on (master)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Make sure quay.io (the registry hosting the flannel images) is reachable; if the image pull fails, the image in the manifest can be changed to lizhenliang/flannel:v0.11.0-amd64 and the manifest re-applied:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
6.4 Join the Kubernetes nodes (run on each node)
To add a node to the cluster, run the kubeadm join command that was printed in the kubeadm init output.
Afterwards, check the nodes from the master with kubectl get nodes.
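The join command printed by kubeadm init has the shape below; <token> and <hash> are placeholders here, and the real values come from your own init output (they can also be regenerated on the master with kubeadm token create --print-join-command):

```shell
# Run on each node; substitute the real token and CA cert hash.
kubeadm join 192.168.190.130:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Then, on the master, the new nodes should appear
# (they stay NotReady until the flannel pods are running):
kubectl get nodes
```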
7 Deploy the Dashboard
7.1 Install the dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
By default the Dashboard is only reachable from inside the cluster. Change the Service to type NodePort to expose it externally:
vim kubernetes-dashboard.yaml
# Change the image to a domestic mirror
image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1
# Modify the Service section as follows
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort      # modified
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001 # added
  selector:
    k8s-app: kubernetes-dashboard
# Create the dashboard
kubectl apply -f kubernetes-dashboard.yaml
7.2 Access
Open https://NodeIP:30001 in a browser. The dashboard serves HTTPS on a self-signed certificate, so the browser will show a certificate warning that must be accepted.
Create a service account and bind it to the built-in cluster-admin role (on the master):
[root@wyl01 ~]# kubectl create serviceaccount dashboard-admin -n kube-system
[root@wyl01 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@wyl01 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-bkf7p
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 756edadb-1305-4603-b0e2-4e9fc7dcb97f
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tYmtmN3AiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzU2ZWRhZGItMTMwNS00NjAzLWIwZTItNGU5ZmM3ZGNiOTdmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.VYp4STidEfZGvCP1-T7QjB2iiAXxgROLVj-CD2Hey8WYT5_S7u2hGp1Nu_zCj2tRImrgEPNnoV1j_9qHLPYc0txk2lyYp5zJ2xaKVavIVIxFU4IWsFbf7hiZORzvIAVBfDN43BeUgUNgjf4ObEtx2Z8eYnfBY5vrPtDEhsTVyoH-Qms8ES_SoeXJQ6tf9PcF6P-4OVfQ7-JB-1DQdgS5IPkl2qm_Q4ZwFvrBhB3XYBtB9A2DMCZHPrTLwBX0Tqg4UF2fQpH_bOT9og0811n7VbMcu-FOP4Ky8UcN9BmuxmgZLEEYmZTkC_-HaZNuY7f0WNOFXibEnkPpUOU2-2uzdg
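The `$(kubectl -n kube-system get secret | awk ...)` command substitution above relies on awk to pull the secret name out of the kubectl listing; a self-contained illustration on sample (illustrative) output:

```shell
# Sample `kubectl -n kube-system get secret` output (illustrative names).
sample='NAME                          TYPE                                  DATA   AGE
dashboard-admin-token-bkf7p   kubernetes.io/service-account-token   3      5m
default-token-x2x8r           kubernetes.io/service-account-token   3      1h'

# Keep only lines matching /dashboard-admin/ and print field 1, the secret name.
echo "$sample" | awk '/dashboard-admin/{print $1}'
# → dashboard-admin-token-bkf7p
```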
7.3 Log in
Copy the token value from the output above and paste it into the token field on the dashboard login page.
8 FAQ
8.1 Pod network unreachable after redeploying k8s on the same machines
Background: the machines had previously run a k8s cluster; after kubeadm reset and redeployment, traffic between the pod network and the host network did not work.
Fix: delete the network state left over from the previous deployment.
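A sketch of what "delete the leftover network" can look like. The interface names cni0 and flannel.1 are assumptions based on a default flannel deployment; check what actually exists with ip link show before deleting:

```shell
# Run as root on each affected node after kubeadm reset.
ip link set cni0 down && ip link delete cni0
ip link set flannel.1 down && ip link delete flannel.1
rm -rf /var/lib/cni/ /etc/cni/net.d/*
systemctl restart kubelet
```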