Part 3: Deploying the k8s components on the nodes

Continuing from Part 2: with the three roles deployed on the master, we now deploy the node components, mainly kubelet and kube-proxy.

I Environment preparation (everything in this section is done on the master)

1 Create the directories and copy the two binaries

mkdir /home/yx/kubernetes/{bin,cfg,ssl} -p
# copy to both nodes
scp -r /home/yx/src/kubernetes/server/bin/kubelet [email protected]:/home/yx/kubernetes/bin
scp -r /home/yx/src/kubernetes/server/bin/kube-proxy [email protected]:/home/yx/kubernetes/bin
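
The same two binaries also need to reach the second node; assuming its IP is 192.168.18.104 (the address used for it later in this article), the corresponding copies would be:

scp -r /home/yx/src/kubernetes/server/bin/kubelet [email protected]:/home/yx/kubernetes/bin
scp -r /home/yx/src/kubernetes/server/bin/kube-proxy [email protected]:/home/yx/kubernetes/bin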

2 Bind the kubelet-bootstrap user to the system cluster role

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
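
Optionally, confirm that the binding was created (plain kubectl, nothing specific to this setup):

kubectl get clusterrolebinding kubelet-bootstrap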

3 Generate the bootstrap.kubeconfig and kube-proxy.kubeconfig files with the kubeconfig.sh script, whose contents are shown below.

Run bash kubeconfig.sh 192.168.18.104 <ssl-cert-dir>. The first argument is the master node IP and the second is the path to the SSL certificate directory. The script produces the two files named above, which are then copied to both nodes.

# Create the TLS bootstrapping token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=71b6d986c47254bb0e63b2a20cfaf560

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
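
# Note: this token should be identical to the one in the token.csv referenced by
# kube-apiserver's --token-auth-file on the master (set up in Part 2); otherwise
# the kubelet's bootstrap request will be rejected.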

#----------------------

APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
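
A possible invocation, assuming the master keeps ca.pem, kube-proxy.pem and kube-proxy-key.pem under /home/yx/kubernetes/ssl (adjust the path to wherever they were generated in Part 2):

bash kubeconfig.sh 192.168.18.104 /home/yx/kubernetes/ssl
ls bootstrap.kubeconfig kube-proxy.kubeconfig  # both files should now exist in the current directory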

4 Copy the generated bootstrap.kubeconfig and kube-proxy.kubeconfig to the nodes

scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/home/yx/kubernetes/cfg
scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/home/yx/kubernetes/cfg

II Installing the node components

1 Deploy the kubelet component

Create the kubelet configuration file:

 cat /home/yx/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.18.105 \
--kubeconfig=/home/yx/kubernetes/cfg/kubelet.kubeconfig \
--experimental-bootstrap-kubeconfig=/home/yx/kubernetes/cfg/bootstrap.kubeconfig \
--config=/home/yx/kubernetes/cfg/kubelet.config \
--cert-dir=/home/yx/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
Parameter notes:
--hostname-override: the name this host is registered under in the cluster
--kubeconfig: path of the kubeconfig file; it is generated automatically after bootstrap
--experimental-bootstrap-kubeconfig: the bootstrap.kubeconfig file generated earlier
--cert-dir: directory where the issued certificates are stored
--pod-infra-container-image: the pause image that manages the Pod network namespace

Create kubelet.config:

 cat /home/yx/kubernetes/cfg/kubelet.config 

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.18.105
port: 10250
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2 
clusterDomain: cluster.local.
failSwapOn: false
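
A common pitfall: cgroupDriver must match the cgroup driver Docker uses on the node, otherwise the kubelet will fail to start. A quick way to check with the standard docker CLI:

docker info | grep -i cgroup  # should report "Cgroup Driver: cgroupfs" for the config above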

Startup script (systemd unit):

 cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/home/yx/kubernetes/cfg/kubelet
ExecStart=/home/yx/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Start it:

 systemctl daemon-reload
 systemctl enable kubelet
 systemctl restart kubelet

Check whether it started:
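
A quick check (standard systemctl / ps, nothing specific to this setup):

 systemctl status kubelet
 ps -ef | grep kubelet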

2 Deploy the kube-proxy component

Create the kube-proxy configuration file:

 cat /home/yx/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.18.105 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/home/yx/kubernetes/cfg/kube-proxy.kubeconfig"
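
--proxy-mode=ipvs only takes effect if the IPVS kernel modules are loaded; otherwise kube-proxy falls back to iptables mode. A hedged sketch for loading them (module names are the usual ones for a CentOS 7 era kernel; newer kernels use nf_conntrack instead of nf_conntrack_ipv4):

for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
lsmod | grep ip_vs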

Startup script (systemd unit):

[yx@tidb-tikv-02 cfg]$ cat /usr/lib/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/home/yx/kubernetes/cfg/kube-proxy
ExecStart=/home/yx/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start it:

 systemctl daemon-reload
 systemctl enable kube-proxy
 systemctl restart kube-proxy

Verify:
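
For example (ipvsadm is only present if the ipvsadm package is installed):

 systemctl status kube-proxy
 ipvsadm -Ln   # lists the IPVS rules kube-proxy has programmed, if running in ipvs mode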

Do the same on the other node; just remember to change the IP addresses accordingly.

III Approve the nodes joining the cluster (on the master):

List the certificate signing requests:

[yx@tidb-tidb-03 cfg]$ kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-jn-F4xSn1LAwJhom9l7hlW0XuhDQzo-RQrnkz1j4q6Y   16m     kubelet-bootstrap   Pending
node-csr-kB2CFmTqkCA2Ix5qYGSXoAP3-ctes-cHcjs7D84Wb38   5h55m   kubelet-bootstrap   Approved,Issued
node-csr-wWa0cKQ6Ap9Bcqap3m9d9ZBqBclwkLB84W8bpB3g_m0   22s     kubelet-bootstrap   Pending

Approve the join request:

kubectl certificate approve node-csr-wWa0cKQ6Ap9Bcqap3m9d9ZBqBclwkLB84W8bpB3g_m0
certificatesigningrequest.certificates.k8s.io/node-csr-wWa0cKQ6Ap9Bcqap3m9d9ZBqBclwkLB84W8bpB3g_m0 approved
# After approval the status changes from Pending to Approved,Issued
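
To confirm, list the CSRs again:

kubectl get csr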

IV Check the cluster status (on the master)

[yx@tidb-tidb-03 cfg]$ kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.18.104   Ready    <none>   41s   v1.12.1
192.168.18.105   Ready    <none>   52s   v1.12.1

[yx@tidb-tidb-03 cfg]$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

At this point the entire binary installation of k8s is complete; next we can actually put it to use.

V Create a test instance

Create an Nginx web deployment to test whether the cluster is working properly:

 kubectl run nginx --image=nginx --replicas=3  # create three replicas
 kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort # map service port 88 to container port 80

Check the pods and services:

[yx@tidb-tidb-03 cfg]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        3d19h
nginx        NodePort    10.0.0.154   <none>        88:40997/TCP   19s

[yx@tidb-tidb-03 cfg]$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-887vr   1/1     Running   0          48s
nginx-dbddb74b8-h7hrp   1/1     Running   0          48s
nginx-dbddb74b8-wnf2m   1/1     Running   0          48s

Finally, open http://<node-ip>:40997 for each of the two nodes in a browser and check that the default Nginx welcome page appears.
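
The same check from the command line, using the node IPs and the NodePort shown above (the port will differ on your own cluster):

curl -I http://192.168.18.104:40997
curl -I http://192.168.18.105:40997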

VI Viewing a pod's access logs

kubectl logs <pod-name>

[root@tikv-1 shell]# kubectl logs nginx-dbddb74b8-ft88w 
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log nginx-dbddb74b8-ft88w))
# If viewing the logs returns this error

Workaround:

On every node, edit kubelet.config under /home/yx/kubernetes/cfg and append the following at the end:

authentication:
  anonymous:
    enabled: true

# the full file then looks like this:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.18.104
port: 10250
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2 
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Then restart the kubelet with systemctl restart kubelet and view the logs again; it still fails:

[root@tikv-1 shell]# kubectl logs nginx-dbddb74b8-ft88w 
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-ft88w)

Workaround:

Bind a role on the master:

 kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
Output:
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
This is equivalent to binding the anonymous user to the administrator role.

View the logs once more; this time it works.
