Deploying a k8s SSL Cluster in Practice, Part 8: Deploying a Highly Available kube-controller-manager Cluster

Reference:
https://github.com/opsnull/follow-me-install-kubernetes-cluster
Thanks to the author for the generous sharing.
The cluster environment has been built and is up and running.
This article records the detailed steps and the errors encountered during deployment. If you want to follow along for comparison, please read and test in order.

This document describes the steps for deploying a highly available kube-controller-manager cluster.
The cluster consists of 3 nodes. Once started, a leader is chosen through an election; the other nodes stay blocked. When the leader becomes unavailable, the remaining nodes hold a new election and produce a new leader, which keeps the service available.

The binaries were already downloaded and placed in their target location when the apiserver was deployed earlier.

8.1
Create the kube-controller-manager certificate and private key
Create the certificate signing request:

[root@k8s-master controller-manager]# cat kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.1.92",
    "192.168.1.93",
    "192.168.1.95"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "SZ",
      "L": "SZ",
      "O": "system:kube-controller-manager",
      "OU": "4Paradigm"
    }
  ]
}

The hosts list contains the IPs of all kube-controller-manager nodes;
CN is system:kube-controller-manager and O is system:kube-controller-manager;
the Kubernetes built-in ClusterRoleBinding system:kube-controller-manager grants
kube-controller-manager the permissions it needs to do its work.
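As a local sanity check (not part of the original guide), you can reproduce the same SAN list with a throwaway self-signed certificate and confirm all the node IPs land in it. The /tmp file names are illustrative, and the -addext option assumes OpenSSL 1.1.1 or newer:

```shell
# Illustrative only: generate a throwaway cert carrying the same SAN list as the
# CSR above, then confirm the controller-manager node IPs are embedded in it.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/kcm-san-test-key.pem -out /tmp/kcm-san-test.pem -days 1 \
  -subj "/CN=system:kube-controller-manager" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.1.92,IP:192.168.1.93,IP:192.168.1.95"
# Print the SAN extension; every IP from the hosts list should appear here.
openssl x509 -in /tmp/kcm-san-test.pem -noout -text | grep -A1 "Subject Alternative Name"
```

The same `openssl x509 -noout -text` inspection works on the real kube-controller-manager.pem produced by cfssl below.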

8.2
Generate the certificate and private key:

[root@k8s-master controller-manager]# cfssl gencert -ca=/etc/kubernetes/cert/ca.pem -ca-key=/etc/kubernetes/cert/ca-key.pem -config=/etc/kubernetes/cert/ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2018/08/23 14:46:31 [INFO] generate received request
2018/08/23 14:46:31 [INFO] received CSR
2018/08/23 14:46:31 [INFO] generating key: rsa-2048
2018/08/23 14:46:31 [INFO] encoded CSR
2018/08/23 14:46:31 [INFO] signed certificate with serial number 247663923040327053326061896927456848563103952529
2018/08/23 14:46:31 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master controller-manager]#
[root@k8s-master controller-manager]# ls
kube-controller-manager.csr  kube-controller-manager-csr.json  kube-controller-manager-key.pem  kube-controller-manager.pem
[root@k8s-master controller-manager]#

8.3
Distribute the generated certificate and private key to all master nodes

[root@k8s-master controller-manager]# cp kube* /etc/kubernetes/cert/
[root@k8s-master controller-manager]# scp kube* root@k8s-node1:/etc/kubernetes/cert/
kube-controller-manager.csr                                                                     100% 1127     1.1MB/s   00:00   
kube-controller-manager-csr.json                                                                100%  266   472.7KB/s   00:00   
kube-controller-manager-key.pem                                                                 100% 1679     2.7MB/s   00:00   
kube-controller-manager.pem                                                                     100% 1489     2.5MB/s   00:00   
[root@k8s-master controller-manager]# scp kube* root@k8s-node2:/etc/kubernetes/cert/
kube-controller-manager.csr                                                                     100% 1127     1.3MB/s   00:00   
kube-controller-manager-csr.json                                                                100%  266   392.4KB/s   00:00   
kube-controller-manager-key.pem                                                                 100% 1679     2.5MB/s   00:00   
kube-controller-manager.pem                                                                     100% 1489     2.5MB/s   00:00   
[root@k8s-master controller-manager]#

8.4
Change the file owner and add execute permission

[root@k8s-master controller-manager]# chown -R k8s /etc/kubernetes/cert/
[root@k8s-master controller-manager]# chmod -R +x /etc/kubernetes/cert/
[root@k8s-master controller-manager]# ssh root@k8s-node1 "chown -R k8s /etc/kubernetes/cert/ && chmod -R +x /etc/kubernetes/cert/"
[root@k8s-master controller-manager]# ssh root@k8s-node2 "chown -R k8s /etc/kubernetes/cert/ && chmod -R +x /etc/kubernetes/cert/"

8.5
Create and distribute the kubeconfig file
The kubeconfig file contains all the information needed to access the apiserver, such as the apiserver address, the CA certificate, and the client certificate kube-controller-manager itself uses.

[root@k8s-master controller-manager]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/cert/ca.pem --embed-certs=true --server=https://192.168.1.94:8443 --kubeconfig=kube-controller-manager.kubeconfig

[root@k8s-master controller-manager]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig

[root@k8s-master controller-manager]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

[root@k8s-master controller-manager]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Distribute the kubeconfig to all master nodes:

[root@k8s-master controller-manager]# cp kube-controller-manager.kubeconfig /etc/kubernetes/
[root@k8s-master controller-manager]# scp kube-controller-manager.kubeconfig root@k8s-node1:/etc/kubernetes/
kube-controller-manager.kubeconfig                                                              100% 6433     5.1MB/s   00:00   
[root@k8s-master controller-manager]# scp kube-controller-manager.kubeconfig root@k8s-node2:/etc/kubernetes/
kube-controller-manager.kubeconfig                                                              100% 6433     5.8MB/s   00:00   
[root@k8s-master controller-manager]#

Set the permissions:

[root@k8s-master controller-manager]#  chown -R k8s /etc/kubernetes
[root@k8s-master controller-manager]# chmod -R +x /etc/kubernetes
[root@k8s-master controller-manager]#  ssh root@k8s-node1 "chown -R k8s /etc/kubernetes/ && chmod -R +x /etc/kubernetes/"
[root@k8s-master controller-manager]#  ssh root@k8s-node2 "chown -R k8s /etc/kubernetes/ && chmod -R +x /etc/kubernetes/"
[root@k8s-master controller-manager]#

8.6
Create and distribute the kube-controller-manager systemd unit file

[root@k8s-master controller-manager]# source /opt/k8s/bin/environment.sh
[root@k8s-master controller-manager]# cat kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/opt/k8s/bin/kube-controller-manager \
--port=0 \
--secure-port=10252 \
--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
--experimental-cluster-signing-duration=8760h \
--root-ca-file=/etc/kubernetes/cert/ca.pem \
--service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-use-rest-clients=true \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
User=k8s
[Install]
WantedBy=multi-user.target
[root@k8s-master controller-manager]#

--port=0: disables the insecure http listener for /metrics; with it set, the --address parameter has no effect and --bind-address takes effect;
--secure-port=10252, --bind-address: serve https /metrics requests on port 10252 on the configured address (the unit above binds 127.0.0.1; 0.0.0.0 would listen on all interfaces);
--kubeconfig: path to the kubeconfig file that kube-controller-manager uses to connect to and authenticate with kube-apiserver;
--cluster-signing-cert-file, --cluster-signing-key-file: sign the certificates created by TLS Bootstrap;
--experimental-cluster-signing-duration: validity period of the TLS Bootstrap certificates;
--root-ca-file: the CA certificate placed into containers' ServiceAccounts, used to verify the kube-apiserver's certificate;
--service-account-private-key-file: private key used to sign ServiceAccount tokens; it must pair with the public key file given by kube-apiserver's --service-account-key-file;
--service-cluster-ip-range: the Service cluster IP range; must match the parameter of the same name on kube-apiserver;
--leader-elect=true: cluster run mode with leader election enabled; the node elected leader does the work while the other nodes stay blocked;
--feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
--controllers=*,bootstrapsigner,tokencleaner: list of controllers to enable; tokencleaner automatically cleans up expired Bootstrap tokens;
--horizontal-pod-autoscaler-*: custom-metrics-related parameters; supports autoscaling/v2alpha1;
--tls-cert-file, --tls-private-key-file: server certificate and key used when serving metrics over https;
--use-service-account-credentials=true: each controller uses a separate ServiceAccount credential (see the permissions section below);
User=k8s: run as the k8s user;
kube-controller-manager does not verify client certificates on https metrics requests, so --tls-ca-file does not need to be specified; that parameter has been deprecated anyway.

Distribute the systemd unit file to all master nodes:

[root@k8s-master controller-manager]# cp kube-controller-manager.service /etc/systemd/system
[root@k8s-master controller-manager]# scp kube-controller-manager.service root@k8s-node1:/etc/systemd/system
kube-controller-manager.service                                                                 100% 1231     1.6MB/s   00:00   
[root@k8s-master controller-manager]# scp kube-controller-manager.service root@k8s-node2:/etc/systemd/system
kube-controller-manager.service                                                                 100% 1231     1.8MB/s   00:00   
[root@k8s-master controller-manager]#

Add execute permission:

[root@k8s-master controller-manager]# chmod +x /etc/systemd/system/kube-controller-manager.service
[root@k8s-master controller-manager]# ssh root@k8s-node1 "chmod +x /etc/systemd/system/kube-controller-manager.service"
[root@k8s-master controller-manager]# ssh root@k8s-node2 "chmod +x /etc/systemd/system/kube-controller-manager.service"
[root@k8s-master controller-manager]#

kube-controller-manager permissions
The ClusterRole system:kube-controller-manager carries very limited permissions: it can only create secret, serviceaccount, and a few other resource objects. Each controller's permissions are split out into ClusterRoles named system:controller:XXX.
The --use-service-account-credentials=true startup parameter is needed so that the main controller creates a ServiceAccount named XXX-controller for each controller.
The built-in ClusterRoleBinding system:controller:XXX then grants each XXX-controller ServiceAccount the permissions of the corresponding ClusterRole system:controller:XXX.
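The naming convention above can be made concrete with a small sketch; the controller names below are just examples of the pattern, not an exhaustive list:

```shell
# Sketch of the naming pattern: each controller "foo" runs as ServiceAccount
# foo-controller, which ClusterRoleBinding system:controller:foo binds to
# ClusterRole system:controller:foo.
for c in deployment replicaset namespace; do
  echo "ServiceAccount ${c}-controller -> ClusterRole system:controller:${c}"
done
```

On a live cluster, `kubectl get clusterroles | grep system:controller:` lists the real set.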

Start the service

[root@k8s-master controller-manager]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /etc/systemd/system/kube-controller-manager.service.
[root@k8s-master controller-manager]#

If it fails to start, check whether the files are owned by k8s and whether they have execute permission:

[root@k8s-master ~]# ll /etc/kubernetes/
總用量 16
drwxr-xr-x 2 k8s root 4096 8月  23 14:48 cert
-rwxr-xr-x 1 k8s root  240 8月  23 11:59 encryption-config.yaml
-rwx--x--x 1 k8s root 6433 8月  23 15:00 kube-controller-manager.kubeconfig
[root@k8s-master ~]#

kube-controller-manager listens on port 10252 and accepts https requests:

[root@k8s-master ~]# netstat -lnpt|grep kube-controll
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      11322/kube-controll
[root@k8s-master ~]#

curl --cacert: the CA certificate is used to verify kube-controller-manager's https server certificate. (Note: only the leader host shows the output below; non-leader hosts display something different.)

[root@k8s-master controller-manager]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://127.0.0.1:10252/metrics |head
# HELP ClusterRoleAggregator_adds Total number of adds handled by workqueue: ClusterRoleAggregator
# TYPE ClusterRoleAggregator_adds counter
ClusterRoleAggregator_adds 3
# HELP ClusterRoleAggregator_depth Current depth of workqueue: ClusterRoleAggregator
# TYPE ClusterRoleAggregator_depth gauge
ClusterRoleAggregator_depth 0
# HELP ClusterRoleAggregator_queue_latency How long an item stays in workqueueClusterRoleAggregator before being requested.
# TYPE ClusterRoleAggregator_queue_latency summary
ClusterRoleAggregator_queue_latency{quantile="0.5"} 48393
ClusterRoleAggregator_queue_latency{quantile="0.9"} 48781
[root@k8s-master controller-manager]#
[root@k8s-master ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master_e2e3d783-a6a9-11e8-a796-000c29b6aeef","leaseDurationSeconds":15,"acquireTime":"2018-08-23T07:55:39Z","renewTime":"2018-08-23T08:12:31Z","leaderTransitions":1}'
  creationTimestamp: 2018-08-23T07:52:44Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "5224"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 84f4d257-a6a9-11e8-9cf4-000c29b6aeef
[root@k8s-master ~]#
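To see at a glance which node holds the lock, the holderIdentity field can be pulled out of that annotation. A minimal sketch using sed on a copy of the annotation shown above (in practice you would pipe the kubectl output through the same sed expression):

```shell
# Sample leader-election annotation (copied from the endpoints output above);
# extract holderIdentity to identify the current leader node.
ANNOTATION='{"holderIdentity":"k8s-master_e2e3d783-a6a9-11e8-a796-000c29b6aeef","leaseDurationSeconds":15,"leaderTransitions":1}'
echo "$ANNOTATION" | sed -n 's/.*"holderIdentity":"\([^"]*\)".*/\1/p'
```

The part before the underscore is the node name, so here the leader is k8s-master.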

Test the leader election:
Stop the kube-controller-manager service on k8s-master

[root@k8s-master ~]# systemctl stop kube-controller-manager.service
[root@k8s-master ~]#

Very soon the leader has switched over to k8s-node2:

[root@k8s-node1 .kube]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-node2_a0927795-a6ab-11e8-958a-000c29b522c1","leaseDurationSeconds":15,"acquireTime":"2018-08-23T08:13:33Z","renewTime":"2018-08-23T08:15:22Z","leaderTransitions":2}'
  creationTimestamp: 2018-08-23T07:52:44Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "5354"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 84f4d257-a6a9-11e8-9cf4-000c29b6aeef
[root@k8s-node1 .kube]#

Possible error 1:

[root@k8s-master ~]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                                                                  ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused                                               
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x15\x03\x01\x00\x02\x02"  
etcd-0               Healthy     {"health":"true"}                                                                                                                       
etcd-1               Healthy     {"health":"true"}                                                                                                                       
etcd-2               Healthy     {"health":"true"}                                                                                                                       
[root@k8s-master ~]#
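The bytes "\x15\x03\x01\x00\x02\x02" in the controller-manager message are not random garbage: the health check speaks plain HTTP to port 10252, but with the flags above that port serves TLS, so the probe reads back a raw TLS alert record. A small sketch to dump the bytes:

```shell
# The "malformed HTTP response" payload is a TLS alert record:
#   15    -> content type: alert
#   03 01 -> record-layer version (TLS 1.0 framing)
#   00 02 -> payload length: 2
#   02    -> first alert byte (fatal); the second byte was cut off in the log
printf '\x15\x03\x01\x00\x02\x02' | od -An -tx1
```

In other words, the component is healthy; the checker and the listener simply disagree on http vs https.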

[root@k8s-master cert]# systemctl status kube-controller-manager.service  -l
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 四 2018-08-23 16:21:18 CST; 27min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 736 (kube-controller)
    Tasks: 6
   Memory: 60.6M
   CGroup: /system.slice/kube-controller-manager.service
           └─736 /opt/k8s/bin/kube-controller-manager --port=0 --secure-port=10252 --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem --experimental-cluster-signing-duration=8760h --root-ca-file=/etc/kubernetes/cert/ca.pem --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --controllers=*,bootstrapsigner,tokencleaner --horizontal-pod-autoscaler-use-rest-clients=true --horizontal-pod-autoscaler-sync-period=10s --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem --use-service-account-credentials=true --alsologtostderr=true --logtostderr=false --log-dir=/var/log/kubernetes --v=2

8月 23 16:21:22 k8s-master kube-controller-manager[736]: W0823 16:21:22.749164     736 authentication.go:55] Authentication is disabled
8月 23 16:21:22 k8s-master kube-controller-manager[736]: I0823 16:21:22.749470     736 serve.go:96] Serving securely on 127.0.0.1:10252
8月 23 16:21:22 k8s-master kube-controller-manager[736]: I0823 16:21:22.749611     736 leaderelection.go:185] attempting to acquire leader lease  kube-system/kube-controller-manager...
8月 23 16:21:26 k8s-master kube-controller-manager[736]: E0823 16:21:26.239889     736 leaderelection.go:234] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.1.94:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: EOF
8月 23 16:21:39 k8s-master kube-controller-manager[736]: E0823 16:21:39.777294     736 leaderelection.go:234] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.1.94:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: net/http: TLS handshake timeout
8月 23 16:21:46 k8s-master kube-controller-manager[736]: E0823 16:21:46.457565     736 leaderelection.go:234] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
8月 23 16:22:02 k8s-master kube-controller-manager[736]: I0823 16:22:02.728696     736 logs.go:49] http: TLS handshake error from 127.0.0.1:34834: tls: first record does not look like a TLS handshake
8月 23 16:27:31 k8s-master kube-controller-manager[736]: I0823 16:27:31.355078     736 logs.go:49] http: TLS handshake error from 127.0.0.1:35848: tls: first record does not look like a TLS handshake
8月 23 16:31:59 k8s-master kube-controller-manager[736]: I0823 16:31:59.739095     736 logs.go:49] http: TLS handshake error from 127.0.0.1:36666: tls: first record does not look like a TLS handshake
8月 23 16:33:39 k8s-master kube-controller-manager[736]: I0823 16:33:39.337035     736 logs.go:49] http: TLS handshake error from 127.0.0.1:36984: tls: first record does not look like a TLS handshake
[root@k8s-master cert]#

After commenting out these three options in /etc/systemd/system/kube-controller-manager.service:
#--port=0 \
#--secure-port=10252 \
#--bind-address=127.0.0.1 \
the check comes back healthy.
Explanation:
--bind-address defaults to 0.0.0.0;
--port set to 0 disables the insecure http listener, and --secure-port set to 0 disables the https listener.
With the three options commented out, the default --port=10252 applies, so port 10252 serves plain http again and the health check's http probe succeeds.
 [root@k8s-master ~]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused  
controller-manager   Healthy     ok                                                                                         
etcd-0               Healthy     {"health":"true"}                                                                          
etcd-1               Healthy     {"health":"true"}                                                                          
etcd-2               Healthy     {"health":"true"}                                                                          
[root@k8s-master ~]#

Possible error 2:

Aug 23 15:36:30 k8s-master kube-controller-manager: E0823 15:36:30.754900   11322 leaderelection.go:234] error retrieving resource lock kube-system/kube-controller-manager: Get http://192.168.1.94/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 192.168.1.94:80: connect: connection refused

Check the kubeconfig that kube-controller-manager reads;
the apiserver address it is using looks wrong.
Troubleshooting:

[root@k8s-master ~]# cat /etc/kubernetes/kube-controller-manager.kubeconfig  |grep server
    server: 192.168.1.94
[root@k8s-master ~]#

[root@k8s-master ~]# echo ${KUBE_APISERVER}
https://192.168.1.94:8443
[root@k8s-master ~]#
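This kind of mismatch is easy to script a check for. A minimal sketch (illustrative, not from the guide) that flags a server value missing the https scheme or the port:

```shell
# Sketch: flag a kubeconfig server value that lacks the https scheme or port.
check_server() {
  case "$1" in
    https://*:*) echo "ok: $1" ;;
    *)           echo "bad: $1 (expected https://HOST:PORT)" ;;
  esac
}
check_server "192.168.1.94"               # the broken value found above
check_server "https://192.168.1.94:8443"  # the value it should be
```

The first call prints "bad", the second "ok", matching the diagnosis above.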

Fix: change the server field in
/etc/kubernetes/kube-controller-manager.kubeconfig
to
https://192.168.1.94:8443

Simply regenerate the kube-controller-manager.kubeconfig file:

[root@k8s-master controller-manager]# ls
kube-controller-manager.csr       kube-controller-manager-key.pem     kube-controller-manager.pem
kube-controller-manager-csr.json  kube-controller-manager.kubeconfig  kube-controller-manager.service
[root@k8s-master controller-manager]# rm -rf kube-controller-manager.kubeconfig
[root@k8s-master controller-manager]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/cert/ca.pem --embed-certs=true --server=https://192.168.1.94:8443 --kubeconfig=kube-controller-manager.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master controller-manager]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
User "system:kube-controller-manager" set.
[root@k8s-master controller-manager]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Context "system:kube-controller-manager" created.
[root@k8s-master controller-manager]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Switched to context "system:kube-controller-manager".
[root@k8s-master controller-manager]# cp kube-controller-manager.kubeconfig /etc/kubernetes/
cp:是否覆蓋"/etc/kubernetes/kube-controller-manager.kubeconfig"? y
[root@k8s-master controller-manager]# chown -R k8s /etc/kubernetes
[root@k8s-master controller-manager]# chmod -R +x /etc/kubernetes
[root@k8s-master controller-manager]# scp kube-controller-manager.kubeconfig root@k8s-node1:/etc/kubernetes/
kube-controller-manager.kubeconfig                                                              100% 6446     6.8MB/s   00:00   
[root@k8s-master controller-manager]# scp kube-controller-manager.kubeconfig root@k8s-node2:/etc/kubernetes/
kube-controller-manager.kubeconfig                                                              100% 6446     6.1MB/s   00:00   
[root@k8s-master controller-manager]#  ssh root@k8s-node1 "chown -R k8s /etc/kubernetes/ && chmod -R +x /etc/kubernetes/"
[root@k8s-master controller-manager]# ssh root@k8s-node2 "chown -R k8s /etc/kubernetes/ && chmod -R +x /etc/kubernetes/"
[root@k8s-master controller-manager]#