Kubernetes Field Notes (4): Setting up a 3-node Kubernetes 1.12.0 cluster with kubeadm

For the complete table of contents of the Kubernetes Field Notes series, see: Kubernetes Field Notes - Index

Related links:

This post records how, with almost no prior exposure to Kubernetes (I had used Docker before), I deployed a 3-node Kubernetes cluster with kubeadm by working from the official documentation and various blog posts. In this cluster:

  1. there is 1 master node and 2 worker nodes;
  2. Kubernetes is version 1.12.0 (the latest at the time) and Docker is docker-ce 18.06.1.
    Docker 18.06 is used rather than a newer release because the Kubernetes documentation states this version was tested against Kubernetes 1.12.0, while newer releases were not and might have compatibility issues I would have to sort out myself; for a first deployment I did not want the extra trouble.

These notes follow the actual order of operations: prepare the operating system --> install the basic Kubernetes components --> initialize the cluster --> install add-ons --> verify cluster access

Hostname   IP address     OS          Role     Software versions           Notes
master 10.120.67.25 CentOS 7.5 master k8s 1.12.0 docker-ce 18.06
node1 10.120.67.26 CentOS 7.5 node k8s 1.12.0 docker-ce 18.06
node2 10.120.67.27 CentOS 7.5 node k8s 1.12.0 docker-ce 18.06

Calico is used as the network plugin and is deployed from the master node.

I. Update and initialize the operating system

1. Disable the firewall [run on all nodes]

systemctl stop firewalld.service
systemctl disable firewalld.service

2. Disable SELinux [run on all nodes]

sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0

3. Disable swap [run on all nodes]

If swap is left enabled, kubelet may fail to start; alternatively kubelet can be configured to tolerate swap (see the sketch after the commands below).

# My KVM virtual machines have no swap configured to begin with; the commands below are recorded for reference [taken from an online article]
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
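
If you prefer to keep swap on, a minimal sketch of the alternative mentioned above (not verified in this setup) is to tell kubelet to tolerate swap via /etc/sysconfig/kubelet:

# alternative to disabling swap (assumption: untested here): let kubelet tolerate swap
cat <<EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--fail-swap-on=false
EOF
# kubeadm init / kubeadm join would then also need --ignore-preflight-errors=Swap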

4. Configure kernel parameters (networking) [run on all nodes]

# My VM template already sets many kernel parameters; only the relevant ones are shown here.
yum install bridge-utils -y
modprobe br_netfilter

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# apply the configuration
sysctl --system
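
Note that modprobe only loads br_netfilter until the next reboot, and the system is rebooted in step 7. A small sketch (the file name k8s.conf is my own choice) to persist the module and verify the keys:

# persist br_netfilter across reboots so the bridge sysctl keys keep resolving
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# verify the settings took effect
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward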

# Set ulimit values as needed; the ones below are sufficient for this test environment.
cat <<EOF > /etc/security/limits.d/90-nproc.conf
*          soft    nproc     20480
*          hard    nproc     20480
*          soft    nofile    102400
*          hard    nofile    102400
root       soft    nproc     unlimited
EOF

5. Set the hostnames and configure the hosts file [run on all nodes]

# run the matching command on each node
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2

# On all nodes, add the name-resolution records (cat shows the resulting file)
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.120.67.25  master
10.120.67.26  node1
10.120.67.27  node2

6. Configure the time service [run on all nodes]

The master syncs with the default public NTP sources; all nodes use the master as their time source.

# master configuration
	vi /etc/chrony.conf 
	...
	allow 10.120.67.0/24
	
	# enable and start chronyd
	systemctl enable chronyd.service
	systemctl start chronyd.service

# configuration on each node
	vi /etc/chrony.conf
	...
	server master iburst  # replace the default server entries with the master [this comment is not part of the config file]
	
	# enable and start the service
	systemctl enable chronyd.service
	systemctl start chronyd.service
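
To confirm the nodes are actually syncing from the master, a quick check on each node:

# the line marked with ^* is the currently selected time source (it should be master)
chronyc sources -v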

7. Update the system and reboot [run on all nodes]

yum -y install epel-release
yum -y update
# reboot once the update finishes
reboot

II. Install docker-ce

1. Install dependencies [run on all nodes]

# My VM template already ships many system tools; the packages below are the ones the documentation calls for.
yum -y install yum-utils device-mapper-persistent-data lvm2 conntrack-tools bridge-utils ipvsadm

2. Add the Docker repository (domestic mirror) [run on all nodes]

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# list the available docker-ce versions
yum list docker-ce --showduplicates|sort -r
	docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable     # the version used here
	docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
	docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable         
	docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable

3. Install the specified docker-ce version [18.06.1] [run on all nodes]

yum -y install docker-ce-18.06.1.ce

4. Install the yum plugin used to pin the Docker version [run on all nodes]

yum -y install yum-plugin-versionlock
yum versionlock docker-ce
yum versionlock list
    0:docker-ce-18.06.1.ce-3.el7.*
    versionlock list done

5. Enable Docker at boot [run on all nodes]

systemctl enable docker.service
systemctl start docker.service
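
A quick sanity check that the expected version is running and enabled at boot:

docker version --format '{{.Server.Version}}'    # should print 18.06.1-ce
systemctl is-enabled docker.service              # should print enabled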

III. Install and configure the Kubernetes components

1. Configure the Kubernetes repository [run on all nodes]

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum clean all && yum makecache

2. Install kubeadm/kubelet/kubectl [run on all nodes]

yum -y install kubelet kubeadm kubectl
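
The plain yum install above pulls the newest release, which happened to be 1.12.0 at the time. If you want to be explicit, a sketch that pins and locks the version, mirroring what was done for docker-ce:

# install a specific version and lock it so a later yum update cannot move it
yum -y install kubelet-1.12.0 kubeadm-1.12.0 kubectl-1.12.0
yum versionlock kubelet kubeadm kubectl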

3. Start kubelet [run on all nodes]

systemctl enable kubelet.service
systemctl start kubelet.service
# Note: kubelet is not healthy at this point (it keeps restarting); it recovers once kubeadm writes its configuration later (see below)
systemctl status kubelet.service
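
Until kubeadm init (or kubeadm join) writes the kubelet configuration, kubelet keeps restarting; you can watch it with:

# follow the kubelet logs to see the restart loop; it settles down once kubeadm has run
journalctl -u kubelet.service -f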

4. Handle pulling the Kubernetes images [all nodes]

The Kubernetes images live on a Google-hosted registry (k8s.gcr.io). If you are outside mainland China, or can otherwise reach Google reliably (even with a proxy configured I still hit problems), this step is unnecessary and you can go straight to the next one. Otherwise kubeadm gets stuck and fails with errors like these:

[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver:v1.12.0]: exit status 1
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-controller-manager:v1.12.0]: exit status 1
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-scheduler:v1.12.0]: exit status 1
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-proxy:v1.12.0]: exit status 1
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/pause:3.1]: exit status 1
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/etcd:3.2.24]: exit status 1
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/coredns:1.2.2]: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

This step pulls the Kubernetes component images from mirror registries and retags them; you could also run your own private registry instead (see the referenced article).
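
To see exactly which images (and tags) kubeadm expects for this release before pulling anything, kubeadm itself can list them:

# print the image list for the target version (and, with working access, pre-pull them)
kubeadm config images list --kubernetes-version v1.12.0
# kubeadm config images pull --kubernetes-version v1.12.0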

# Step 1: pull the images
docker pull mirrorgooglecontainers/kube-apiserver:v1.12.0
docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.0
docker pull mirrorgooglecontainers/kube-scheduler:v1.12.0
docker pull mirrorgooglecontainers/kube-proxy:v1.12.0
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.2

# Step 2: retag them; adjust the version numbers to match your setup
docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.12.0 k8s.gcr.io/kube-proxy:v1.12.0
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.12.0 k8s.gcr.io/kube-scheduler:v1.12.0
docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.12.0 k8s.gcr.io/kube-apiserver:v1.12.0
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.12.0 k8s.gcr.io/kube-controller-manager:v1.12.0
docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24  k8s.gcr.io/etcd:3.2.24
docker tag docker.io/mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.2.2  k8s.gcr.io/coredns:1.2.2

# Step 3: remove the now-unneeded mirror tags with docker rmi
docker rmi docker.io/mirrorgooglecontainers/kube-proxy:v1.12.0
docker rmi docker.io/mirrorgooglecontainers/kube-scheduler:v1.12.0
docker rmi docker.io/mirrorgooglecontainers/kube-apiserver:v1.12.0
docker rmi docker.io/mirrorgooglecontainers/kube-controller-manager:v1.12.0
docker rmi docker.io/mirrorgooglecontainers/etcd:3.2.24
docker rmi docker.io/mirrorgooglecontainers/pause:3.1
docker rmi docker.io/coredns/coredns:1.2.2
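
The same pull/tag/rmi sequence can be scripted; a sketch (adjust the version variables if your release differs):

#!/bin/bash
K8S_VERSION=v1.12.0
for img in kube-apiserver:${K8S_VERSION} kube-controller-manager:${K8S_VERSION} \
           kube-scheduler:${K8S_VERSION} kube-proxy:${K8S_VERSION} pause:3.1 etcd:3.2.24; do
    docker pull mirrorgooglecontainers/${img}                      # pull from the mirror
    docker tag  mirrorgooglecontainers/${img} k8s.gcr.io/${img}    # retag as k8s.gcr.io
    docker rmi  mirrorgooglecontainers/${img}                      # drop the mirror tag
done
docker pull coredns/coredns:1.2.2
docker tag  coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker rmi  coredns/coredns:1.2.2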

5. Initialize the master with kubeadm [run on the master node]

The key line in the output is: Your Kubernetes master has initialized successfully!

# specify the Kubernetes version and set --pod-network-cidr [this value is used by the network plugin below]:
kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=192.168.0.0/16

[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.120.67.25 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.120.67.25]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 20.502782 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: ainpz2.3ctx0qrsr1gj78tw
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.120.67.25:6443 --token ainpz2.3ctx0qrsr1gj78tw --discovery-token-ca-cert-hash sha256:e2566c3261a35872ceb1a2e4866d90c9c988cbf8f842406e043a7325b2a19657

6. Set up the kubeconfig [run on the master node]

I did this as root; I also have an admin account (a regular account used for application deployment) and set up the config under both accounts (in practice all testing was done as root).

# switch to the relevant account, then run
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
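
For the root account only, exporting KUBECONFIG directly also works (add it to ~/.bash_profile to make it persistent):

export KUBECONFIG=/etc/kubernetes/admin.conf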

7. Install the Calico network plugin [run on the master node]

# cluster state before the network plugin is installed
kubectl get pod -n kube-system
	NAME                             READY     STATUS    RESTARTS   AGE
	coredns-78fcdf6894-dxnj2         0/1       Pending   0          14m
	coredns-78fcdf6894-tbzx2         0/1       Pending   0          14m
	etcd-master                      1/1       Running   1          4m
	kube-apiserver-master            1/1       Running   1          4m
	kube-controller-manager-master   1/1       Running   1          4m
	kube-proxy-jhrw4                 1/1       Running   1          14m
	kube-scheduler-master            1/1       Running   1          4m

# If pods stay in Pending or NodeLost, what is the cause and how do you troubleshoot it? See below.
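
In short: coredns staying in Pending here is expected, because no pod network exists yet and the scheduler cannot place the pods. For other stuck pods, the usual way to find the cause (the pod name below is a placeholder):

# the Events section at the end of describe usually names the reason a pod is Pending
kubectl describe pod <pod-name> -n kube-system
# cluster-wide events in chronological order
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp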

# Install Calico; the version in the URLs can be changed as needed (you can also download the manifests and apply them locally)
# optionally pre-pull the images
docker pull quay.io/calico/typha:v0.7.4
docker pull quay.io/calico/node:v3.1.3
docker pull quay.io/calico/cni:v3.1.3

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

kubectl get pods --all-namespaces
	NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
	kube-system   calico-node-txj2z                1/2     Running   0          4s
	kube-system   coredns-576cbf47c7-m97nz         0/1     Pending   0          55s
	kube-system   coredns-576cbf47c7-vg289         0/1     Pending   0          55s
	kube-system   etcd-master                      0/1     Pending   0          2s
	kube-system   kube-controller-manager-master   1/1     Running   0          9s
	kube-system   kube-proxy-7qbzl                 1/1     Running   0          55s
	kube-system   kube-scheduler-master            0/1     Pending   0          4s

# check component health
kubectl get cs
	NAME                 STATUS    MESSAGE              ERROR
	controller-manager   Healthy   ok                   
	scheduler            Healthy   ok                   
	etcd-0               Healthy   {"health": "true"} 

8. Join the nodes to the cluster

  1. Copy the kubelet sysconfig file from the master to node1 and node2

    This step came from an online article and is actually unnecessary: by default the file's parameter is empty, which means auto-detection.

    	# [run on the master node]
        scp /etc/sysconfig/kubelet node1:/etc/sysconfig/kubelet
        scp /etc/sysconfig/kubelet node2:/etc/sysconfig/kubelet
    
  2. Run the kubeadm join command [on each node that should join the cluster]

    	kubeadm join 10.120.67.25:6443 --token esj9lg.xcowk6gklt3ihjbl --discovery-token-ca-cert-hash sha256:22ec8f04f46472556ebb5639b1d4399a1f8e612cf5bce899c56da9a28d659423
    

    Note: if you need to add nodes later and have lost the kubeadm join parameters, regenerate them with:
    kubeadm token create --print-join-command

  3. Check the node status [run on the master node]

    kubectl get nodes
    	# Note: it can take a while for nodes to go from NotReady to Ready; be patient
    	NAME      STATUS     ROLES     AGE       VERSION
    	master    Ready      master    20m       v1.12.0
    	node1     Ready      <none>    50s       v1.12.0
    	node2     Ready      <none>    40s       v1.12.0
    	
    kubectl get pod -n kube-system -o wide
    	NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE
    	calico-node-ljqcv                2/2     Running   0          2m11s   10.120.67.27   node2    <none>
    	calico-node-lktsv                2/2     Running   0          2m38s   10.120.67.26   node1    <none>
    	calico-node-txj2z                2/2     Running   0          29m     10.120.67.25   master   <none>
    	coredns-576cbf47c7-m97nz         1/1     Running   0          30m     192.168.0.2    master   <none>
    	coredns-576cbf47c7-vg289         1/1     Running   0          30m     192.168.0.3    master   <none>
    	etcd-master                      1/1     Running   0          29m     10.120.67.25   master   <none>
    	kube-apiserver-master            1/1     Running   0          29m     10.120.67.25   master   <none>
    	kube-controller-manager-master   1/1     Running   0          29m     10.120.67.25   master   <none>
    	kube-proxy-7qbzl                 1/1     Running   0          30m     10.120.67.25   master   <none>
    	kube-proxy-g2xfc                 1/1     Running   0          2m11s   10.120.67.27   node2    <none>
    	kube-proxy-ln552                 1/1     Running   0          2m38s   10.120.67.26   node1    <none>
    	kube-scheduler-master            1/1     Running   0          29m     10.120.67.25   master   <none>
    

IV. Install the kubernetes-dashboard add-on [master node]

1. Generate a self-signed certificate

In production you would want a certificate issued by a real public CA; for this test a self-signed certificate is used. Without a certificate the dashboard cannot be opened in Chrome, which fails with the error shown below.
(screenshot: Chrome certificate error)

The certificate is generated with openssl here; see the official documentation for details.

openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
openssl req -new -key dashboard.key -out dashboard.csr
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# this approach is only for testing and verification
mkdir /opt/certs
mv dashboard.crt dashboard.key /opt/certs/
scp -r /opt/certs  node1:/opt/
scp -r /opt/certs node2:/opt/
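
An optional sanity check on the self-signed certificate before copying it to the nodes:

# show the subject and validity window of the generated certificate
openssl x509 -in /opt/certs/dashboard.crt -noout -subject -dates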

2. Download kubernetes-dashboard.yaml

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0

3. Edit kubernetes-dashboard.yaml

Only the modified parts are called out here; the rest is left as-is. Some of the changes I do not fully understand yet (see the referenced article).

  • Replace the image from the Google registry with one hosted on docker.io or a domestic mirror
  • Add type: NodePort and nodePort: 30001; the port is arbitrary, I simply copied the value from the referenced configuration
  • Change serviceAccountName: kubernetes-dashboard to serviceAccountName: kubernetes-dashboard-admin, as the referenced article does; whether this is strictly required is something to re-test later
  • With --auto-generate-certificates, accessing the dashboard failed with NET::ERR_CERT_INVALID; as a workaround, the generated dashboard.crt and dashboard.key are mounted into the container at /certs. This is good enough for testing only; how to handle certificates properly in production is still open

The full modified manifest:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
        image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          - --token-ttl=5400
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        hostPath:
          path: /opt/certs
          type: Directory
        #secret:
        #  secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

4. Install the dashboard

kubectl apply -f kubernetes-dashboard.yaml
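
To confirm the dashboard came up and the NodePort is exposed:

# the pod should reach Running and the service should show 443:30001/TCP
kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard
kubectl get svc -n kube-system kubernetes-dashboard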

5. Grant the dashboard service account cluster-admin rights

Without this grant the dashboard reports authorization errors. Create kubernetes-dashboard-admin.rbac.yaml:

# ~] vi  kubernetes-dashboard-admin.rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

kubectl apply -f  kubernetes-dashboard-admin.rbac.yaml

6. Access the dashboard [token login]

  1. Get the token [master node]
kubectl -n kube-system get secret | grep kubernetes-dashboard-admin|awk '{print "secret/"$1}'|xargs kubectl describe -n kube-system|grep token:|awk -F : '{print $2}'|xargs echo

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1xN25rNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijc2OGY4OTRmLWM2NjgtMTFlOC1hZWVjLWZhMTYzZTllZGZjYyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.EAWV6UEZHHNorRLdZzx7MBYzoWYtviRxQXL6A35Np-LXD51v9zq0AEGOPlCuhVUwVZWDqKS131G-Fz67heLCt8H-R0cplOryp6Wnn1K9LzmbdCqAf-7q5VeY3j0Xj7paxdpAX70pSGGLWZrw6BGxrHetGDS2AoV3zaoTXWqn3PYdk-rtEnvTcvPwQWShR1BWI749IXwbOnv3g8f6kHfUQZ9xP6duKQ422PHYdreRqrq9Aym-Nv3PFHhvItWBeGhy1Vr39AOB4vECGAP3tS4-j5mXqj4enCVEjgmbgQ6u__qPT-m-kXcjQyXc1BpyTdr3EzLBQYc4gAi8uIl99CTnEQ
  2. Open the dashboard with the token obtained above:
    https://10.120.67.25:30001
    (screenshot)

  3. After a successful login it looks like this:
    (screenshot: dashboard after login)
    That wraps up this first write-up: the cluster now has three nodes, one master and two workers. There are many concepts involved, and three namespaces are created by default; what these concepts mean and how to use them is the goal of the experiments to come.
    (screenshot)
