Kubernetes + Dashboard Cluster Setup

1、Environment

  • The cluster is deployed with the kubeadm tool
    • (There is also a binary-based deployment method, but it requires installing every k8s component by hand, which is tedious)
    • kubeadm is the official Kubernetes tool for quickly deploying a Kubernetes cluster
  • Based on Kubernetes v1.17.0

2、Requirements

  • k8s imposes certain requirements on the servers' configuration

1、One or more machines running one of the following systems:

  • Ubuntu 16.04+
  • Debian 9+
  • CentOS 7+
  • Red Hat Enterprise Linux (RHEL) 7+
  • Fedora 25+
  • HypriotOS v1.0.1+
  • Flatcar Container Linux (tested with version 2512.3.0)

2、2 GB or more of RAM per machine (less leaves little room for your applications)
3、2 or more CPU cores
4、Full network connectivity between all machines in the cluster (public or private network both work), plus access to the Internet
5、A unique hostname, MAC address, and product_uuid on every node
6、Certain ports must be open on the machines; see the official documentation for details
7、Swap disabled. You MUST disable swap for the kubelet to work properly
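Requirements 2, 3, and 7 can be checked with a small shell helper before running kubeadm. A sketch; the `check_node` function and its thresholds are illustrative, not part of any k8s tooling:

```shell
# Hypothetical helper: verify CPU count, RAM (MB) and swap (bytes)
# against the kubeadm minimums listed above.
check_node() {
  local cpus=$1 ram_mb=$2 swap_bytes=$3
  [ "$cpus" -ge 2 ]       || { echo "FAIL: need >= 2 CPU cores"; return 1; }
  [ "$ram_mb" -ge 2048 ]  || { echo "FAIL: need >= 2 GB RAM"; return 1; }
  [ "$swap_bytes" -eq 0 ] || { echo "FAIL: swap must be disabled"; return 1; }
  echo "OK"
}

check_node 2 4096 0   # → OK
# On a real host, feed in live values:
#   check_node "$(nproc)" "$(free -m | awk '/^Mem:/{print $2}')" \
#              "$(free -b | awk '/^Swap:/{print $2}')"
```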




3、Cluster Plan

Server list

主機 (Host)   角色 (Role)   IP
hadoop300    Master        192.168.2.100
hadoop301    Node-1        192.168.2.101
hadoop302    Node-2        192.168.2.102

Components to install on each host

Component   hadoop300(master)   hadoop301   hadoop302
kubelet     V                   V           V
Docker      V                   V           V
kubeadm     V                   V           V
kubectl     V                   V           V
  • Docker: used as the container runtime (CRI) for Kubernetes
    • tip: newer versions of k8s no longer use Docker as the runtime (the dockershim deprecation)
  • kubelet: runs on every node and is mainly responsible for starting containers and Pods
  • kubeadm: a tool that simplifies deploying k8s; rarely used after deployment, except for upgrades
  • kubectl: the k8s command-line tool, used to talk to the cluster and manage all of its resources

4、Architecture

4.1 Kubernetes component architecture

[Image: Kubernetes component architecture]

  • API Server: essentially a web service that handles create/read/update/delete for all k8s resources (Pod, Service, RC), and acts as the middleman for data exchange between the cluster's components
  • etcd: a distributed key-value database built on the Raft consensus protocol, used to store resource state
  • Controller Manager: the cluster's manager; manages replicas, nodes, resources, namespaces, services, etc. (e.g. RC, RS)
  • Scheduler: decides which Node a Pod runs on; once scheduled, the Pod is handed over to that node's kubelet
  • kubelet: carries out the Scheduler's placement decisions; it registers its Node with the API Server, periodically reports Node status to the Master, and manages Pods and the containers inside them
  • kube-proxy: implements Service communication and load balancing; it creates proxy rules for Pods and routes/forwards requests from a Service to its Pods, forming k8s's virtual forwarding network

4.2 Kubernetes network architecture

  • Node-to-node communication goes over the physical NIC

  • Pod-to-Pod communication across nodes goes through a virtual network layer (e.g. Flannel or Calico)

  • Pod-to-Pod communication on the same node goes through the docker virtual bridge

  • Containers within one Pod communicate through the shared network namespace of the pause container

  • External traffic goes through the Service layer

[Image: Kubernetes network architecture]

Flannel is the network plugin chosen here; its architecture:

[Image: Flannel architecture]

  • Flannel is a network fabric designed for Kubernetes: an application-layer overlay network. A flanneld process runs on every node and listens on a port; when it receives a packet it wraps the original traffic in another layer of protocol, much like HTTP adds a layer on top of TCP.
  • For example, node 1's flanneld listens on a port, encapsulates packets coming from node 1's Pods, and forwards them according to the protocol to node 2's flanneld, which unwraps them and delivers them to the target Pod, thus enabling cross-node Pod communication.
  • etcd, a distributed key-value database, maintains the mapping between every docker bridge and the Pod IPs beneath it; from this mapping, flannel can determine which host, which docker bridge, and which IP a destination Pod lives at, and it guarantees that every node's flanneld sees a consistent configuration. Each node's flanneld also watches etcd for changes, so it learns about node additions and removals in real time.

4.3 Workflow of deploying an application

  • When you run a kubectl command, it sends an HTTP request to the Kubernetes API server to create a new Replication Controller object in the cluster. The ReplicationController then creates a new Pod, and the Scheduler assigns it to a worker node. The kubelet on that node, seeing the Pod scheduled onto it, tells Docker to pull the specified image from the registry (since it is not present locally). Once the image is downloaded, Docker starts and runs the container.

    [Image: application deployment workflow]

5、Installation

The following assumes every server in the cluster already has docker installed, a unique hostname and a static IP configured, the firewall disabled, and passwordless SSH set up between hosts

5.1 Preparation

5.1.1、Disable SELINUX

  • SELinux's main job is to minimize the resources that service processes on the system can access

Configure on every host

  • Edit /etc/selinux/config as follows:
SELINUX=disabled

Or run the following command directly

  • xcall is a helper that runs one command on all servers at once
[hadoop@hadoop300 ~]$ xcall "sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config"
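The substitution can be tried on a scratch copy before touching every host; the temp file below is a stand-in for /etc/selinux/config:

```shell
# Dry-run of the sed expression on a throwaway copy of the config.
tmpcfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmpcfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$tmpcfg"
grep '^SELINUX=' "$tmpcfg"   # → SELINUX=disabled
# On the real hosts, also run `setenforce 0` so the change takes effect
# immediately; the config file is only read at boot.
```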

5.1.2、Disable the swap partition

  • Linux swap, also called virtual memory: when physical memory runs low, the OS moves data that is temporarily unused from RAM to the swap partition to free memory for running programs. k8s considers this very harmful to performance

Configure on every host
Edit /etc/fstab as follows:

# Just comment it out
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

After a reboot, check the swap status

[root@hadoop300 ~]# free -mh
              total        used        free      shared  buff/cache   available
Mem:           3.8G        815M        2.2G         10M        795M        2.8G
Swap:            0B          0B          0B
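Editing fstab only keeps swap from being mounted at the next boot; `swapoff -a` turns it off right away. The fstab edit itself can be demonstrated on a throwaway copy (the sample line mirrors the one commented out above):

```shell
# Turn swap off immediately (run as root on each real host):
#   swapoff -a

# The fstab edit, demonstrated on a throwaway copy:
fstab=$(mktemp)
echo '/dev/mapper/centos-swap swap swap defaults 0 0' > "$fstab"
sed -i '/\sswap\s/s/^/#/' "$fstab"
cat "$fstab"   # → #/dev/mapper/centos-swap swap swap defaults 0 0
```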

5.1.3、Configure the Aliyun registry mirror for docker (optional)

  • With poor network conditions in mainland China, k8s image pulls often fail and take application deployments down with them

Configure on every host
Edit /etc/docker/daemon.json as follows

{
"registry-mirrors":["https://jccl15o4.mirror.aliyuncs.com"]
}

Then reload and restart docker

[root@hadoop300 tmp]# systemctl daemon-reload
[root@hadoop300 tmp]# systemctl restart docker
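dockerd refuses to start if daemon.json is not valid JSON, so a syntax check before restarting is cheap insurance (shown here on a scratch copy):

```shell
# Validate the JSON before running `systemctl restart docker`.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "registry-mirrors": ["https://jccl15o4.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json: valid JSON"
# After restarting, confirm the mirror is active with:
#   docker info | grep -A1 'Registry Mirrors'
```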

5.1.4、Configure the Aliyun yum repository

Create the file /etc/yum.repos.d/kubernetes.repo

[root@hadoop300 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

5.2 Install kubeadm, kubelet and kubectl

  • All of these are available from the yum repo configured above; install them with yum directly

On every host

[root@hadoop300 ~]# yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0

Enable kubelet at boot (it will restart in a loop until kubeadm init/join supplies its configuration; that is expected)

[root@hadoop300 ~]# systemctl enable kubelet
[root@hadoop300 ~]# systemctl start kubelet

5.3 Deploy the Master node

  • Initialize the k8s cluster with the kubeadm tool
    • Make sure docker is running
    • Make sure this is executed as root

5.3.1 Initialize the k8s cluster

Run on the hadoop300 host

[root@hadoop300 ~]# kubeadm init \
  --apiserver-advertise-address=192.168.2.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.0 \
  --pod-network-cidr=10.244.0.0/16 

[Parameters]

  • apiserver-advertise-address: the IP address the API Server advertises
  • image-repository: the default registry k8s.gcr.io is unreachable from mainland China, so point at the Aliyun mirror registry instead
  • kubernetes-version: the k8s version to deploy
  • pod-network-cidr: the IP range for the Pod network; the correct value depends on the network plugin. For the Flannel plugin chosen here, use 10.244.0.0/16

Wait for it to finish and save the output log; it will be needed later

[root@hadoop300 hadoop]# kubeadm init \
>   --apiserver-advertise-address=192.168.2.100 \
>   --image-repository registry.aliyuncs.com/google_containers \
>   --kubernetes-version v1.17.0 \
>   --pod-network-cidr=10.244.0.0/16 
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hadoop300 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hadoop300 localhost] and IPs [192.168.2.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hadoop300 localhost] and IPs [192.168.2.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0125 01:09:06.409400  9705 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.503496 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[mark-control-plane] Marking the node hadoop300 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node hadoop300 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ycni4f.ru3eby1og6qasmzc
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.100:6443 --token ycni4f.ru3eby1og6qasmzc \
    --discovery-token-ca-cert-hash sha256:d46b209f85303f3bffcdacd4ecc4f3856eb4198ce41c60f871f2c0a8d6ce162f 

5.3.2 Configure the kubectl tool

  • After initializing the cluster, kubectl does not work yet; it needs to be configured first
  • As the init log shows, /etc/kubernetes/admin.conf just needs to be made reachable: either point an environment variable (KUBECONFIG) at it, or copy it into place as the init log suggests
[root@hadoop300 ~]# mkdir -p $HOME/.kube # this directory holds kubectl's config and cache
[root@hadoop300 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@hadoop300 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config # give ownership to the current user

Check the node status with kubectl

  • Because no flat Pod network has been deployed yet, the master node still shows NotReady
[hadoop@hadoop300 ~]$ kubectl get node
NAME        STATUS     ROLES    AGE   VERSION
hadoop300   NotReady   master   17m   v1.17.0

5.4 Install the Flannel network plugin

Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 (on every server)

[root@hadoop300 ~]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
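Writing to /proc does not survive a reboot; the standard way to make the setting persistent is a sysctl drop-in (the file name below is arbitrary):

```
# /etc/sysctl.d/k8s.conf -- loaded at boot, or applied immediately via `sysctl --system`
net.bridge.bridge-nf-call-iptables = 1
```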
  • Then deploy Flannel straight from its resource manifest (run on the master node only)
[hadoop@hadoop300 ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

After a while, check that the flannel pod is Running; the master node then becomes Ready

[hadoop@hadoop300 ~]$ kubectl get pod  --all-namespaces
NAME                                READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-2jq9g             1/1     Running   0          58m
coredns-9d85f5447-tlgbk             1/1     Running   0          58m
etcd-hadoop300                      1/1     Running   1          58m
kube-apiserver-hadoop300            1/1     Running   1          58m
kube-controller-manager-hadoop300   1/1     Running   1          58m
kube-flannel-ds-gqj6v               1/1     Running   0          22m
kube-proxy-w6f49                    1/1     Running   1          58m
kube-scheduler-hadoop300            1/1     Running   1          58m

# check the node status
[hadoop@hadoop300 ~]$ kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
hadoop300   Ready    master   60m   v1.17.0

5.5 Join the slave nodes to the cluster

The join command

  • kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

The token and hash come from the init log above; if they were not saved, a fresh join command can be printed on the master with kubeadm token create --print-join-command

Join the two nodes hadoop301 and hadoop302 to the cluster as follows

[root@hadoop301 ~]# kubeadm join 192.168.2.100:6443  \
    --token ycni4f.ru3eby1og6qasmzc \
    --discovery-token-ca-cert-hash sha256:d46b209f85303f3bffcdacd4ecc4f3856eb4198ce41c60f871f2c0a8d6ce162f
    
[root@hadoop302 ~]# # same command as above

This may take a while, since the slave nodes can be slow to pull the Flannel image.
After that, all three nodes show Ready

[hadoop@hadoop300 ~]$ kubectl get node -o wide
NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
hadoop300   Ready    master   88m   v1.17.0   192.168.2.100   <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://1.13.1
hadoop301   Ready    <none>   13m   v1.17.0   192.168.2.101   <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://1.13.1
hadoop302   Ready    <none>   17m   v1.17.0   192.168.2.102   <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://1.13.1

6 Testing k8s: Hello World

  • Deploy an application to the k8s cluster to verify that everything works

6.1 Build the application image and push it

  • The application here is a Spring Boot app exposing one endpoint, /user
  • Have the application's jar ready first

1、Write the Dockerfile

FROM java:8
VOLUME /tmp
ADD test.jar test.jar
# exec-form ENTRYPOINT does not go through a shell, so "nohup" and "&" would
# be passed as literal arguments; the JVM should run in the foreground as PID 1
ENTRYPOINT ["java","-jar","/test.jar"]

2、Build the image

[root@hadoop300 tmp]# pwd
/home/hadoop/tmp
[root@hadoop300 tmp]# ll
-rw-rw-r-- 1 hadoop hadoop       98 Jan 31 21:01 Dockerfile
-rw-rw-r-- 1 hadoop hadoop 19329878 Jan 31 21:00 test.jar
[root@hadoop300 tmp]# docker build -t springboot01 .
[root@hadoop300 tmp]# docker images
REPOSITORY      TAG       IMAGE ID         CREATED             SIZE
springboot01  latest     d04728534480      4 days ago         663 MB

3、Push to the Docker Hub registry

[root@hadoop300 tmp]# docker login
# the image was built as springboot01; tag it with the Docker Hub namespace before pushing
[root@hadoop300 tmp]# docker tag springboot01 burukeyou/springboot01
[root@hadoop300 tmp]# docker push burukeyou/springboot01

On the Docker Hub site you can see the push succeeded
[Image: Docker Hub repository page]

6.2、Write the Pod manifest

vim demo.yaml

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod01			# pod name
  namespace: default		# namespace it belongs to
  labels:
    demo: java				# label the Pod
    
spec:
  containers:
    - name: springboot01			# container name
      image: burukeyou/springboot01	# image (pushed above)
      imagePullPolicy: IfNotPresent	# use the local image if present, otherwise pull from the registry
      ports:
        - containerPort: 8080   # container port
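As an aside, a bare Pod like the one above is not rescheduled if its node fails. In practice the same template is usually wrapped in a Deployment (which manages a ReplicaSet, the successor to the RC/RS mentioned in section 4). A sketch; the Deployment name is made up for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deploy01        # hypothetical name
spec:
  replicas: 2                # keep two copies of the Pod running
  selector:
    matchLabels:
      demo: java             # must match the Pod template's labels
  template:
    metadata:
      labels:
        demo: java
    spec:
      containers:
        - name: springboot01
          image: burukeyou/springboot01
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
```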

6.3 Deploy it

  • You can see the Pod was scheduled onto the hadoop302 node
  • Calling the /user endpoint via the Pod IP returns the string "user: {name: 30}"
[root@hadoop300 tmp]# kubectl create -f demo.yaml 
pod/demo-pod01 created
[root@hadoop300 tmp]# kubectl get pod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE        
demo-pod01   1/1     Running   0         42s   10.244.1.20   hadoop302   
[root@hadoop300 tmp]# curl http://10.244.1.20:8080/user
user: {name: 30}
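Note that 10.244.1.20 is a Pod-network address, reachable only from inside the cluster (the curl above works because it runs on a cluster node). To reach the app from outside, a NodePort Service could be layered on top, selecting the Pod via the demo: java label from demo.yaml. A sketch; the Service name and ports are made up:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc01           # hypothetical name
spec:
  type: NodePort
  selector:
    demo: java               # selects demo-pod01 via its label
  ports:
    - port: 8080             # Service port inside the cluster
      targetPort: 8080       # containerPort of the Pod
      nodePort: 30080        # any free port in the 30000-32767 range
```

The endpoint would then be reachable at http://<any-node-IP>:30080/user.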

7、Dashboard deployment (optional)

  • A web UI for managing the k8s cluster
  • It is itself just an application deployed into the cluster, so it too is installed from a resource manifest

7.1 Deploy from the resource manifest

1、Download the manifest and edit it

[hadoop@hadoop300 tmp]$ wget http://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
[root@hadoop300 tmp]# vim recommended.yaml

2、Change the Service type to NodePort so it can be reached from outside the cluster

# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort			# change the Service to NodePort type
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443		# port for accessing the dashboard UI (your choice)
  selector:
    k8s-app: kubernetes-dashboard

Deploy the dashboard

[root@hadoop300 tmp]# kubectl create -f recommended.yaml

7.2 Check that it is up

Check that the Dashboard Pod and Service started successfully

[root@hadoop300 ~]# kubectl get pods --all-namespaces 
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kubernetes-dashboard   dashboard-metrics-scraper-76585494d8-mclvf   1/1     Running   0          7s
kubernetes-dashboard   kubernetes-dashboard-5996555fd8-2xn6c        1/1     Running   0          8s


[root@hadoop300 ~]# kubectl get services --all-namespaces 
NAMESPACE              NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.96.115.150   <none>        8000/TCP                 41s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.96.54.85     <none>        443:30443/TCP            42s

Then open it in a browser at: https://<any-node-IP>:30443/

[Image: dashboard login page]

A login is required and two methods are offered; token login is used here, so first create a user to log in with

7.3 Create an admin user

1、Write a manifest that creates the admin user and binds it to a role: vim create-admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: hadoop	#  user name (your choice)
  namespace: kubernetes-dashboard

---
# bind the cluster-admin role to the hadoop user
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hadoop # same as above
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: hadoop	#  same as above
  namespace: kubernetes-dashboard

Apply it

[root@hadoop300 tmp]# kubectl apply -f create-admin.yaml

Get the token

# list all serviceaccounts and secrets
[root@hadoop300 tmp]# kubectl get sa,secrets -n kubernetes-dashboard
NAME                                  SECRETS   AGE
serviceaccount/default                1         61m
serviceaccount/hadoop                 1         31s
serviceaccount/kubernetes-dashboard   1         61m

NAME                                      TYPE                                  DATA   AGE
secret/default-token-wdt66                kubernetes.io/service-account-token   3      61m
secret/hadoop-token-6g859                 kubernetes.io/service-account-token   3      31s
secret/kubernetes-dashboard-certs         Opaque                                0      61m
secret/kubernetes-dashboard-csrf          Opaque                                1      61m
secret/kubernetes-dashboard-key-holder    Opaque                                2      61m
secret/kubernetes-dashboard-token-7thdm   kubernetes.io/service-account-token   3      61m

# view the token stored in the hadoop user's secret
[root@hadoop300 tmp]# kubectl describe secret hadoop-token-6g859 -n kubernetes-dashboard
Name:         hadoop-token-6g859
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: hadoop
              kubernetes.io/service-account.uid: 6a63be78-5de6-4892-8123-6a66378df504

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjFNQzA0QXZhdXJTeUtVSHhPQ3pldkZ6NWdRM285cTlPUjdYTXoxWjJ1Q00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJoYWRvb3AtdG9rZW4tNmc4NTkiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiaGFkb29wIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNmE2M2JlNzgtNWRlNi00ODkyLTgxMjMtNmE2NjM3OGRmNTA0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmhhZG9vcCJ9.wr7g1CmIrgv9Jj-Ap1leow8zefIzotzS4LE_qO5VNziVqEcnhCYzP2q6GnkUYR4CJt7YtEBqF6OLvlB5mHBPmFtHgtp-LiFUujScKgDdx8jdTBcVmeb39Fw_knjBuSLBOd3fqdvXumBajlwKpDQL_gYnkhc7bxn5FICYfalf1PF3AoPq8WjR2VoCDnGBB1qeaT87e6xnflScx3l6NNSEN3Bl8Ymt8WJRi4Ch0nhUZPLAXZxgO3kt1-TWHo5wASYiMW4Xwb-kPv6yAgoNTm9h6jgGqimf2InEW9rGLbnRAR0O9ZelFI6G4bE5sXtdNL_YdaVTRcmUYUKusaMpEADquQ

One-liner to fetch the token

kubectl describe secret $(kubectl get secrets -n kubernetes-dashboard | grep hadoop | awk '{print $1}') -n kubernetes-dashboard | grep token:
# or decode it directly: kubectl get secret hadoop-token-6g859 -n kubernetes-dashboard -o jsonpath='{.data.token}' | base64 -d

Then log in with that token

[Image: dashboard UI after logging in]

Troubleshooting

The Dashboard won't open in the browser

Three approaches:
1、Firefox can be forced to open it

2、If Chrome refuses with a security warning, type thisisunsafe on the warning page itself (just type it on the keyboard, not into the address bar)

3、Redeploy the Dashboard with certificate verification disabled

a) Remove the Dashboard deployment

[root@hadoop300 tmp]# kubectl delete -f recommended.yaml

In recommended.yaml, comment out the Secret so it is not created

# ------------------- Dashboard Secret------------------- #
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque

Then deploy the dashboard again

[root@hadoop300 tmp]# kubectl create -f recommended.yaml

Then generate the secret that was just commented out, i.e. kubernetes-dashboard-certs, yourself

# generate the dashboard.key file
[root@hadoop300 cert]# openssl genrsa -out dashboard.key 2048
[root@hadoop300 cert]# ll
-rw-r--r-- 1 root root 1675 Feb  4 02:27 dashboard.key
# generate the dashboard.csr file
[root@hadoop300 cert]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key
[root@hadoop300 cert]# ll
-rw-r--r-- 1 root root  903 Feb  4 02:28 dashboard.csr
-rw-r--r-- 1 root root 1675 Feb  4 02:27 dashboard.key
# generate the self-signed certificate
[root@hadoop300 cert]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# create a generic secret named kubernetes-dashboard-certs from the self-signed
# certificate; it contains two entries: dashboard.key and dashboard.crt
[root@hadoop300 cert]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt  -n kubernetes-dashboard
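The key/CSR/certificate steps can be sanity-checked anywhere openssl is installed: the certificate must carry the same RSA modulus as the key, otherwise the dashboard will fail its TLS handshake. The `/CN` subject below is arbitrary:

```shell
# Reproduce the steps in a scratch directory and verify key/cert agreement.
cd "$(mktemp -d)"
openssl genrsa -out dashboard.key 2048 2>/dev/null
openssl req -new -key dashboard.key -out dashboard.csr -subj "/CN=kubernetes-dashboard"
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt 2>/dev/null
key_mod=$(openssl rsa -noout -modulus -in dashboard.key)
crt_mod=$(openssl x509 -noout -modulus -in dashboard.crt)
[ "$key_mod" = "$crt_mod" ] && echo "key and certificate match"
```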

# check that the secret was created successfully
[root@hadoop300 cert]# kubectl get sa,secrets -n kubernetes-dashboard
NAME                                  SECRETS   AGE
serviceaccount/default                1         23h
serviceaccount/kubernetes-dashboard   1         23h

NAME                                      TYPE                                  DATA   AGE
secret/default-token-92k64                kubernetes.io/service-account-token   3      23h
secret/kubernetes-dashboard-certs         Opaque                                2      23h
secret/kubernetes-dashboard-csrf          Opaque                                1      23h
secret/kubernetes-dashboard-key-holder    Opaque                                2      23h
secret/kubernetes-dashboard-token-lcxrm   kubernetes.io/service-account-token   3      23h

10、Tip the author

If you found this article useful, you can tip the author (Alipay)

[Image: Alipay QR code]
