Kubernetes Field Notes (4): Setting Up a 3-Node Kubernetes 1.12.0 Cluster with kubeadm

For the full table of contents of the Kubernetes Field Notes series, see: Kubernetes Field Notes - Table of Contents

Related links:

This post records how, starting with almost no hands-on Kubernetes experience (though I had used Docker before), I followed the official documentation and various blog posts to deploy a 3-node Kubernetes cluster with kubeadm, where:

  1. There is 1 master node and 2 worker nodes;
  2. Kubernetes is version 1.12.0 (the latest at the time) and Docker is docker-ce 18.06.1.
    Docker 18.06 is used rather than a newer release because the Kubernetes documentation states that this version has been validated against Kubernetes 1.12.0, while newer releases have not and may have compatibility issues you would need to resolve yourself. For a first deployment I chose not to make trouble for myself and stayed with the validated version.

The notes follow the actual order of operations: prepare the operating system --> install the basic Kubernetes components --> initialize the cluster --> install add-ons --> verify cluster access.

Hostname  IP address     OS          Role    Software versions            Notes
master    10.120.67.25   CentOS 7.5  master  k8s 1.12.0, docker-ce 18.06
node1     10.120.67.26   CentOS 7.5  node    k8s 1.12.0, docker-ce 18.06
node2     10.120.67.27   CentOS 7.5  node    k8s 1.12.0, docker-ce 18.06

The network plugin is Calico, deployed from the master node.

I. Update and prepare the operating system

1. Disable the firewall [all nodes]

systemctl stop firewalld.service
systemctl disable firewalld.service

2. Disable SELinux [all nodes]

sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0

3. Disable swap [all nodes]

If swap is left enabled, kubelet may fail to start; alternatively, keep swap on and adjust the kubelet configuration instead (a sketch of that option follows the commands below).

# My KVM virtual machines have no swap configured in the first place; the commands below (taken from an online article) are recorded here for completeness
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
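
For reference (kubelet is only installed in section III below): if you prefer to keep swap enabled, the alternative mentioned above is to tell kubelet not to fail on swap. A minimal sketch, assuming the RPM-packaged kubelet, which reads /etc/sysconfig/kubelet; note that kubeadm init/join would then also need --ignore-preflight-errors=Swap:

# Keep swap enabled and let kubelet tolerate it (testing only)
cat <<EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--fail-swap-on=false
EOF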

4. Configure kernel parameters (networking) [all nodes]

# My VM template already sets many kernel parameters; only the ones relevant here are listed.
yum install bridge-utils -y
modprobe br_netfilter

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the settings
sysctl --system

# Set ulimit values as needed; the values below are sufficient for this test environment.
cat <<EOF > /etc/security/limits.d/90-nproc.conf
*          soft    nproc     20480
*          hard    nproc     20480
*          soft    nofile    102400
*          hard    nofile    102400
root       soft    nproc     unlimited
EOF
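
Note that modprobe does not persist across reboots (and the system is rebooted in step 7 below). A small addition, assuming CentOS 7's systemd-modules-load mechanism, to make sure br_netfilter is loaded on every boot:

# Load br_netfilter automatically at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF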

5. Set hostnames and populate /etc/hosts [all nodes]

# Run the matching command on each node
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2

# On all nodes, add the resolution records (shown here with cat after editing)
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.120.67.25  master
10.120.67.26  node1
10.120.67.27  node2

6. Configure the time service [all nodes]

The master syncs against the default public NTP sources; the worker nodes all use the master as their time source.

# master configuration
	vi /etc/chrony.conf 
	...
	allow 10.120.67.0/24
	
	# enable and start chronyd
	systemctl enable chronyd.service
	systemctl start chronyd.service

# configuration on each worker node
	vi /etc/chrony.conf
	...
	server master iburst  # replace the default server entries with master [this comment is not part of the config file]
	
	# enable and start the service
	systemctl enable chronyd.service
	systemctl start chronyd.service
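
To confirm that each node is actually synchronizing against the intended source, the standard chrony client commands can be used (these are a verification aid, not part of the original setup):

# List configured time sources and their reachability
chronyc sources -v
# Show current offset, stratum and sync status
chronyc tracking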

7. Update the system and reboot [all nodes]

yum -y install epel-release
yum -y update
# reboot after the update completes
reboot

II. Install docker-ce

1. Install dependencies [all nodes]

# My VM template already includes many system tools; these are the ones the documentation calls for.
yum -y install yum-utils device-mapper-persistent-data lvm2 conntrack-tools bridge-utils ipvsadm

2. Add the Docker package repository (domestic mirror) [all nodes]

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# List the available docker-ce versions
yum list docker-ce --showduplicates|sort -r
	docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable     # the version used here
	docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
	docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable         
	docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
	docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable

3. Install the specified docker-ce version [18.06.1] [all nodes]

yum -y install docker-ce-18.06.1.ce

4. Install the yum versionlock plugin to pin the Docker version [all nodes]

yum -y install yum-plugin-versionlock
yum versionlock docker-ce
yum versionlock list
    0:docker-ce-18.06.1.ce-3.el7.*
    versionlock list done

5. Enable and start Docker [all nodes]

systemctl enable docker.service
systemctl start docker.service
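
Optionally, a Docker registry mirror can speed up image pulls from inside mainland China. A minimal sketch, assuming the daemon reads /etc/docker/daemon.json (the mirror URL is only an example; substitute one you trust):

# Configure a registry mirror for Docker Hub pulls (optional)
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker.service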

III. Install and configure the Kubernetes components

1. Configure the Kubernetes package repository [all nodes]

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum clean all && yum makecache

2. Install kubeadm, kubelet and kubectl [all nodes]

yum -y install kubelet kubeadm kubectl
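
At the time of writing 1.12.0 was the latest release, so installing without a version specifier works. If you repeat this later, it is safer to pin the versions explicitly and versionlock them, just as was done for docker-ce; a sketch:

# Install a specific, tested version instead of whatever is latest
yum -y install kubelet-1.12.0 kubeadm-1.12.0 kubectl-1.12.0
yum versionlock kubelet kubeadm kubectl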

3. Start kubelet [all nodes]

systemctl enable kubelet.service
systemctl start kubelet.service
# Note: kubelet will not run cleanly at this point; it recovers once kubeadm writes its configuration in the later steps
systemctl status kubelet.service

4. Dealing with pulling the Kubernetes images [all nodes]

The Kubernetes images are hosted on a Google subdomain (k8s.gcr.io). If you are outside mainland China, or otherwise have working access to Google (in my case things were still flaky even with a proxy configured), this step is unnecessary and you can go straight to the next one; otherwise kubeadm errors out and cannot continue. An excerpt of the errors:

[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver:v1.12.0]: exit status 1
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-controller-manager:v1.12.0]: exit status 1
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-scheduler:v1.12.0]: exit status 1
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-proxy:v1.12.0]: exit status 1
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/pause:3.1]: exit status 1
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/etcd:3.2.24]: exit status 1
	[ERROR ImagePull]: failed to pull image [k8s.gcr.io/coredns:1.2.2]: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

This step pulls the component images from mirror repositories and re-tags them; you could also serve them from a private registry of your own.

# Step 1: pull the images
docker pull mirrorgooglecontainers/kube-apiserver:v1.12.0
docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.0
docker pull mirrorgooglecontainers/kube-scheduler:v1.12.0
docker pull mirrorgooglecontainers/kube-proxy:v1.12.0
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.2

# Step 2: re-tag them as k8s.gcr.io; adjust the version numbers as appropriate
docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.12.0 k8s.gcr.io/kube-proxy:v1.12.0
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.12.0 k8s.gcr.io/kube-scheduler:v1.12.0
docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.12.0 k8s.gcr.io/kube-apiserver:v1.12.0
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.12.0 k8s.gcr.io/kube-controller-manager:v1.12.0
docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24  k8s.gcr.io/etcd:3.2.24
docker tag docker.io/mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.2.2  k8s.gcr.io/coredns:1.2.2

# Step 3: docker rmi the now-unneeded mirror tags
docker rmi docker.io/mirrorgooglecontainers/kube-proxy:v1.12.0
docker rmi docker.io/mirrorgooglecontainers/kube-scheduler:v1.12.0
docker rmi docker.io/mirrorgooglecontainers/kube-apiserver:v1.12.0
docker rmi docker.io/mirrorgooglecontainers/kube-controller-manager:v1.12.0
docker rmi docker.io/mirrorgooglecontainers/etcd:3.2.24
docker rmi docker.io/mirrorgooglecontainers/pause:3.1
docker rmi docker.io/coredns/coredns:1.2.2
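
The three steps above can also be rolled into a short loop; a sketch assuming the same mirror repositories and tags listed above:

# Pull from the mirror, re-tag as k8s.gcr.io, then drop the mirror tag
for img in kube-apiserver:v1.12.0 kube-controller-manager:v1.12.0 \
           kube-scheduler:v1.12.0 kube-proxy:v1.12.0 pause:3.1 etcd:3.2.24; do
    docker pull mirrorgooglecontainers/${img}
    docker tag  mirrorgooglecontainers/${img} k8s.gcr.io/${img}
    docker rmi  mirrorgooglecontainers/${img}
done
docker pull coredns/coredns:1.2.2
docker tag  coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker rmi  coredns/coredns:1.2.2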

5. Initialize the master with kubeadm [run on the master node]

The key line in the output is: Your Kubernetes master has initialized successfully!

# Specify the Kubernetes version and set --pod-network-cidr (this value is used by the network plugin below):
kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=192.168.0.0/16

[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.120.67.25 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.120.67.25]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 20.502782 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: ainpz2.3ctx0qrsr1gj78tw
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.120.67.25:6443 --token ainpz2.3ctx0qrsr1gj78tw --discovery-token-ca-cert-hash sha256:e2566c3261a35872ceb1a2e4866d90c9c988cbf8f842406e043a7325b2a19657

6. Set up the kubeconfig [run on the master node]

I did this as root; there is also an admin account (a regular account used for application deployment), and I set up the kubeconfig under both accounts (in practice all testing was done as root).

# Switch to the relevant account and run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
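
With the kubeconfig in place, a quick sanity check that kubectl can reach the API server (the master will report NotReady until the network plugin is installed in the next step):

kubectl cluster-info
kubectl get nodes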

7. Install the Calico network plugin [run on the master node]

# Cluster state before the network plugin is installed
kubectl get pod -n kube-system
	NAME                             READY     STATUS    RESTARTS   AGE
	coredns-78fcdf6894-dxnj2         0/1       Pending   0          14m
	coredns-78fcdf6894-tbzx2         0/1       Pending   0          14m
	etcd-master                      1/1       Running   1          4m
	kube-apiserver-master            1/1       Running   1          4m
	kube-controller-manager-master   1/1       Running   1          4m
	kube-proxy-jhrw4                 1/1       Running   1          14m
	kube-scheduler-master            1/1       Running   1          4m

# Open question: what causes pods to show Pending or NodeLost here, and how should it be resolved?

# Install Calico; the version in the URLs can be changed as needed (the manifests can also be downloaded and applied locally)
# Optionally pre-pull the images first
docker pull quay.io/calico/typha:v0.7.4
docker pull quay.io/calico/node:v3.1.3
docker pull quay.io/calico/cni:v3.1.3

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

kubectl get pods --all-namespaces
	NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
	kube-system   calico-node-txj2z                1/2     Running   0          4s
	kube-system   coredns-576cbf47c7-m97nz         0/1     Pending   0          55s
	kube-system   coredns-576cbf47c7-vg289         0/1     Pending   0          55s
	kube-system   etcd-master                      0/1     Pending   0          2s
	kube-system   kube-controller-manager-master   1/1     Running   0          9s
	kube-system   kube-proxy-7qbzl                 1/1     Running   0          55s
	kube-system   kube-scheduler-master            0/1     Pending   0          4s

# Check component health
kubectl get cs
	NAME                 STATUS    MESSAGE              ERROR
	controller-manager   Healthy   ok                   
	scheduler            Healthy   ok                   
	etcd-0               Healthy   {"health": "true"} 
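
The --pod-network-cidr=192.168.0.0/16 passed to kubeadm init must match the pool Calico allocates from. A way to double-check, assuming the v3.1 manifest defines the pool through the CALICO_IPV4POOL_CIDR environment variable:

# Download the manifest locally and inspect the pool CIDR before applying it
curl -O https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml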

8. Join the worker nodes to the cluster

  1. Copy the kubelet sysconfig file from the master to node1 and node2

    This step comes from an online article and is actually unnecessary: by default the parameter in this file is empty, which means automatic detection.

    	# [run on the master node]
        scp /etc/sysconfig/kubelet node1:/etc/sysconfig/kubelet
        scp /etc/sysconfig/kubelet node2:/etc/sysconfig/kubelet
    
  2. Run the kubeadm join command [on each node that should join the cluster]

    	kubeadm join 10.120.67.25:6443 --token esj9lg.xcowk6gklt3ihjbl --discovery-token-ca-cert-hash sha256:22ec8f04f46472556ebb5639b1d4399a1f8e612cf5bce899c56da9a28d659423
    

    Note: if you add nodes later and no longer have the kubeadm join parameters, you can regenerate them with the command below (see also the sketch after this list for recovering the CA certificate hash by hand):
    kubeadm token create --print-join-command

  3. Check the node status [run on the master node]

    kubectl get nodes
    	# Note: it can take a while for a node to go from NotReady to Ready, so be patient
    	NAME      STATUS     ROLES     AGE       VERSION
    	master    Ready      master    20m       v1.12.0
    	node1     Ready      <none>    50s       v1.12.0
    	node2     Ready      <none>    40s       v1.12.0
    	
    kubectl get pod -n kube-system -o wide
    	NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE
    	calico-node-ljqcv                2/2     Running   0          2m11s   10.120.67.27   node2    <none>
    	calico-node-lktsv                2/2     Running   0          2m38s   10.120.67.26   node1    <none>
    	calico-node-txj2z                2/2     Running   0          29m     10.120.67.25   master   <none>
    	coredns-576cbf47c7-m97nz         1/1     Running   0          30m     192.168.0.2    master   <none>
    	coredns-576cbf47c7-vg289         1/1     Running   0          30m     192.168.0.3    master   <none>
    	etcd-master                      1/1     Running   0          29m     10.120.67.25   master   <none>
    	kube-apiserver-master            1/1     Running   0          29m     10.120.67.25   master   <none>
    	kube-controller-manager-master   1/1     Running   0          29m     10.120.67.25   master   <none>
    	kube-proxy-7qbzl                 1/1     Running   0          30m     10.120.67.25   master   <none>
    	kube-proxy-g2xfc                 1/1     Running   0          2m11s   10.120.67.27   node2    <none>
    	kube-proxy-ln552                 1/1     Running   0          2m38s   10.120.67.26   node1    <none>
    	kube-scheduler-master            1/1     Running   0          29m     10.120.67.25   master   <none>
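
As mentioned in item 2 above, if only the CA certificate hash for --discovery-token-ca-cert-hash is missing, it can be recomputed on the master from /etc/kubernetes/pki/ca.crt; a sketch based on the standard kubeadm documentation:

# List existing bootstrap tokens
kubeadm token list
# Recompute the sha256 hash of the cluster CA public key
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'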
    

IV. Install the kubernetes-dashboard add-on [master node]

1. Generate a self-signed certificate

In production you need a properly issued public certificate; for this test a self-signed certificate is used. Without a certificate, Chrome refuses to open the dashboard with an error page like the one below.
[screenshot: Chrome certificate error]

The self-signed certificate is generated with openssl; see the official documentation for the details.

openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
openssl req -new -key dashboard.key -out dashboard.csr
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# This approach is for testing and verification only
mkdir /opt/certs
mv dashboard.crt dashboard.key /opt/certs/
scp -r /opt/certs  node1:/opt/
scp -r /opt/certs node2:/opt/
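
To sanity-check the generated certificate before mounting it into the dashboard container:

# Show the certificate subject and its validity window
openssl x509 -in /opt/certs/dashboard.crt -noout -subject -dates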

2. Download kubernetes-dashboard.yaml

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0

3. Edit kubernetes-dashboard.yaml

The list below covers only the modifications; everything else is left as in the upstream file. I do not yet fully understand the reasoning behind some of these changes.

  • Replace the image reference from Google's registry (k8s.gcr.io) with docker.io or a domestic mirror
  • Add type: NodePort and nodePort: 30001 to the Service; the port is up to you, I simply followed the reference article
  • Change serviceAccountName: kubernetes-dashboard to serviceAccountName: kubernetes-dashboard-admin, again following the reference article; how to get by without this change is something to test next time
  • With --auto-generate-certificates alone the browser reported NET::ERR_CERT_INVALID; to work around it, the dashboard.crt and dashboard.key generated above are mounted into the container at /certs. This is only good enough for testing; how to handle it properly in production remains to be investigated
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
        image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          - --token-ttl=5400
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        hostPath:
          path: /opt/certs
          type: Directory
        #secret:
        #  secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

4. Install the dashboard

kubectl apply -f kubernetes-dashboard.yaml
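
After applying the manifest, confirm that the dashboard pod is running and that the NodePort service is exposed (the k8s-app=kubernetes-dashboard label comes from the manifest above):

kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard -o wide
kubectl get svc -n kube-system kubernetes-dashboard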

5. Grant the dashboard service account cluster-admin permissions

Without this grant the dashboard will report authorization errors. Create kubernetes-dashboard-admin.rbac.yaml:

# ~] vi  kubernetes-dashboard-admin.rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

kubectl apply -f  kubernetes-dashboard-admin.rbac.yaml

6. Access the dashboard [token login]

  1. Get a login token [master node]
kubectl -n kube-system get secret | grep kubernetes-dashboard-admin|awk '{print "secret/"$1}'|xargs kubectl describe -n kube-system|grep token:|awk -F : '{print $2}'|xargs echo

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1xN25rNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijc2OGY4OTRmLWM2NjgtMTFlOC1hZWVjLWZhMTYzZTllZGZjYyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.EAWV6UEZHHNorRLdZzx7MBYzoWYtviRxQXL6A35Np-LXD51v9zq0AEGOPlCuhVUwVZWDqKS131G-Fz67heLCt8H-R0cplOryp6Wnn1K9LzmbdCqAf-7q5VeY3j0Xj7paxdpAX70pSGGLWZrw6BGxrHetGDS2AoV3zaoTXWqn3PYdk-rtEnvTcvPwQWShR1BWI749IXwbOnv3g8f6kHfUQZ9xP6duKQ422PHYdreRqrq9Aym-Nv3PFHhvItWBeGhy1Vr39AOB4vECGAP3tS4-j5mXqj4enCVEjgmbgQ6u__qPT-m-kXcjQyXc1BpyTdr3EzLBQYc4gAi8uIl99CTnEQ
  2. Use the token obtained above to log in to the dashboard:
    https://10.120.67.25:30001
    [screenshot: dashboard login page]

  3. After a successful login the overview looks like this:
    [screenshot: dashboard overview]
    That completes this first post: the cluster now has three nodes, one master and two workers. There are many concepts involved, and three namespaces are created by default; what these concepts mean and how to use them is the goal of the experiments to come.
    [screenshot: dashboard namespaces]
