Monitoring a Raspberry Pi k8s Cluster with Prometheus (CentOS)

Prometheus is the first choice for monitoring a k8s cluster, but if you install it on a Raspberry Pi k8s cluster by directly following the installation steps written for x86 clusters, the amount of adaptation work is considerable, and I don't recommend it. Instead, I recommend the GitHub project https://github.com/carlosedp/cluster-monitoring. After repeated testing it is basically usable on a Raspberry Pi k8s cluster, and below I briefly walk through the installation process.

Preparation

Environment

  • A Raspberry Pi k8s cluster: 3 nodes is ideal, though a single node also works
[root@pi4-master01 ~]# kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE     VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                   KERNEL-VERSION    CONTAINER-RUNTIME
pi4-master01   Ready    master   43h     v1.18.20   10.168.1.101   <none>        CentOS Linux 7 (AltArch)   5.4.72-v8.1.el7   docker://19.3.8
pi4-node01     Ready    node     43h     v1.18.20   10.168.1.102   <none>        CentOS Linux 7 (AltArch)   5.4.72-v8.1.el7   docker://19.3.8
pi4-node02     Ready    node     5h44m   v1.18.20   10.168.1.103   <none>        CentOS Linux 7 (AltArch)   5.4.72-v8.1.el7   docker://19.3.8
  • helm and nginx-ingress are already installed on the cluster (if not, see the setup sketch after this list)
[root@pi4-master01 ~]# helm version
version.BuildInfo{Version:"v3.5.0", GitCommit:"32c22239423b3b4ba6706d450bd044baffdcf9e6", GitTreeState:"clean", GoVersion:"go1.15.6"}
[root@pi4-master01 ~]# helm list
NAME         	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART               	APP VERSION
ingress-nginx	default  	1       	2021-11-20 13:09:11.290130941 +0800 CST	deployed	ingress-nginx-3.39.0	0.49.3     
[root@pi4-master01 ~]# kubectl get svc
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
ingress-nginx-controller             ClusterIP   10.109.127.239   <none>        80/TCP,443/TCP   4h25m
ingress-nginx-controller-admission   ClusterIP   10.110.214.58    <none>        443/TCP          4h25m
kubernetes                           ClusterIP   10.96.0.1        <none>        443/TCP          43h
  • A storage class is installed and set as the default
[root@pi4-master01 ~]# kubectl get storageclass
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  114m
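
If either prerequisite is missing, it can be set up roughly as follows. This is a minimal sketch rather than the exact commands used for this cluster: the chart version and the ClusterIP service type mirror the helm and kubectl output above, and the local-path storage class is assumed to already exist.

# Install ingress-nginx with helm (the service type override matches the ClusterIP service shown above)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --version 3.39.0 \
  --set controller.service.type=ClusterIP

# Mark an existing storage class (here: local-path) as the cluster default
kubectl patch storageclass local-path \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'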

Download the installation files

[root@pi4-master01 k8s]# wget https://github.com/carlosedp/cluster-monitoring/archive/v0.39.0.tar.gz
[root@pi4-master01 k8s]# tar -zxf v0.39.0.tar.gz && cd cluster-monitoring-0.39.0/

Build

Environment

Use make vendor to install the build tooling. The first run will hit a few snags; just follow the error messages to resolve them.

[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
-bash: make: command not found
# make needs to be installed
[root@pi4-master01 cluster-monitoring-0.39.0]# yum install -y make

[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
make: go: Command not found
make: go: Command not found
make: go: Command not found
Installing jsonnet-bundler
make: go: Command not found
make: *** [/bin/jb] Error 127
# go needs to be installed
[root@pi4-master01 cluster-monitoring-0.39.0]# cd ..
[root@pi4-master01 k8s]# wget https://studygolang.com/dl/golang/go1.17.3.linux-arm64.tar.gz
[root@pi4-master01 k8s]# tar -zxf go1.17.3.linux-arm64.tar.gz
[root@pi4-master01 k8s]# mv go /usr/local
# Add /usr/local/go/bin to the PATH environment variable
[root@pi4-master01 k8s]# export PATH=$PATH:/usr/local/go/bin

# Continue
[root@pi4-master01 k8s]# cd cluster-monitoring-0.39.0
[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
Installing jsonnet-bundler
go get: module github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb: Get "https://proxy.golang.org/github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb/@v/list": dial tcp 142.251.42.241:443: i/o timeout
make: *** [/root/go/bin/jb] Error 1
# Set the GOPROXY environment variable so that proxy.golang.org is replaced with goproxy.cn
[root@pi4-master01 cluster-monitoring-0.39.0]# export GOPROXY=https://goproxy.cn,direct

# Continue
[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
Installing jsonnet-bundler
go: downloading github.com/jsonnet-bundler/jsonnet-bundler v0.4.0
go: downloading github.com/fatih/color v1.7.0
go: downloading github.com/pkg/errors v0.8.0
go: downloading gopkg.in/alecthomas/kingpin.v2 v2.2.6
go: downloading github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf
go: downloading github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/fatih/color v1.13.0
go: downloading github.com/mattn/go-isatty v0.0.6
go: downloading github.com/mattn/go-colorable v0.0.9
go: downloading github.com/mattn/go-isatty v0.0.14
go: downloading github.com/mattn/go-colorable v0.1.11
go: downloading golang.org/x/sys v0.0.0-20190310054646-10058d7d4faa
go: downloading github.com/alecthomas/units v0.0.0-20210927113745-59d0afb8317a
go: downloading github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751
go: downloading golang.org/x/sys v0.0.0-20211117180635-dee7805ff2e1
go get: installing executables with 'go get' in module mode is deprecated.
        Use 'go install pkg@version' instead.
        For more information, see https://golang.org/doc/go-get-install-deprecation
        or run 'go help get' or 'go help install'.
rm -rf vendor
/root/go/bin/jb install
GET https://github.com/coreos/kube-prometheus/archive/5a84ac52c7517a420b3bdf3cd251e8abce59a300.tar.gz 200
GET https://github.com/coreos/prometheus-operator/archive/d0a871b710de7b764c05ced98dbd1eb32a681790.tar.gz 200
GET https://github.com/ksonnet/ksonnet-lib/archive/0d2f82676817bbf9e4acf6495b2090205f323b9f.tar.gz 200
GET https://github.com/coreos/etcd/archive/747ff75c96df87530bcd8b6b02d1160c5500bf4e.tar.gz 200
GET https://github.com/kubernetes-monitoring/kubernetes-mixin/archive/2beabb38d3241eb5da5080cbeb648a0cd1e3cbc2.tar.gz 200
GET https://github.com/kubernetes/kube-state-metrics/archive/cce1e3309ab2f42953933e441cbb20b54d986551.tar.gz 200
GET https://github.com/kubernetes/kube-state-metrics/archive/cce1e3309ab2f42953933e441cbb20b54d986551.tar.gz 200
GET https://github.com/prometheus/node_exporter/archive/b9c96706a7425383902b6143d097cf6d7cfd1960.tar.gz 200
GET https://github.com/prometheus/prometheus/archive/c9565f08aa4dbd53164ec1b75ea401a70feb1506.tar.gz 200
GET https://github.com/brancz/kubernetes-grafana/archive/57b4365eacda291b82e0d55ba7eec573a8198dda.tar.gz 200
GET https://github.com/grafana/jsonnet-libs/archive/b9cc0f3529833096c043084c04bc7b3562a134c4.tar.gz 200
GET https://github.com/grafana/grafonnet-lib/archive/5736b62831d779e28a8344646aee1f72b1fa1d90.tar.gz 200
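
The PATH and GOPROXY exports above only apply to the current shell session. If you expect to rerun make later, they can be persisted; a minimal sketch using a system-wide profile script (the file name go.sh is arbitrary):

# Persist the Go toolchain path and the GOPROXY mirror for future shells
cat > /etc/profile.d/go.sh <<'EOF'
export PATH=$PATH:/usr/local/go/bin
export GOPROXY=https://goproxy.cn,direct
EOF
source /etc/profile.d/go.sh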

Compile

Compile with make.

root@pi4-master01:~/k8s/cluster-monitoring-0.39.0# make
Installing jsonnet
go: downloading github.com/google/go-jsonnet v0.17.0
go: downloading github.com/fatih/color v1.9.0
go: downloading github.com/mattn/go-isatty v0.0.11
go: downloading github.com/mattn/go-colorable v0.1.4
go: downloading golang.org/x/sys v0.0.0-20191026070338-33540a1f6037
go get: installing executables with 'go get' in module mode is deprecated.
        Use 'go install pkg@version' instead.
        For more information, see https://golang.org/doc/go-get-install-deprecation
        or run 'go help get' or 'go help install'.
go get: installing executables with 'go get' in module mode is deprecated.
        Use 'go install pkg@version' instead.
        For more information, see https://golang.org/doc/go-get-install-deprecation
        or run 'go help get' or 'go help install'.
go: downloading github.com/brancz/gojsontoyaml v0.1.0
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading github.com/ghodss/yaml v1.0.0
go get: installing executables with 'go get' in module mode is deprecated.
        Use 'go install pkg@version' instead.
        For more information, see https://golang.org/doc/go-get-install-deprecation
        or run 'go help get' or 'go help install'.
rm -rf manifests
./scripts/build.sh main.jsonnet /root/go/bin/jsonnet
using jsonnet from arg
+ set -o pipefail
+ rm -rf manifests
+ mkdir -p manifests/setup
+ /root/go/bin/jsonnet -J vendor -m manifests main.jsonnet
+ xargs '-I{}' sh -c 'cat {} | $(go env GOPATH)/bin/gojsontoyaml > {}.yaml; rm -f {}' -- '{}'
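
When the build finishes, the generated Kubernetes manifests are written to the manifests/ directory, with the namespace, CRDs and operator under manifests/setup/. A quick check:

# The setup subdirectory holds the namespace, CRDs and the operator; the rest is the monitoring stack
ls manifests/setup/
ls manifests/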

Possible issues

During environment preparation, network problems may cause some steps to fail and require manual intervention. Possible issues and their fixes are listed below for reference.

[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
…………
jb: error: failed to install packages: downloading: exec: "git": executable file not found in $PATH
make: *** [vendor] Error 1
# git needs to be installed (this error occurs because the direct download failed and jb fell back to cloning via git; with a good network connection git may not be needed, so it depends on your situation)
[root@pi4-master01 cluster-monitoring-0.39.0]# yum install -y git

# Continue
[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
…………
/root/go/bin/jb install
make: /root/go/bin/jb: Command not found
make: *** [vendor] Error 127

[root@pi4-master01 cluster-monitoring-0.39.0]# mkdir -p /root/go/bin/
[root@pi4-master01 cluster-monitoring-0.39.0]# cp /usr/local/go/bin/jb /root/go/bin/jb
[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
…………
--: /root/go/bin/gojsontoyaml: No such file or directory

[root@pi4-master01 cluster-monitoring-0.39.0]# mkdir -p /root/go/bin/
[root@pi4-master01 cluster-monitoring-0.39.0]# cp /usr/local/go/bin/gojsontoyaml /root/go/bin/

Install

First install the CRDs with kubectl apply -f manifests/setup/, then install Prometheus and the rest of the stack with kubectl apply -f manifests. (An optional step to wait for the CRDs between the two applies is sketched below.)

[root@pi4-master01 cluster-monitoring-0.39.0]# kubectl apply -f manifests/setup/
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
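
Optionally, before applying the rest of the manifests, wait until the new CRDs are registered so the second apply does not race against them; a minimal sketch using two of the CRD names created above:

# Wait for the Prometheus Operator CRDs to be established before the second apply
kubectl wait --for=condition=Established --timeout=60s \
  crd/prometheuses.monitoring.coreos.com \
  crd/servicemonitors.monitoring.coreos.com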

[root@pi4-master01 cluster-monitoring-0.39.0]#  kubectl apply -f manifests
alertmanager.monitoring.coreos.com/main created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
secret/grafana-config created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-coredns-dashboard created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-kubernetes-cluster-dashboard created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-dashboard created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
ingress.extensions/alertmanager-main created
ingress.extensions/grafana created
ingress.extensions/prometheus-k8s created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
service/kube-controller-manager-prometheus-discovery created
service/kube-dns-prometheus-discovery created
service/kube-scheduler-prometheus-discovery created
servicemonitor.monitoring.coreos.com/prometheus-operator created
prometheus.monitoring.coreos.com/k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created

Usage

Verify the installation

[root@pi4-master01 cluster-monitoring-0.39.0]# kubectl get po -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          3m3s
grafana-676bcb5687-j69wd               1/1     Running   0          3m1s
kube-state-metrics-96bf99844-fvzwq     3/3     Running   0          3m
node-exporter-2xpnn                    2/2     Running   0          2m59s
node-exporter-7hwcd                    2/2     Running   0          2m59s
node-exporter-vzh66                    2/2     Running   0          2m59s
prometheus-adapter-f78c4f4ff-6sscd     1/1     Running   0          2m58s
prometheus-k8s-0                       3/3     Running   1          2m56s
prometheus-operator-6b8868d698-rrnft   2/2     Running   0          4m41s
[root@pi4-master01 cluster-monitoring-0.39.0]# kubectl get svc -n monitoring
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       ClusterIP   10.111.213.151   <none>        9093/TCP                     3m44s
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   3m44s
grafana                 ClusterIP   10.102.149.77    <none>        3000/TCP                     3m42s
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP            3m41s
node-exporter           ClusterIP   None             <none>        9100/TCP                     3m40s
prometheus-adapter      ClusterIP   10.104.6.226     <none>        443/TCP                      3m40s
prometheus-k8s          ClusterIP   10.105.86.97     <none>        9090/TCP                     3m37s
prometheus-operated     ClusterIP   None             <none>        9090/TCP                     3m38s
prometheus-operator     ClusterIP   None             <none>        8443/TCP                     5m23s
[root@pi4-master01 cluster-monitoring-0.39.0]# kubectl get ingress -n monitoring
NAME                CLASS    HOSTS                              ADDRESS          PORTS     AGE
alertmanager-main   <none>   alertmanager.192.168.15.15.nip.io   10.109.127.239   80, 443   4m
grafana             <none>   grafana.192.168.15.15.nip.io        10.109.127.239   80, 443   3m59s
prometheus-k8s      <none>   prometheus.192.168.15.15.nip.io     10.109.127.239   80, 443   3m59s
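
If some pods are still starting, you can block until everything in the namespace is Ready instead of polling manually; a minimal sketch:

# Wait for all monitoring pods to become Ready (adjust the timeout as needed)
kubectl -n monitoring wait --for=condition=Ready pod --all --timeout=300s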

Accessing the systems

Preparation

Before accessing the systems, the following points need to be handled.

  • IP-to-domain mapping
    Before accessing the Prometheus, Grafana, and Alertmanager UIs, set up the following mapping between the IP address and the related domain names on the client machine you will access them from:
10.168.1.101 alertmanager.192.168.15.15.nip.io grafana.192.168.15.15.nip.io prometheus.192.168.15.15.nip.io

Tip: on Windows 10 or Windows 7, the file that maintains these IP-to-domain mappings is C:\Windows\System32\drivers\etc\hosts.

  • Handling the security warning: the default installation serves Prometheus, Grafana, and Alertmanager over HTTPS only, and the server certificate is self-signed, so the browser reports a security risk on first access. You can choose to ignore it and proceed.

For example, when accessing Prometheus:

Click the "Advanced" button; it changes to "Hide details", and a link appears below it that reads "Proceed to prometheus.192.168.15.15.nip.io (unsafe)".

Click that link to enter the Prometheus UI.
Accessing Grafana and Alertmanager shows the same security warning; handle it just as with Prometheus. A command-line sanity check is sketched below.
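
On a Linux or macOS client, the same mapping can be added from a terminal and access can be sanity-checked without a browser; a minimal sketch (run with root privileges; curl -k accepts the self-signed certificate):

# Append the IP-to-domain mapping to the client's hosts file
echo "10.168.1.101 alertmanager.192.168.15.15.nip.io grafana.192.168.15.15.nip.io prometheus.192.168.15.15.nip.io" >> /etc/hosts

# An HTTP response (e.g. 200 or 302) means the ingress is reachable
curl -k -I https://prometheus.192.168.15.15.nip.io
curl -k -I https://grafana.192.168.15.15.nip.io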

Prometheus

https://prometheus.192.168.15.15.nip.io

Grafana

https://grafana.192.168.15.15.nip.io

The default username and password are both admin; after the first successful login you will be prompted to change the password.

After logging in you land on the Grafana home page. Click Dashboards in the left sidebar, then the Manage menu, and open the Default folder: it already contains a large number of built-in dashboards that are worth exploring. One of them, Kubernetes cluster monitoring (via Prometheus), gives a comprehensive view of the current state of the Raspberry Pi k8s cluster.

Alertmanager

https://alertmanager.192.168.15.15.nip.io/

Other features

At this point the Prometheus monitoring stack is deployed successfully. For now it is merely usable; it can be made considerably more useful. Below are a few aspects of this open-source project that can be adjusted, for reference.

Add temperature monitoring

Recall the Kubernetes cluster monitoring (via Prometheus) dashboard in Grafana: by default it contains no temperature monitoring. Enabling it requires an extra step, namely editing vars.jsonnet in the project directory.

Find the enabled field under armExporter, change it from false to true, then rerun make and kubectl apply -f manifests. After about 5 minutes, check the Kubernetes cluster monitoring (via Prometheus) dashboard again and you will see that temperature data is being collected. The rebuild commands are sketched below.

Configure the domain name

By default, Prometheus, Grafana, and Alertmanager are configured under the base domain 192.168.15.15.nip.io. This domain can also be changed, again by editing vars.jsonnet in the project directory.
Change the value of suffixDomain to the desired domain, for example pi4k8s.com, then rerun make and kubectl apply -f manifests. Afterwards, the ingress hostnames reflect the new domain:

[root@pi4-master01 cluster-monitoring-0.40.0]# kubectl get ingress -n monitoring
NAME                CLASS    HOSTS                     ADDRESS          PORTS     AGE
alertmanager-main   <none>   alertmanager.pi4k8s.com   10.109.127.239   80, 443   3h21m
grafana             <none>   grafana.pi4k8s.com        10.109.127.239   80, 443   3h21m
prometheus-k8s      <none>   prometheus.pi4k8s.com     10.109.127.239   80, 443   3h21m
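
Unlike nip.io, pi4k8s.com is not a wildcard DNS service, so client machines also need hosts entries for the new hostnames; a minimal sketch, reusing the master node IP from earlier:

# On the client machine, map the custom hostnames to the node serving the ingress
echo "10.168.1.101 alertmanager.pi4k8s.com grafana.pi4k8s.com prometheus.pi4k8s.com" >> /etc/hosts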

Switch from HTTPS to HTTP access

By default, Prometheus, Grafana, and Alertmanager are served over HTTPS only. They can be switched to plain HTTP access, again by editing vars.jsonnet in the project directory.

Change the value of TLSingress to false, then rerun make and kubectl apply -f manifests. Afterwards, the ingresses no longer expose port 443:

[root@pi4-master01 cluster-monitoring-0.40.0]# kubectl get ingress -n monitoring
NAME                CLASS    HOSTS                     ADDRESS          PORTS   AGE
alertmanager-main   <none>   alertmanager.pi4k8s.com   10.109.127.239   80      3h39m
grafana             <none>   grafana.pi4k8s.com        10.109.127.239   80      3h39m
prometheus-k8s      <none>   prometheus.pi4k8s.com     10.109.127.239   80      3h39m

Enable persistent storage

By default, Prometheus and Grafana have no persistent storage: once a pod is recreated, user accounts, dashboards, and collected monitoring data are all lost. To enable persistence, once again edit vars.jsonnet in the project directory.

Set both the prometheus and grafana values under enablePersistence to true, then rerun make and kubectl apply -f manifests. Afterwards, the corresponding PVCs and PVs appear:

root@pi4-master01:~# kubectl get pv,pvc -n monitoring
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                           STORAGECLASS   REASON   AGE
persistentvolume/pvc-4a17517f-f063-436c-94a4-01c35f925353   2Gi        RWO            Delete           Bound    monitoring/grafana-storage                      local-path              7s
persistentvolume/pvc-ea070bfd-d7f7-4692-b669-fea32fd17698   20Gi       RWO            Delete           Bound    monitoring/prometheus-k8s-db-prometheus-k8s-0   local-path              11s

NAME                                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/grafana-storage                      Bound    pvc-4a17517f-f063-436c-94a4-01c35f925353   2Gi        RWO            local-path     27s
persistentvolumeclaim/prometheus-k8s-db-prometheus-k8s-0   Bound    pvc-ea070bfd-d7f7-4692-b669-fea32fd17698   20Gi       RWO            local-path     15s
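
A quick way to confirm that the data really survives a pod restart; a minimal sketch using the pod and PVC names from the output above:

# Delete the Prometheus pod; the StatefulSet recreates it and re-binds its PVC,
# so the collected metrics are preserved across the restart
kubectl -n monitoring delete pod prometheus-k8s-0
kubectl -n monitoring get pods
kubectl -n monitoring get pvc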

Summary

Deploying Prometheus monitoring on a k8s cluster is like giving developers X-ray vision: the state of every node, including CPU, memory, storage, and network usage, becomes visible at a glance. Besides monitoring the pods and jobs running on every node in real time, Prometheus can also be extended to cover many more monitoring scenarios.

To get the most out of a Raspberry Pi k8s cluster, start by installing the Prometheus monitoring system.
