Monitoring a Raspberry Pi k8s Cluster with Prometheus (CentOS)

Prometheus is the go-to choice for monitoring a k8s cluster, but installing it on a Raspberry Pi k8s cluster the same way as on an x86 cluster requires a lot of adaptation work and is not recommended. Instead I recommend https://github.com/carlosedp/cluster-monitoring, an excellent project on GitHub. After repeated testing it works well on a Raspberry Pi k8s cluster; here is a brief walkthrough of the installation process.

Preparation

Environment

  • A Raspberry Pi k8s cluster: 3 nodes is ideal, but a single node also works
[root@pi4-master01 ~]# kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE     VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                   KERNEL-VERSION    CONTAINER-RUNTIME
pi4-master01   Ready    master   43h     v1.18.20   10.168.1.101   <none>        CentOS Linux 7 (AltArch)   5.4.72-v8.1.el7   docker://19.3.8
pi4-node01     Ready    node     43h     v1.18.20   10.168.1.102   <none>        CentOS Linux 7 (AltArch)   5.4.72-v8.1.el7   docker://19.3.8
pi4-node02     Ready    node     5h44m   v1.18.20   10.168.1.103   <none>        CentOS Linux 7 (AltArch)   5.4.72-v8.1.el7   docker://19.3.8
  • helm and the nginx ingress controller are already installed on the cluster
[root@pi4-master01 ~]# helm version
version.BuildInfo{Version:"v3.5.0", GitCommit:"32c22239423b3b4ba6706d450bd044baffdcf9e6", GitTreeState:"clean", GoVersion:"go1.15.6"}
[root@pi4-master01 ~]# helm list
NAME         	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART               	APP VERSION
ingress-nginx	default  	1       	2021-11-20 13:09:11.290130941 +0800 CST	deployed	ingress-nginx-3.39.0	0.49.3     
[root@pi4-master01 ~]# kubectl get svc
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
ingress-nginx-controller             ClusterIP   10.109.127.239   <none>        80/TCP,443/TCP   4h25m
ingress-nginx-controller-admission   ClusterIP   10.110.214.58    <none>        443/TCP          4h25m
kubernetes                           ClusterIP   10.96.0.1        <none>        443/TCP          43h
  • A storage class is installed on the cluster and set as the default (one way to set this up is sketched right after this list)
[root@pi4-master01 ~]# kubectl get storageclass
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  114m
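
If your cluster does not yet have a default storage class, one possible way to reach the setup shown above (assuming Rancher's local-path-provisioner, which is what the output shows; check the project for the current manifest URL) is:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
# mark local-path as the default storage class
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'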

Download the installation files

[root@pi4-master01 k8s]# wget https://github.com/carlosedp/cluster-monitoring/archive/v0.39.0.tar.gz
[root@pi4-master01 k8s]# tar -zxf v0.39.0.tar.gz && cd cluster-monitoring-0.39.0/

Build

Environment

Use make vendor to install the build tooling. The first run will hit a few snags; just handle them as the error messages indicate.

[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
-bash: make: command not found
# make needs to be installed
[root@pi4-master01 cluster-monitoring-0.39.0]# yum install -y make

[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
make: go: command not found
make: go: command not found
make: go: command not found
Installing jsonnet-bundler
make: go: command not found
make: *** [/bin/jb] Error 127
# Go needs to be installed
[root@pi4-master01 cluster-monitoring-0.39.0]# cd ..
[root@pi4-master01 k8s]# wget https://studygolang.com/dl/golang/go1.17.3.linux-arm64.tar.gz
[root@pi4-master01 k8s]# tar -zxf go1.17.3.linux-arm64.tar.gz
[root@pi4-master01 k8s]# mv go /usr/local
# Add /usr/local/go/bin to the PATH environment variable
[root@pi4-master01 k8s]# export PATH=$PATH:/usr/local/go/bin

# Continue
[root@pi4-master01 k8s]# cd cluster-monitoring-0.39.0
[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
Installing jsonnet-bundler
go get: module github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb: Get "https://proxy.golang.org/github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb/@v/list": dial tcp 142.251.42.241:443: i/o timeout
make: *** [/root/go/bin/jb] Error 1
# Set the GOPROXY environment variable so that goproxy.cn is used instead of proxy.golang.org
[root@pi4-master01 cluster-monitoring-0.39.0]# export GOPROXY=https://goproxy.cn,direct

# Continue
[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
Installing jsonnet-bundler
go: downloading github.com/jsonnet-bundler/jsonnet-bundler v0.4.0
go: downloading github.com/fatih/color v1.7.0
go: downloading github.com/pkg/errors v0.8.0
go: downloading gopkg.in/alecthomas/kingpin.v2 v2.2.6
go: downloading github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf
go: downloading github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/fatih/color v1.13.0
go: downloading github.com/mattn/go-isatty v0.0.6
go: downloading github.com/mattn/go-colorable v0.0.9
go: downloading github.com/mattn/go-isatty v0.0.14
go: downloading github.com/mattn/go-colorable v0.1.11
go: downloading golang.org/x/sys v0.0.0-20190310054646-10058d7d4faa
go: downloading github.com/alecthomas/units v0.0.0-20210927113745-59d0afb8317a
go: downloading github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751
go: downloading golang.org/x/sys v0.0.0-20211117180635-dee7805ff2e1
go get: installing executables with 'go get' in module mode is deprecated.
        Use 'go install pkg@version' instead.
        For more information, see https://golang.org/doc/go-get-install-deprecation
        or run 'go help get' or 'go help install'.
rm -rf vendor
/root/go/bin/jb install
GET https://github.com/coreos/kube-prometheus/archive/5a84ac52c7517a420b3bdf3cd251e8abce59a300.tar.gz 200
GET https://github.com/coreos/prometheus-operator/archive/d0a871b710de7b764c05ced98dbd1eb32a681790.tar.gz 200
GET https://github.com/ksonnet/ksonnet-lib/archive/0d2f82676817bbf9e4acf6495b2090205f323b9f.tar.gz 200
GET https://github.com/coreos/etcd/archive/747ff75c96df87530bcd8b6b02d1160c5500bf4e.tar.gz 200
GET https://github.com/kubernetes-monitoring/kubernetes-mixin/archive/2beabb38d3241eb5da5080cbeb648a0cd1e3cbc2.tar.gz 200
GET https://github.com/kubernetes/kube-state-metrics/archive/cce1e3309ab2f42953933e441cbb20b54d986551.tar.gz 200
GET https://github.com/kubernetes/kube-state-metrics/archive/cce1e3309ab2f42953933e441cbb20b54d986551.tar.gz 200
GET https://github.com/prometheus/node_exporter/archive/b9c96706a7425383902b6143d097cf6d7cfd1960.tar.gz 200
GET https://github.com/prometheus/prometheus/archive/c9565f08aa4dbd53164ec1b75ea401a70feb1506.tar.gz 200
GET https://github.com/brancz/kubernetes-grafana/archive/57b4365eacda291b82e0d55ba7eec573a8198dda.tar.gz 200
GET https://github.com/grafana/jsonnet-libs/archive/b9cc0f3529833096c043084c04bc7b3562a134c4.tar.gz 200
GET https://github.com/grafana/grafonnet-lib/archive/5736b62831d779e28a8344646aee1f72b1fa1d90.tar.gz 200
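
Note that the PATH and GOPROXY exports above only affect the current shell. To keep them across logins, they can be persisted, for example in root's bash profile (a minimal sketch; adjust to your shell setup):

cat >> /root/.bash_profile <<'EOF'
export PATH=$PATH:/usr/local/go/bin
export GOPROXY=https://goproxy.cn,direct
EOF
source /root/.bash_profile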

Compile

Run make to compile and generate the manifests.

root@pi4-master01:~/k8s/cluster-monitoring-0.39.0# make
Installing jsonnet
go: downloading github.com/google/go-jsonnet v0.17.0
go: downloading github.com/fatih/color v1.9.0
go: downloading github.com/mattn/go-isatty v0.0.11
go: downloading github.com/mattn/go-colorable v0.1.4
go: downloading golang.org/x/sys v0.0.0-20191026070338-33540a1f6037
go get: installing executables with 'go get' in module mode is deprecated.
        Use 'go install pkg@version' instead.
        For more information, see https://golang.org/doc/go-get-install-deprecation
        or run 'go help get' or 'go help install'.
go get: installing executables with 'go get' in module mode is deprecated.
        Use 'go install pkg@version' instead.
        For more information, see https://golang.org/doc/go-get-install-deprecation
        or run 'go help get' or 'go help install'.
go: downloading github.com/brancz/gojsontoyaml v0.1.0
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading github.com/ghodss/yaml v1.0.0
go get: installing executables with 'go get' in module mode is deprecated.
        Use 'go install pkg@version' instead.
        For more information, see https://golang.org/doc/go-get-install-deprecation
        or run 'go help get' or 'go help install'.
rm -rf manifests
./scripts/build.sh main.jsonnet /root/go/bin/jsonnet
using jsonnet from arg
+ set -o pipefail
+ rm -rf manifests
+ mkdir -p manifests/setup
+ /root/go/bin/jsonnet -J vendor -m manifests main.jsonnet
+ xargs '-I{}' sh -c 'cat {} | $(go env GOPATH)/bin/gojsontoyaml > {}.yaml; rm -f {}' -- '{}'
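
Before applying anything, an optional sanity check that the build produced what the next step expects: the generated YAML should be in manifests/, with the CRD and operator manifests under manifests/setup/.

ls manifests/setup/
ls manifests/ | wc -l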

Possible issues

During the environment-preparation stage, network problems may cause the setup to fail and require manual fixes. Possible issues and their solutions are listed below for reference.

[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
…………
jb: error: failed to install packages: downloading: exec: "git": executable file not found in $PATH
make: *** [vendor] Error 1
# git needs to be installed (this error appears because the direct GET download failed and jb fell back to cloning via git; with a good network connection git may not be needed at all)
[root@pi4-master01 cluster-monitoring-0.39.0]# yum install -y git

# Continue
[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
…………
/root/go/bin/jb install
make: /root/go/bin/jb: command not found
make: *** [vendor] Error 127

[root@pi4-master01 cluster-monitoring-0.39.0]# mkdir -p /root/go/bin/
[root@pi4-master01 cluster-monitoring-0.39.0]# cp /usr/local/go/bin/jb /root/go/bin/jb
[root@pi4-master01 cluster-monitoring-0.39.0]# make vendor
…………
--: /root/go/bin/gojsontoyaml: No such file or directory

[root@pi4-master01 cluster-monitoring-0.39.0]# mkdir -p /root/go/bin/
[root@pi4-master01 cluster-monitoring-0.39.0]# cp /usr/local/go/bin/gojsontoyaml /root/go/bin/

Installation

First run kubectl apply -f manifests/setup/ to install the CRDs and the operator, then run kubectl apply -f manifests to install Prometheus and the rest of the monitoring stack.
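
If the second apply runs before the CRDs from the first step have been registered, kubectl may report "no matches for kind" errors. One optional way to sequence the two steps explicitly (the resource names below match the CRDs created in the output that follows):

kubectl apply -f manifests/setup/
# wait until the Prometheus CRD is established and the operator deployment is ready
kubectl wait --for=condition=established --timeout=60s crd/prometheuses.monitoring.coreos.com
kubectl -n monitoring rollout status deployment/prometheus-operator
kubectl apply -f manifests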

[root@pi4-master01 cluster-monitoring-0.39.0]# kubectl apply -f manifests/setup/
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created

[root@pi4-master01 cluster-monitoring-0.39.0]#  kubectl apply -f manifests
alertmanager.monitoring.coreos.com/main created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
secret/grafana-config created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-coredns-dashboard created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-kubernetes-cluster-dashboard created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-dashboard created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
ingress.extensions/alertmanager-main created
ingress.extensions/grafana created
ingress.extensions/prometheus-k8s created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
service/kube-controller-manager-prometheus-discovery created
service/kube-dns-prometheus-discovery created
service/kube-scheduler-prometheus-discovery created
servicemonitor.monitoring.coreos.com/prometheus-operator created
prometheus.monitoring.coreos.com/k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created

Usage

Check the installation status

[root@pi4-master01 cluster-monitoring-0.39.0]# kubectl get po -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          3m3s
grafana-676bcb5687-j69wd               1/1     Running   0          3m1s
kube-state-metrics-96bf99844-fvzwq     3/3     Running   0          3m
node-exporter-2xpnn                    2/2     Running   0          2m59s
node-exporter-7hwcd                    2/2     Running   0          2m59s
node-exporter-vzh66                    2/2     Running   0          2m59s
prometheus-adapter-f78c4f4ff-6sscd     1/1     Running   0          2m58s
prometheus-k8s-0                       3/3     Running   1          2m56s
prometheus-operator-6b8868d698-rrnft   2/2     Running   0          4m41s
[root@pi4-master01 cluster-monitoring-0.39.0]# kubectl get svc -n monitoring
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       ClusterIP   10.111.213.151   <none>        9093/TCP                     3m44s
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   3m44s
grafana                 ClusterIP   10.102.149.77    <none>        3000/TCP                     3m42s
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP            3m41s
node-exporter           ClusterIP   None             <none>        9100/TCP                     3m40s
prometheus-adapter      ClusterIP   10.104.6.226     <none>        443/TCP                      3m40s
prometheus-k8s          ClusterIP   10.105.86.97     <none>        9090/TCP                     3m37s
prometheus-operated     ClusterIP   None             <none>        9090/TCP                     3m38s
prometheus-operator     ClusterIP   None             <none>        8443/TCP                     5m23s
[root@pi4-master01 cluster-monitoring-0.39.0]# kubectl get ingress -n monitoring
NAME                CLASS    HOSTS                              ADDRESS          PORTS     AGE
alertmanager-main   <none>   alertmanager.192.168.15.15.nip.io   10.109.127.239   80, 443   4m
grafana             <none>   grafana.192.168.15.15.nip.io        10.109.127.239   80, 443   3m59s
prometheus-k8s      <none>   prometheus.192.168.15.15.nip.io     10.109.127.239   80, 443   3m59s
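
Because prometheus-adapter registers the v1beta1.metrics.k8s.io APIService (created above), resource metrics should also become available through kubectl shortly after the pods are running; an optional quick check:

kubectl top nodes
kubectl top pods -n monitoring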

Accessing the systems

Preparation

Before accessing the systems, the following issues need to be taken care of:

  • IP-to-domain mapping
    Before accessing Prometheus, Grafana and Alertmanager, the client machine used for browsing needs the following mapping between the node IP and the relevant domain names:
10.168.1.101 alertmanager.192.168.15.15.nip.io grafana.192.168.15.15.nip.io prometheus.192.168.15.15.nip.io

Tip: on Windows 10 or Windows 7 the file that maintains these IP-to-domain mappings is C:\Windows\System32\drivers\etc\hosts; on Linux or macOS the equivalent file is /etc/hosts (see the example after this list).

  • Security warning: by default, Prometheus, Grafana and Alertmanager can only be accessed over HTTPS, and the default server certificate is self-signed, so the browser reports a security risk on first access. This can simply be ignored; just proceed.

For example, when accessing Prometheus:

Click the "Advanced" button; it changes to "Hide details", and below it a prompt appears: "Proceed to prometheus.192.168.15.15.nip.io (unsafe)".

That prompt is a link; click it to enter Prometheus.
Grafana and Alertmanager show the same security warning when accessed; handle it exactly as for Prometheus.
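
For the IP-to-domain mapping above, on a Linux or macOS client the entry can be appended to /etc/hosts, for example:

echo "10.168.1.101 alertmanager.192.168.15.15.nip.io grafana.192.168.15.15.nip.io prometheus.192.168.15.15.nip.io" | sudo tee -a /etc/hosts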

Prometheus

https://prometheus.192.168.15.15.nip.io

Grafana

https://grafana.192.168.15.15.nip.io

The default username and password are both admin; after the first successful login you are prompted to change the password.

You are then taken to the Grafana home page.

From the home page, click the Dashboards entry in the left sidebar.

點"Manage"菜單,顯示如下

Open the "Default" folder and you will see that many dashboards are already built in; feel free to click through the ones that interest you.

Among the default dashboards is Kubernetes cluster monitoring (via Prometheus), which gives a comprehensive view of the current state of the Raspberry Pi k8s cluster.

Alertmanager

https://alertmanager.192.168.15.15.nip.io/

Other features

At this point the Prometheus monitoring stack has been deployed successfully. For now it is merely usable, though; it can be made more convenient. Below are a few aspects of this open-source project that can be customized, for reference.

Add temperature monitoring

Take another look at the Kubernetes cluster monitoring (via Prometheus) dashboard in Grafana.

By default there is no temperature monitoring; it needs to be enabled separately by editing vars.jsonnet in the project directory.

Find the enabled field under armExporter and change it from false to true, then run make and kubectl apply -f manifests again.
After about five minutes, check the Kubernetes cluster monitoring (via Prometheus) dashboard again and you will see that temperature data is being collected.
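
A sketch of that edit-rebuild-apply loop (the grep only locates the block; the exact layout of vars.jsonnet may differ between releases, so verify the field before changing it):

grep -n -A 3 armExporter vars.jsonnet
vi vars.jsonnet          # set enabled: true for the armExporter entry
make
kubectl apply -f manifests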

Configure the domain name

By default, the main domain configured for Prometheus, Grafana and Alertmanager is 192.168.15.15.nip.io. This domain can also be changed, again by editing vars.jsonnet in the project directory.
Change the value of suffixDomain to the desired domain, for example pi4k8s.com, then run make and kubectl apply -f manifests again. Once that is done, the domains have been updated:

[root@pi4-master01 cluster-monitoring-0.40.0]# kubectl get ingress -n monitoring
NAME                CLASS    HOSTS                     ADDRESS          PORTS     AGE
alertmanager-main   <none>   alertmanager.pi4k8s.com   10.109.127.239   80, 443   3h21m
grafana             <none>   grafana.pi4k8s.com        10.109.127.239   80, 443   3h21m
prometheus-k8s      <none>   prometheus.pi4k8s.com     10.109.127.239   80, 443   3h21m

Switch from HTTPS to HTTP

By default, Prometheus, Grafana and Alertmanager can only be accessed over HTTPS. This can be switched to HTTP, again by editing vars.jsonnet in the project directory.

Change the value of TLSingress to false, then run make and kubectl apply -f manifests again. Once that is done, the ingresses no longer expose port 443:

[root@pi4-master01 cluster-monitoring-0.40.0]# kubectl get ingress -n monitoring
NAME                CLASS    HOSTS                     ADDRESS          PORTS   AGE
alertmanager-main   <none>   alertmanager.pi4k8s.com   10.109.127.239   80      3h39m
grafana             <none>   grafana.pi4k8s.com        10.109.127.239   80      3h39m
prometheus-k8s      <none>   prometheus.pi4k8s.com     10.109.127.239   80      3h39m
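
An optional check from a cluster node that plain HTTP now answers, using the ingress controller's ClusterIP and one of the hostnames from the output above:

curl -I -H 'Host: prometheus.pi4k8s.com' http://10.109.127.239/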

Enable persistent storage

By default, Prometheus and Grafana have no persistent storage: once a pod is rebuilt, user accounts, dashboards and monitoring data are all lost. To enable persistence, edit vars.jsonnet in the project directory once more.

Set the prometheus and grafana values under enablePersistence to true, then run make and kubectl apply -f manifests again. Once that is done, the corresponding PVCs and PVs have been created:

root@pi4-master01:~# kubectl get pv,pvc -n monitoring
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                           STORAGECLASS   REASON   AGE
persistentvolume/pvc-4a17517f-f063-436c-94a4-01c35f925353   2Gi        RWO            Delete           Bound    monitoring/grafana-storage                      local-path              7s
persistentvolume/pvc-ea070bfd-d7f7-4692-b669-fea32fd17698   20Gi       RWO            Delete           Bound    monitoring/prometheus-k8s-db-prometheus-k8s-0   local-path              11s

NAME                                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/grafana-storage                      Bound    pvc-4a17517f-f063-436c-94a4-01c35f925353   2Gi        RWO            local-path     27s
persistentvolumeclaim/prometheus-k8s-db-prometheus-k8s-0   Bound    pvc-ea070bfd-d7f7-4692-b669-fea32fd17698   20Gi       RWO            local-path     15s
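
One optional way to confirm that data now survives a pod rebuild: delete the Prometheus pod and verify that the recreated pod binds the same PVC; dashboards, users and history should still be there afterwards.

kubectl -n monitoring delete pod prometheus-k8s-0
# the StatefulSet recreates the pod and re-attaches prometheus-k8s-db-prometheus-k8s-0
kubectl -n monitoring get pod prometheus-k8s-0
kubectl -n monitoring get pvc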

Summary

Deploying Prometheus monitoring on a k8s cluster is like giving developers x-ray vision: the state of every node in the cluster, including CPU, memory, storage and network usage, can be seen at a glance. Beyond monitoring the pods and jobs running on every node in real time, we can also build on Prometheus to cover many more monitoring scenarios.

When it comes to using a Raspberry Pi k8s cluster properly, let's start by installing the Prometheus monitoring system.
