Deploying Prometheus Monitoring on an Existing Kubernetes Cluster

Preface:

The Prometheus website describes a number of installation methods: https://prometheus.io/docs/prometheus/latest/installation/

Here I choose to install via the Kube-Prometheus stack.

Kube-Prometheus project address: https://github.com/prometheus-operator/kube-prometheus/

Prometheus architecture diagram

Component breakdown:

  • Prometheus Server: the core component of the Prometheus ecosystem. It scrapes and stores time-series data, and provides data querying plus configuration management for alerting rules.
  • Alertmanager: the alerting component of the ecosystem. Prometheus Server sends alerts to Alertmanager, which then routes them, according to its routing configuration, to the designated people or groups. Alertmanager supports notification channels such as email, webhook, WeChat, DingTalk and SMS.
  • Grafana: visualizes the data, making it easier to query and observe.
  • Push Gateway: Prometheus itself pulls data, but some jobs are short-lived and their data can be lost if no scrape happens in time. Push Gateway addresses this: clients push their data to the Push Gateway, and Prometheus then pulls that data from it (see the push example after this list).
  • Exporters: collect the actual monitoring data. For example, host metrics can be collected with node_exporter and MySQL metrics with mysqld_exporter. An exporter exposes an endpoint, typically /metrics, from which Prometheus scrapes the data.
  • PromQL: strictly speaking not a Prometheus component, but the language used to query its data. Just as relational databases are queried with SQL and Loki with LogQL, Prometheus data is queried with PromQL (see the query examples after this list).
  • Service Discovery: automatic discovery of monitoring targets. Commonly used mechanisms include Kubernetes-, Consul-, Eureka- and file-based discovery.
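
A few illustrative PromQL queries (the node_exporter metric names are standard, but these queries are examples rather than part of this deployment):

up == 0                                             # targets whose last scrape failed
rate(node_cpu_seconds_total{mode="idle"}[5m])       # per-second idle-CPU rate over the last 5 minutes
sum by (instance) (node_memory_MemAvailable_bytes)  # available memory per host

Pushing a metric to a Push Gateway is a plain HTTP request; a sketch (pushgateway.example.org:9091 is a placeholder address):

echo "some_metric 42" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job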

On the project page above, find the Kube-Prometheus Stack release that matches your Kubernetes version.

Clone the kube-prometheus repository

[root@k8s-master01 prometheus]# git clone -b release-0.8 https://github.com/prometheus-operator/kube-prometheus.git
Cloning into 'kube-prometheus'...
remote: Enumerating objects: 16466, done.
remote: Counting objects: 100% (387/387), done.
remote: Compressing objects: 100% (149/149), done.
remote: Total 16466 (delta 272), reused 292 (delta 222), pack-reused 16079
Receiving objects: 100% (16466/16466), 8.22 MiB | 86.00 KiB/s, done.
Resolving deltas: 100% (10602/10602), done.
[root@k8s-master01 prometheus]# cd kube-prometheus/manifests/
[root@k8s-master01 manifests]# kubectl apply -f setup/
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com configured
Warning: resource customresourcedefinitions/prometheuses.monitoring.coreos.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com configured
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
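
The Warning above simply means one of the CRDs was first created without kubectl apply; kubectl patches the missing annotation automatically, so it can be ignored. Before applying the remaining manifests, it is worth waiting for the CRDs to be registered; one generic way to check (nothing here is specific to this cluster):

[root@k8s-master01 manifests]# kubectl wait --for=condition=Established crd --all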

Check the status of the Operator container
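
One way to do so (the label selector is assumed from the default kube-prometheus manifests):

[root@k8s-master01 manifests]# kubectl get pod -n monitoring -l app.kubernetes.io/name=prometheus-operator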

Once the Operator container is up, the next step is to install the Prometheus stack

[root@k8s-master01 manifests]# vim  alertmanager-alertmanager.yaml

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  labels:
    alertmanager: main
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.21.0
  name: main
  namespace: monitoring
spec:
  image: quay.io/prometheus/alertmanager:v0.21.0
  nodeSelector:
    kubernetes.io/os: linux
  podMetadata:
    labels:
      app.kubernetes.io/component: alert-router
      app.kubernetes.io/name: alertmanager
      app.kubernetes.io/part-of: kube-prometheus
      app.kubernetes.io/version: 0.21.0
  replicas: 1 # for this demo only: to reduce resource consumption we lower the replica count to 1, i.e. a single node; in production this can be set to 3
  resources:
    limits:
      cpu: 100m
      memory: 100Mi
    requests:
      cpu: 4m
      memory: 100Mi
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: alertmanager-main
  version: 0.21.0
[root@k8s-master01 manifests]# vim  prometheus-prometheus.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.26.0
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  alerting:  # alerting configuration
    alertmanagers:
    - apiVersion: v2
      name: alertmanager-main  # the Service name of Alertmanager
      namespace: monitoring
      port: web
  externalLabels: {}
  image: quay.io/prometheus/prometheus:v2.26.0
  nodeSelector:
    kubernetes.io/os: linux
  podMetadata:
    labels:
      app.kubernetes.io/component: prometheus
      app.kubernetes.io/name: prometheus
      app.kubernetes.io/part-of: kube-prometheus
      app.kubernetes.io/version: 2.26.0
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  probeNamespaceSelector: {}
  probeSelector: {}
  replicas: 2 # replicas for high availability; the default is 2
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: 2.26.0
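
Note that serviceMonitorSelector and serviceMonitorNamespaceSelector are both empty ({}), so this Prometheus instance will select every ServiceMonitor in every namespace. For reference, a minimal ServiceMonitor sketch (the example-app name, labels and port name are hypothetical):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app          # hypothetical name
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: example-app       # must match the labels on the target Service
  endpoints:
  - port: web                # named Service port that exposes /metrics
    interval: 30s            # scrape interval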

Change the image source for kube-state-metrics

[root@k8s-master01 manifests]# vim  kube-state-metrics-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.0.0
  name: kube-state-metrics
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: kube-state-metrics
      app.kubernetes.io/part-of: kube-prometheus
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/default-container: kube-state-metrics
      labels:
        app.kubernetes.io/component: exporter
        app.kubernetes.io/name: kube-state-metrics
        app.kubernetes.io/part-of: kube-prometheus
        app.kubernetes.io/version: 2.0.0
    spec:
      containers:
      - args:
        - --host=127.0.0.1
        - --port=8081
        - --telemetry-host=127.0.0.1
        - --telemetry-port=8082
        #image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.0.0
        image: bitnami/kube-state-metrics:2.0.0 #the default image is hosted on gcr.io, which is usually unreachable, so we pull the matching version from Docker Hub instead
        name: kube-state-metrics
        resources:
          limits:
            cpu: 100m
            memory: 250Mi
          requests:
            cpu: 10m
            memory: 190Mi
        securityContext:
          runAsUser: 65534
      - args:
        - --logtostderr
        - --secure-listen-address=:8443
        - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - --upstream=http://127.0.0.1:8081/
        image: quay.io/brancz/kube-rbac-proxy:v0.8.0
        name: kube-rbac-proxy-main
        ports:
        - containerPort: 8443
          name: https-main
        resources:
          limits:
            cpu: 40m
            memory: 40Mi
          requests:
            cpu: 20m
            memory: 20Mi
        securityContext:
          runAsGroup: 65532
          runAsNonRoot: true
          runAsUser: 65532
      - args:
        - --logtostderr
        - --secure-listen-address=:9443
        - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - --upstream=http://127.0.0.1:8082/
        image: quay.io/brancz/kube-rbac-proxy:v0.8.0
        name: kube-rbac-proxy-self
        ports:
        - containerPort: 9443
          name: https-self
        resources:
          limits:
            cpu: 20m
            memory: 40Mi
          requests:
            cpu: 10m
            memory: 20Mi
        securityContext:
          runAsGroup: 65532
          runAsNonRoot: true
          runAsUser: 65532
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: kube-state-metrics

# Get the kube-state-metrics image

https://hub.docker.com/r/bitnami/kube-state-metrics/tags?page=1&name=2.0.0
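
If the nodes can reach Docker Hub, pulling the image on each node is enough; a sketch (assuming Docker is the container runtime):

[root@k8s-master01 manifests]# docker pull bitnami/kube-state-metrics:2.0.0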

Once the relevant YAML files have been adjusted, simply apply them:

[root@k8s-master01 manifests]# kubectl apply -f .
[root@k8s-master01 manifests]# kubectl get pod -n monitoring 

[root@k8s-master01 manifests]# kubectl get svc -n monitoring #view the Grafana Service in the monitoring namespace

[root@k8s-master01 manifests]# kubectl edit svc grafana -n monitoring  #use edit to change the type to NodePort, allowing access from outside the cluster
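
The same change can also be made non-interactively with kubectl patch:

[root@k8s-master01 manifests]# kubectl patch svc grafana -n monitoring -p '{"spec":{"type":"NodePort"}}'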

[root@k8s-master01 manifests]# kubectl get svc -n monitoring #after the change, check the random NodePort exposed by the grafana Service for test access
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       ClusterIP   192.168.240.79   <none>        9093/TCP                     15h
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   15h
blackbox-exporter       ClusterIP   192.168.72.138   <none>        9115/TCP,19115/TCP           15h
grafana                 NodePort    192.168.89.66    <none>        3000:32621/TCP               15h
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP            15h
node-exporter           ClusterIP   None             <none>        9100/TCP                     15h
prometheus-adapter      ClusterIP   192.168.244.57   <none>        443/TCP                      15h
prometheus-k8s          ClusterIP   192.168.157.67   <none>        9090/TCP                     15h
prometheus-operated     ClusterIP   None             <none>        9090/TCP                     15h
prometheus-operator     ClusterIP   None             <none>        8443/TCP                     16h

The Grafana page can then be reached through the IP of any node running kube-proxy (or the VIP of a keepalived + HAProxy setup) plus port 32621.
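
To confirm the port answers before opening a browser (the node IP is environment-specific, shown here as a placeholder):

[root@k8s-master01 manifests]# curl -sI http://<node-ip>:32621 | head -1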

Grafana's default login is admin/admin. Then change the Prometheus Service to NodePort in the same way:

[root@k8s-master01 manifests]# kubectl edit svc prometheus-k8s -n monitoring
service/prometheus-k8s edited

[root@k8s-master01 manifests]# kubectl get svc -n monitoring #check the external access port now exposed by prometheus
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       ClusterIP   192.168.240.79   <none>        9093/TCP                     16h
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   16h
blackbox-exporter       ClusterIP   192.168.72.138   <none>        9115/TCP,19115/TCP           16h
grafana                 NodePort    192.168.89.66    <none>        3000:32621/TCP               16h
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP            16h
node-exporter           ClusterIP   None             <none>        9100/TCP                     16h
prometheus-adapter      ClusterIP   192.168.244.57   <none>        443/TCP                      16h
prometheus-k8s          NodePort    192.168.157.67   <none>        9090:30311/TCP               16h
prometheus-operated     ClusterIP   None             <none>        9090/TCP                     16h
prometheus-operator     ClusterIP   None             <none>        8443/TCP                     16h

Some alerts may appear under the Alerts tab at this point; they can be ignored for now and investigated later.
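
To see which rules those alerts come from, the PrometheusRule objects installed by the stack can be listed:

[root@k8s-master01 manifests]# kubectl get prometheusrules -n monitoring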
