Deploying a Single-Node Zookeeper on K8s and Monitoring It

0. Preface

  1> This article does not monitor Zookeeper through zookeeper-exporter, mainly because the metrics that exporter exposes are incomplete. The full set of server-side metrics is defined in https://github.com/apache/zookeeper/blob/master/zookeeper-server/src/main/java/org/apache/zookeeper/server/ServerMetrics.java. Furthermore, starting with version 3.6, ZooKeeper ships its own monitoring support (see https://zookeeper.apache.org/doc/r3.6.4/zookeeperMonitor.html and https://github.com/apache/zookeeper/blob/master/zookeeper-docs/src/main/resources/markdown/zookeeperMonitor.md), so this article monitors Zookeeper by deploying a ServiceMonitor instead.

  2> About the deployment manifests: the official zookeeper image creates and runs as the zookeeper user and group, while the K8s environment used here runs containers as root. As a result, mounting a config file over the image's original one fails with a read-only file / permission denied error. The Deployment therefore has to declare the user and group that Zookeeper runs as. To find those values, first deploy once without specifying a user or group, open a shell in the container, and run id zookeeper (see the sketch after this list).

  3> For Prometheus to monitor Zookeeper, port 7000 must be exposed; Prometheus scrapes the metric data through that port.

  4> Both the Zookeeper deployment and the ServiceMonitor configuration below are carried out in Kuboard; if you use the command line or another management UI, adapt the steps accordingly.
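
To find the user and group values mentioned in 2>, a minimal sketch (assuming the Deployment from section 1.1 has already been applied once without its securityContext block):

# Open a shell in the running container and inspect the zookeeper account;
# "deploy/zookeeper" lets kubectl pick a pod belonging to the Deployment
kubectl -n k8s-middleware exec -it deploy/zookeeper -- id zookeeper
# Expected output for the official image:
# uid=1000(zookeeper) gid=1000(zookeeper) groups=1000(zookeeper)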

1. Manifests for Deploying Single-Node Zookeeper on K8s

1.1 Deploying the Deployment

---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: zk-deployment
  name: zookeeper
  namespace: k8s-middleware
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zk
  template:
    metadata:
      name: zk
      labels:
        app: zk
    spec:
      # The official image creates the zookeeper user and group (uid/gid 1000);
      # run the pod as that identity so overriding /conf/zoo.cfg does not fail with a permission error
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - configMap:
          defaultMode: 493 # decimal for octal 0755 (rwxr-xr-x)
          name: zookeeper-configmap
        name: zkconf
      containers:
      - name: zookeeper
        image: zookeeper:3.6.2
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
          readOnly: true
        - mountPath: /conf/zoo.cfg
          name: zkconf
          subPath: zoo.cfg
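
If you are working from the command line instead of Kuboard, the manifest above can be applied and checked as follows (the file name zookeeper-deployment.yaml is an arbitrary choice):

kubectl apply -f zookeeper-deployment.yaml
# The pod should reach Running; -l app=zk matches the template labels above
kubectl -n k8s-middleware get pods -l app=zk
# Startup logs should show the server binding client port 2181
kubectl -n k8s-middleware logs deploy/zookeeper --tail=20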

1.2 Deploying the Service

---
kind: Service
apiVersion: v1
metadata:
  name: zookeeper
  namespace: k8s-middleware
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
    protocol: TCP
    targetPort: 2181
  - name: metrics
    port: 7000
    protocol: TCP
    targetPort: 7000
  clusterIP: None
  selector:
    app: zk
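
Since clusterIP is None, this is a headless Service: DNS resolves straight to the pod IP, and the metrics port it exposes is what the ServiceMonitor in section 2 scrapes. A quick sanity check that the Service picked up the pod:

# Both the 2181 and 7000 ports should list the pod IP once the Deployment is up
kubectl -n k8s-middleware get endpoints zookeeper
# Optional DNS check from a throwaway pod (the image choice is arbitrary)
kubectl -n k8s-middleware run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup zookeeper.k8s-middleware.svc.cluster.local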

1.3 Mounting the Config File as a ConfigMap

---
apiVersion: v1
data:
  zoo.cfg: |
    dataDir=/data
    dataLogDir=/datalog
    clientPort=2181
    tickTime=2000
    initLimit=10
    syncLimit=5
    autopurge.snapRetainCount=10
    autopurge.purgeInterval=24
    maxClientCnxns=600
    standaloneEnabled=true
    admin.enableServer=true
    server.1=localhost:2888:3888
    ## Metrics Providers
    # https://prometheus.io Metrics Exporter
    metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
    metricsProvider.httpPort=7000
    metricsProvider.exportJvmInfo=true
kind: ConfigMap
metadata:
  name: zookeeper-configmap
  namespace: k8s-middleware
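
Once all three manifests are applied, the PrometheusMetricsProvider configured above should be serving metrics on port 7000. A quick check from a local machine:

# Forward the metrics port and fetch a sample of the exposed metrics;
# the names should match ServerMetrics.java referenced in the preface
kubectl -n k8s-middleware port-forward deploy/zookeeper 7000:7000 &
curl -s http://localhost:7000/metrics | head -n 20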

2. Configuring a ServiceMonitor on K8s to Monitor Zookeeper

---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: zookeeper-prometheus
  namespace: k8s-middleware
spec:
  endpoints:
    - interval: 1m
      port: metrics
  namespaceSelector:
    matchNames:
      - k8s-middleware
  selector:
    matchLabels:
      app: zk
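
Note that the Prometheus Operator only picks this ServiceMonitor up if it matches the serviceMonitorSelector (and namespace selector) of your Prometheus custom resource; check those first if no target appears. A sketch of verifying the scrape, assuming a kube-prometheus style installation where Prometheus runs in the monitoring namespace as svc/prometheus-k8s (adjust both names to your setup):

kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090 &
# The new target should appear under http://localhost:9090/targets;
# alternatively, query a metric through the HTTP API (znode_count is one of
# the names exposed by ZooKeeper's metrics provider; verify it against your
# own /metrics output)
curl -s 'http://localhost:9090/api/v1/query?query=znode_count'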

3. Configuring the Grafana Dashboard

For the Grafana dashboard, this article uses dashboard ID 10465; an introduction is available at https://grafana.com/grafana/dashboards/10465-zookeeper-by-prometheus/

When importing this dashboard, remember to point its data source variable at your own Prometheus data source.
