References
- Copyright notice: this is based on an original article by the CSDN blogger "common_util", licensed under CC 4.0 BY-SA; reproduction requires a link to the original and this notice. Original link: https://blog.csdn.net/shenhonglei1234/article/details/80827570
- Author: amdaous, https://www.jianshu.com/p/ac8853927528 (source: Jianshu). Copyright remains with the author; commercial reproduction requires the author's permission, non-commercial reproduction requires attribution.
Background
- Introduction to Prometheus
Prometheus is an open-source combination of monitoring, alerting, and a time-series database, originally developed at SoundCloud. As more and more companies and organizations adopted it and its community became very active, it was spun off as an independent open-source project with corporate backing. Google's SRE book also mentions Prometheus as an implementation similar to their internal BorgMon monitoring system. Today, Kubernetes, the most common container management system, is usually paired with Prometheus for monitoring.
The basic principle of Prometheus is to periodically scrape the state of monitored components over HTTP. The benefit is that any component can be hooked into the monitoring system simply by exposing an HTTP endpoint, with no SDK or other integration work, which makes it a good fit for virtualized environments such as VMs or Docker. Prometheus is one of the few monitoring systems well suited to Docker, Mesos, and Kubernetes environments.
The HTTP endpoint that exposes a monitored component's metrics is called an exporter. Most of the components commonly used at internet companies already have an exporter that can be used directly, for example Varnish, HAProxy, Nginx, MySQL, and Linux system information (disk, memory, CPU, network, and so on); see https://github.com/prometheus for the supported sources. Compared with other monitoring systems, Prometheus' main features are (a short hands-on look at the scrape format follows the list):
1. A multi-dimensional data model (time series identified by a metric name and a set of key/value labels);
2. Very efficient storage: an average sample takes about 3.5 bytes, so 3.2 million time series sampled every 30 seconds and retained for 60 days consume roughly 228 GB of disk;
3. A flexible query language;
4. No dependency on distributed storage; single server nodes are autonomous;
5. Time-series collection happens via a pull model over HTTP;
6. Pushing time series is supported through an intermediary gateway;
7. Targets are discovered via service discovery or static configuration;
8. Multiple modes of graphing and dashboard support.
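The pull model is easy to see by hand: every scrape target is just an HTTP endpoint that returns plain-text metrics. The sketch below assumes a node-exporter listening on port 9100 on some host; the sample lines are only illustrative of the text exposition format, not copied from a real node.

# Fetch a target's metrics exactly as Prometheus would (replace <host-ip>)
curl -s http://<host-ip>:9100/metrics | head -n 5

# Typical lines in the response look like:
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
# node_load1 0.21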
- Introduction to Grafana
Grafana is a cross-platform, open-source metrics analysis and visualization tool: it queries the collected data, visualizes it, and sends timely notifications. Its six main features are:
1. Display: fast and flexible client-side graphs, panel plugins offering many different ways to visualize metrics and logs, and a rich official dashboard library with heat maps, line charts, tables, and other display styles;
2. Data sources: Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch, CloudWatch, KairosDB, and more;
3. Alerting: define alert rules on your most important metrics visually; Grafana evaluates them continuously and notifies you via Slack, PagerDuty, and others when a threshold is crossed;
4. Mixed display: mix different data sources in the same graph, choosing the data source per query, or even using custom data sources;
5. Annotations: annotate graphs with rich events from different data sources; hovering over an event shows its full metadata and tags;
6. Filters: ad-hoc filters let you create new key/value filters on the fly, which are automatically applied to all queries that use that data source.
Setting up the NFS service
- Install on the master node
yum install -y nfs-utils rpcbind
- Install on the other nodes
yum install -y nfs-utils
- Configure the master node
192.168.0.0/24: the range of IPs allowed to access the NFS export, i.e. addresses starting with 192.168.0, where 24 is the prefix (netmask) length. Set this to match your own Kubernetes host subnet.
(rw,no_root_squash,no_all_squash,sync): the export options. The main options that can be set are:
rw: read-write access;
ro: read-only access;
no_root_squash: if the user logging into the NFS server is root, that user keeps root privileges;
root_squash: if the user logging into the NFS server is root, the user is mapped to the anonymous user nobody;
all_squash: every user logging into the NFS server, regardless of privileges, is mapped to the anonymous user nobody;
anonuid: map users logging into the NFS server to the given user ID, which must exist in /etc/passwd;
anongid: same as anonuid, but for the group ID;
sync: data is written to storage synchronously;
async: data is first buffered in memory rather than written straight to disk;
insecure: allow client requests that originate from unprivileged ports (above 1024).

$ vi /etc/exports
/nfs/prometheus/data/ 192.168.0.0/24(insecure,rw,no_root_squash,no_all_squash,sync)
/nfs/grafana/data/ 192.168.0.0/24(insecure,rw,no_root_squash,no_all_squash,sync)
- Create the corresponding directories
# Create /nfs/prometheus/data/ as configured in /etc/exports
mkdir -p /nfs/prometheus/data/
# Adjust permissions
chmod -R 777 /nfs/prometheus/data/
# Re-export and verify that /nfs/prometheus/data/ is configured correctly
exportfs -r

# Create /nfs/grafana/data/ as configured in /etc/exports
mkdir -p /nfs/grafana/data/
# Adjust permissions
chmod -R 777 /nfs/grafana/data/
# Re-export and verify that /nfs/grafana/data/ is configured correctly
exportfs -r
- Start the services
# On the master node
systemctl enable nfs
systemctl start nfs
systemctl status nfs

# On all nodes
systemctl enable rpcbind
systemctl start rpcbind
systemctl status rpcbind
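As a quick sanity check (a sketch; the exact output differs by distribution), confirm that the NFS services have registered with rpcbind:

# nfs and mountd should both appear among the portmapper registrations
rpcinfo -p | grep -E 'nfs|mountd'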
- Verify
On the NFS client:
1. The showmount command is a great help when working with and troubleshooting NFS, so let's look at its usage first.
showmount
-a: generally used on the NFS server; lists the client machines that have mounted this server's NFS directories.
-e: lists the directories exported by the specified NFS server.
2. Mounting an NFS directory:
mount -t nfs hostname(or IP):/directory /mount/point

$ showmount -e 192.168.0.111
Export list for 192.168.0.111:
/nfs/grafana/data    192.168.0.0/24
/nfs/prometheus/data 192.168.0.0/24
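For example, before wiring NFS into Kubernetes you could mount one of the exports by hand from any node (a sketch assuming the NFS server is 192.168.0.111 as above and /mnt is a free mount point):

# Temporarily mount the Prometheus export, confirm it is writable, then unmount
mount -t nfs 192.168.0.111:/nfs/prometheus/data /mnt
touch /mnt/test && rm /mnt/test
umount /mnt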
Installing Prometheus
- Create a namespace in the Kubernetes cluster
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-monitor
  labels:
    name: ns-monitor
kubectl apply -f namespace.yaml
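To confirm the namespace was created (the AGE shown here is only illustrative):

$ kubectl get namespace ns-monitor
NAME         STATUS   AGE
ns-monitor   Active   1m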
- Install node-exporter
Deploy node-exporter in the Kubernetes cluster. node-exporter collects the physical metrics of every node in the cluster, such as memory and CPU. It could be installed directly on each physical node, but here we deploy it as a DaemonSet so that it runs on every node, use hostNetwork: true and hostPID: true so that it can read the node's physical metrics, and configure tolerations so that a pod is also started on the master node.
# node-exporter.yml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  labels:
    app: node-exporter
  name: node-exporter
  namespace: ns-monitor
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
        - name: node-exporter
          image: prom/node-exporter:v0.16.0
          ports:
            - containerPort: 9100
              protocol: TCP
              name: http
      hostNetwork: true
      hostPID: true
      tolerations:
        - effect: NoSchedule
          operator: Exists
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: node-exporter
  name: node-exporter-service
  namespace: ns-monitor
spec:
  ports:
    - name: http
      port: 9100
      nodePort: 31672
      protocol: TCP
  type: NodePort
  selector:
    app: node-exporter
Verify that node-exporter is running correctly:
$ kubectl get pod -n ns-monitor
NAME                         READY   STATUS    RESTARTS   AGE
grafana-576db894c6-tvvgx     1/1     Running   0          2d15h
node-exporter-jkt2g          1/1     Running   2          2d17h
node-exporter-lkk27          1/1     Running   2          2d17h
prometheus-dd69c4889-d8hf6   1/1     Running   0          2d15h
Browse to http://<host-ip>:31672/metrics
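The same endpoint can also be checked from the command line (replace <host-ip> with any node's address; which metric names appear depends on the node-exporter version):

# Show the first few node_* metrics exposed via the NodePort service
curl -s http://<host-ip>:31672/metrics | grep -m 3 '^node_'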
- Deploy the Prometheus pod
prometheus.yaml contains the RBAC objects, the ConfigMaps, and the other resources Prometheus needs.
# prometheus.yaml
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - nodes
      - nodes/proxy
      - services
      - endpoints
      - pods
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - watch
      - list
  - nonResourceURLs: ["/metrics"]
    verbs:
      - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: ns-monitor
  labels:
    app: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: ns-monitor
roleRef:
  kind: ClusterRole
  name: prometheus
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-conf
  namespace: ns-monitor
  labels:
    app: prometheus
data:
  prometheus.yml: |-
    # my global config
    global:
      scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
      evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
      # scrape_timeout is set to the global default (10s).

    # Alertmanager configuration
    alerting:
      alertmanagers:
        - static_configs:
            - targets:
              # - alertmanager:9093

    # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
    rule_files:
      # - "first_rules.yml"
      # - "second_rules.yml"

    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: 'prometheus'
        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.
        static_configs:
          - targets: ['localhost:9090']

      - job_name: 'grafana'
        static_configs:
          - targets:
              - 'grafana-service.ns-monitor:3000'

      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
          - role: endpoints
        # Default to scraping over https. If required, just disable this or change to `http`.
        scheme: https
        # This TLS & bearer token file config is used to connect to the actual scrape
        # endpoints for cluster components. This is separate to discovery auth
        # configuration because discovery & scraping are two separate concerns in
        # Prometheus. The discovery auth config is automatic if Prometheus runs inside
        # the cluster. Otherwise, more config options have to be provided within the
        # <kubernetes_sd_config>.
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          # If your node certificates are self-signed or use a different CA to the
          # master CA, then disable certificate verification below. Note that
          # certificate verification is an integral part of a secure infrastructure
          # so this should only be disabled in a controlled environment. You can
          # disable certificate verification by uncommenting the line below.
          #
          # insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        # Keep only the default/kubernetes service endpoints for the https port. This
        # will add targets for each API server which Kubernetes adds an endpoint to
        # the default/kubernetes service.
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default;kubernetes;https

      # Scrape config for nodes (kubelet).
      #
      # Rather than connecting directly to the node, the scrape is proxied through the
      # Kubernetes apiserver. This means it will work if Prometheus is running out of
      # cluster, or can't connect to nodes for some other reason (e.g. because of
      # firewalling).
      - job_name: 'kubernetes-nodes'
        # Default to scraping over https. If required, just disable this or change to `http`.
        scheme: https
        # This TLS & bearer token file config is used to connect to the actual scrape
        # endpoints for cluster components. This is separate to discovery auth
        # configuration because discovery & scraping are two separate concerns in
        # Prometheus. The discovery auth config is automatic if Prometheus runs inside
        # the cluster. Otherwise, more config options have to be provided within the
        # <kubernetes_sd_config>.
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/${1}/proxy/metrics

      # Scrape config for Kubelet cAdvisor.
      #
      # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
      # (those whose names begin with 'container_') have been removed from the
      # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
      # retrieve those metrics.
      #
      # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
      # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
      # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
      # the --cadvisor-port=0 Kubelet flag).
      #
      # This job is not necessary and should be removed in Kubernetes 1.6 and
      # earlier versions, or it will cause the metrics to be scraped twice.
      - job_name: 'kubernetes-cadvisor'
        # Default to scraping over https. If required, just disable this or change to `http`.
        scheme: https
        # This TLS & bearer token file config is used to connect to the actual scrape
        # endpoints for cluster components. This is separate to discovery auth
        # configuration because discovery & scraping are two separate concerns in
        # Prometheus. The discovery auth config is automatic if Prometheus runs inside
        # the cluster. Otherwise, more config options have to be provided within the
        # <kubernetes_sd_config>.
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

      # Scrape config for service endpoints.
      #
      # The relabeling allows the actual service scrape endpoint to be configured
      # via the following annotations:
      #
      # * `prometheus.io/scrape`: Only scrape services that have a value of `true`
      # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
      #   to set this to `https` & most likely set the `tls_config` of the scrape config.
      # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
      # * `prometheus.io/port`: If the metrics are exposed on a different port to the
      #   service then set this appropriately.
      - job_name: 'kubernetes-service-endpoints'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
            action: replace
            target_label: __scheme__
            regex: (https?)
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
            action: replace
            target_label: __address__
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_service_name]
            action: replace
            target_label: kubernetes_name

      # Example scrape config for probing services via the Blackbox Exporter.
      #
      # The relabeling allows the actual service scrape endpoint to be configured
      # via the following annotations:
      #
      # * `prometheus.io/probe`: Only probe services that have a value of `true`
      - job_name: 'kubernetes-services'
        metrics_path: /probe
        params:
          module: [http_2xx]
        kubernetes_sd_configs:
          - role: service
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
            action: keep
            regex: true
          - source_labels: [__address__]
            target_label: __param_target
          - target_label: __address__
            replacement: blackbox-exporter.example.com:9115
          - source_labels: [__param_target]
            target_label: instance
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_service_name]
            target_label: kubernetes_name

      # Example scrape config for probing ingresses via the Blackbox Exporter.
      #
      # The relabeling allows the actual ingress scrape endpoint to be configured
      # via the following annotations:
      #
      # * `prometheus.io/probe`: Only probe services that have a value of `true`
      - job_name: 'kubernetes-ingresses'
        metrics_path: /probe
        params:
          module: [http_2xx]
        kubernetes_sd_configs:
          - role: ingress
        relabel_configs:
          - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_ingress_scheme, __address__, __meta_kubernetes_ingress_path]
            regex: (.+);(.+);(.+)
            replacement: ${1}://${2}${3}
            target_label: __param_target
          - target_label: __address__
            replacement: blackbox-exporter.example.com:9115
          - source_labels: [__param_target]
            target_label: instance
          - action: labelmap
            regex: __meta_kubernetes_ingress_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_ingress_name]
            target_label: kubernetes_name

      # Example scrape config for pods
      #
      # The relabeling allows the actual pod scrape endpoint to be configured via the
      # following annotations:
      #
      # * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
      # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
      # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the
      #   pod's declared ports (default is a port-free target if none are declared).
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: kubernetes_pod_name
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rules
  namespace: ns-monitor
  labels:
    app: prometheus
data:
  cpu-usage.rule: |
    groups:
      - name: NodeCPUUsage
        rules:
          - alert: NodeCPUUsage
            expr: (100 - (avg by (instance) (irate(node_cpu{name="node-exporter",mode="idle"}[5m])) * 100)) > 75
            for: 2m
            labels:
              severity: "page"
            annotations:
              summary: "{{$labels.instance}}: High CPU usage detected"
              description: "{{$labels.instance}}: CPU usage is above 75% (current value is: {{ $value }})"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "prometheus-data-pv"
  labels:
    name: prometheus-data-pv
    release: stable
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs/prometheus/data
    server: 192.168.0.111
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data-pvc
  namespace: ns-monitor
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      name: prometheus-data-pv
      release: stable
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: ns-monitor
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      serviceAccountName: prometheus
      securityContext:
        runAsUser: 0
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /prometheus
              name: prometheus-data-volume
            - mountPath: /etc/prometheus/prometheus.yml
              name: prometheus-conf-volume
              subPath: prometheus.yml
            - mountPath: /etc/prometheus/rules
              name: prometheus-rules-volume
          ports:
            - containerPort: 9090
              protocol: TCP
      volumes:
        - name: prometheus-data-volume
          persistentVolumeClaim:
            claimName: prometheus-data-pvc
        - name: prometheus-conf-volume
          configMap:
            name: prometheus-conf
        - name: prometheus-rules-volume
          configMap:
            name: prometheus-rules
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    app: prometheus
  name: prometheus-service
  namespace: ns-monitor
spec:
  ports:
    - port: 9090
      targetPort: 9090
  selector:
    app: prometheus
  type: NodePort
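Optionally, the embedded prometheus.yml can be syntax-checked before it is loaded into the cluster. This is a sketch under two assumptions not covered by the original article: that promtool (shipped with Prometheus 2.x) is available locally, and that the data.prometheus.yml portion of the ConfigMap has been saved to a local file named prometheus.yml.

# Validate the scrape configuration before applying the manifest
promtool check config prometheus.yml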
kubectl apply -f prometheus.yaml
Verify that everything is running correctly:
$ kubectl get pod -n ns-monitor
NAME                         READY   STATUS    RESTARTS   AGE
grafana-576db894c6-tvvgx     1/1     Running   0          2d15h
node-exporter-jkt2g          1/1     Running   2          2d17h
node-exporter-lkk27          1/1     Running   2          2d17h
prometheus-dd69c4889-d8hf6   1/1     Running   0          2d15h
$ kubectl get svc -n ns-monitor
NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
grafana-service         NodePort   192.168.0.130   <none>        3000:31683/TCP   2d15h
node-exporter-service   NodePort   192.168.0.210   <none>        9100:31672/TCP   2d17h
prometheus-service      NodePort   192.168.0.226   <none>        9090:20629/TCP   2d15h
Browse to http://<host-ip>:20629/graph
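In the expression browser you can run ad-hoc PromQL, for example a query that mirrors the NodeCPUUsage rule defined above. One caveat, stated here as an assumption rather than something from the original article: node-exporter 0.16 renamed the CPU metric from node_cpu to node_cpu_seconds_total, so with the v0.16.0 image deployed earlier the newer name is the one that will return data.

100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

The discovered scrape targets can also be reviewed at http://<host-ip>:20629/targets.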
Installing Grafana
- Install Grafana
# grafana.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "grafana-data-pv"
  labels:
    name: grafana-data-pv
    release: stable
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs/grafana/data
    server: 192.168.0.111
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-data-pvc
  namespace: ns-monitor
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      name: grafana-data-pv
      release: stable
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: ns-monitor
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        runAsUser: 0
      containers:
        - name: grafana
          image: grafana/grafana:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: GF_AUTH_BASIC_ENABLED
              value: "true"
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: "false"
          readinessProbe:
            httpGet:
              path: /login
              port: 3000
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-data-volume
          ports:
            - containerPort: 3000
              protocol: TCP
      volumes:
        - name: grafana-data-volume
          persistentVolumeClaim:
            claimName: grafana-data-pvc
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: grafana
  name: grafana-service
  namespace: ns-monitor
spec:
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    app: grafana
  type: NodePort
kubectl apply -f grafana.yaml
Verify that everything is running correctly:
$ kubectl get pod -n ns-monitor
NAME                          READY   STATUS
grafana-677d945674-56m5n      1/1     Running
node-exporter-vkpt2           1/1     Running
node-exporter-zkh9s           1/1     Running
prometheus-6c9574d5ff-292bq   1/1     Running

$ kubectl get svc -n ns-monitor
NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
grafana-service         NodePort   192.168.0.130   <none>        3000:31683/TCP   2d16h
node-exporter-service   NodePort   192.168.0.210   <none>        9100:31672/TCP   2d18h
prometheus-service      NodePort   192.168.0.226   <none>        9090:20629/TCP   2d16h
Browse to http://<host-ip>:31683/login; the default username and password are admin/admin.
- Configure the Grafana data source
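The original text stops at this heading. As a sketch of what this step usually involves (only the service name and port below come from the manifests above; everything else is an assumption): in the Grafana UI add a data source of type Prometheus pointing at the in-cluster service http://prometheus-service.ns-monitor:9090, or provision it declaratively with a file like the following placed under /etc/grafana/provisioning/datasources/.

# datasource.yaml -- hypothetical provisioning file, not part of the original article
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    # In-cluster address of the prometheus-service created above
    url: http://prometheus-service.ns-monitor:9090
    isDefault: true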