The server runs CentOS 7.
1: Search Docker Hub for grafana and choose the latest image, grafana/grafana:latest
2: Search Docker Hub for prometheus and choose the image prom/prometheus:v2.10.0
3: Search Docker Hub for node-exporter and choose the image prom/node-exporter:v0.17.0
4: Deploy the three images above through Kuboard
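Kuboard deploys these from its UI, but for reference, a minimal manifest for node-exporter might look like the sketch below (the monitoring namespace, labels, and DaemonSet choice are assumptions; Kuboard generates similar resources from the form fields):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring   # assumed namespace
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true   # expose host metrics on the node's own IP
      containers:
        - name: node-exporter
          image: prom/node-exporter:v0.17.0
          ports:
            - containerPort: 9100   # node-exporter's default metrics port
```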
Deploying Prometheus is the most involved part; the steps are as follows:
1: Install NFS on the server (see https://blog.csdn.net/qq_38265137/article/details/83146421)
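A rough sketch of the NFS server setup on CentOS 7 (the export path /data/nfs and the 192.168.7.0/24 subnet are assumptions; adjust to your environment):

```shell
# Install the NFS server packages
yum install -y nfs-utils rpcbind

# Create and export a share (path and subnet are assumptions)
mkdir -p /data/nfs
echo '/data/nfs 192.168.7.0/24(rw,sync,no_root_squash)' >> /etc/exports

# Start the services and reload the export table
systemctl enable --now rpcbind nfs-server
exportfs -r

# Verify the export is visible
showmount -e localhost
```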
2: Once NFS is installed, create a StorageClass and a PersistentVolumeClaim
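A minimal sketch of the StorageClass and PVC, assuming the nfs-client-provisioner is used (the names nfs-storage and prometheus-data and the 10Gi size are assumptions; the provisioner string must match whatever was configured when deploying nfs-client-provisioner):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: fuseim.pri/ifs   # must match the nfs-client-provisioner's PROVISIONER_NAME
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi   # assumed size
```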
3: Create a Secret named etcd-certs, taking the .crt and .key values from the Kubernetes config directory
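One way to create that Secret from the command line (the file paths below are the kubeadm defaults and may differ on your cluster):

```shell
# Create the etcd-certs Secret from the cert and key files on a master node
kubectl create secret generic etcd-certs \
  --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.key
```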
4: Write the Prometheus config file:
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.7.123:30091']
  - job_name: 'heapster'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.7.123:30080']
  - job_name: 'metrics'
    scrape_interval: 5s
    static_configs:
      - targets: ['monitor-kube-state-metrics:8080']
  - job_name: 'gateway'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.7.115:30020']
5: Deploy Prometheus
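One common way to get the config file above into the Prometheus pod is a ConfigMap mounted at /etc/prometheus (the names prometheus.yml and prometheus-config are assumptions):

```shell
# Package the config file as a ConfigMap, then mount it in the
# Prometheus Deployment (via Kuboard's volume settings or a manifest)
kubectl create configmap prometheus-config --from-file=prometheus.yml
```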
Afterwards, visit ip:port to open Grafana.
1: Configure the Prometheus data source
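This can be done in the Grafana UI, or declaratively with a provisioning file; a sketch of the latter, where the Prometheus URL is an assumption (use your Service or NodePort address):

```yaml
# /etc/grafana/provisioning/datasources/datasources.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090   # assumed service address
    isDefault: true
```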
You can search https://grafana.com/dashboards for dashboards yourself; here we use a node_exporter dashboard written by a Chinese author, whose page is https://grafana.com/dashboards/8919
If the cluster cannot reach the internet, you need to import the following image first:
quay.io/external_storage/nfs-client-provisioner:v3.1.0-k8s1.11
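A sketch of transferring that image to an offline server via docker save/load (run the first two commands on a machine with internet access):

```shell
# On a connected machine: pull and export the image to a tarball
docker pull quay.io/external_storage/nfs-client-provisioner:v3.1.0-k8s1.11
docker save quay.io/external_storage/nfs-client-provisioner:v3.1.0-k8s1.11 \
  -o nfs-client-provisioner.tar

# Copy the tarball over, then on the offline server:
docker load -i nfs-client-provisioner.tar
```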
The configuration below is left empty.