Deploying Kubernetes + Heapster + InfluxDB + Grafana in Detail

1. Deploying Kubernetes

Installation Steps

Preparation

Disable the firewall

To avoid conflicts with the iptables rules Docker manages, disable the firewall on every node:

$ systemctl stop firewalld
$ systemctl disable firewalld

Install NTP

To keep the clocks of all the servers in sync, install NTP on each of them:

$ yum -y install ntp
$ systemctl start ntpd
$ systemctl enable ntpd

Deploying the Master

Install etcd and Kubernetes

$ yum -y install etcd kubernetes

Configure etcd

Edit the etcd configuration file /etc/etcd/etcd.conf:

ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

Configure the network settings in etcd

Define the network configuration in etcd; the flannel service on each node will pull this configuration:

$ etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
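To confirm the key was stored, it can be read straight back (run on the Master against the live etcd):

```shell
$ etcdctl get /coreos.com/network/config
{"Network":"172.17.0.0/16"}
```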

 

Configure the Kubernetes API server

Edit the API server configuration file (/etc/kubernetes/apiserver on CentOS):

KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://10.0.222.2:2379"
KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""

Note that ServiceAccount, which KUBE_ADMISSION_CONTROL includes by default, must be removed here; otherwise the API server will report an error on startup.
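The removal can also be scripted with sed. The sketch below works on a throwaway copy rather than the real file (/etc/kubernetes/apiserver), so it is safe to try anywhere:

```shell
# Write the stock admission-control line to a scratch copy;
# on the Master you would edit /etc/kubernetes/apiserver instead.
cat > /tmp/apiserver.example <<'EOF'
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota"
EOF

# Drop ServiceAccount from the comma-separated plugin list.
sed -i 's/ServiceAccount,//' /tmp/apiserver.example
cat /tmp/apiserver.example
```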

Start the services

Next, start the following services on the Master:

$ for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

Deploying the Nodes

Install Kubernetes and Flannel

$ yum -y install flannel kubernetes

Configure Flannel

Edit the Flannel configuration file /etc/sysconfig/flanneld:

FLANNEL_ETCD="http://10.0.222.2:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"
FLANNEL_OPTIONS="--iface=ens3"

Note that the iface value in FLANNEL_OPTIONS must be the network interface of your own server; it will differ from mine depending on the server and its configuration.
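One way to find the right interface name is to look at the default route. The one-liner is demonstrated here on a canned sample line so it runs anywhere; on a real node, feed it the output of `ip -o -4 route show to default` instead:

```shell
# Field 5 of the default-route line is the outgoing interface name.
# The echoed line is a sample; on a node, pipe real `ip route` output.
echo "default via 10.0.222.1 dev ens3 proto static metric 100" | awk '{print $5}'
# -> ens3
```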

Start Flannel

$ systemctl restart flanneld
$ systemctl enable flanneld
$ systemctl status flanneld

Upload the network configuration

Create a file named config.json in the current directory with the following content:

{
  "Network": "172.17.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan",
    "VNI": 7890
  }
}

Then upload the configuration to the etcd server:

$ curl -L http://10.0.222.2:2379/v2/keys/coreos.com/network/config -XPUT --data-urlencode value@config.json

Modify the Kubernetes configuration

Edit the default Kubernetes configuration file /etc/kubernetes/config:

KUBE_MASTER="--master=http://10.0.222.2:8080"

Modify the kubelet configuration

Edit the kubelet service configuration file /etc/kubernetes/kubelet:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
# change the hostname to minion IP address
KUBELET_HOSTNAME="--hostname_override=node1"
KUBELET_API_SERVER="--api_servers=http://10.0.222.2:8080"
KUBELET_ARGS=""

On the other nodes, only KUBELET_HOSTNAME needs to be changed to that node's hostname.

Start the node services

$ for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

Create a snapshot of this node and install the remaining nodes from it (changing only the hostname and KUBELET_HOSTNAME accordingly).

Check the cluster nodes

Once deployment is complete, you can use the kubectl command to check the state of the whole cluster:

$ kubectl -s "http://10.0.222.2:8080" get nodes

2. Deploying InfluxDB

Install InfluxDB.

Edit the configuration file (by default /etc/influxdb/influxdb.conf).

Check the ports InfluxDB listens on (8091, 8083, 8086, and 8088).

Enter the influx command line.
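A minimal sketch of such a CLI session (the k8s database can be created here by hand, although, as shown later, Heapster also creates it automatically):

```shell
$ influx
> SHOW DATABASES
> CREATE DATABASE k8s
> USE k8s
> SHOW MEASUREMENTS
> exit
```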


3. Deploying Grafana

Check the configuration file (by default /etc/grafana/grafana.ini):

Start the grafana service:

[root@influxdb src]# systemctl enable grafana-server
[root@influxdb src]# systemctl start grafana-server
[root@influxdb src]# systemctl status grafana-server
● grafana-server.service - Starts and stops a single grafana instance on this system
   Loaded: loaded (/usr/lib/systemd/system/grafana-server.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-03-16 04:29:11 EDT; 6s ago
     Docs: http://docs.grafana.org
 Main PID: 2519 (grafana-server)
   CGroup: /system.slice/grafana-server.service
           └─2519 /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini --pidfile= cfg:default.paths.logs=/var/log/grafana cfg:default.paths.data=/var/lib/graf...

Mar 16 04:29:12 influxdb grafana-server[2519]: 2016/03/16 04:29:12 [I] Migrator: exec migration id: drop table dashboard_snapshot_v4 #1
Mar 16 04:29:12 influxdb grafana-server[2519]: 2016/03/16 04:29:12 [I] Migrator: exec migration id: create dashboard_snapshot table v5 #2
Mar 16 04:29:12 influxdb grafana-server[2519]: 2016/03/16 04:29:12 [I] Migrator: exec migration id: create index UQE_dashboard_snapshot_key - v5
Mar 16 04:29:12 influxdb grafana-server[2519]: 2016/03/16 04:29:12 [I] Migrator: exec migration id: create index UQE_dashboard_snapshot_delete_key - v5
Mar 16 04:29:12 influxdb grafana-server[2519]: 2016/03/16 04:29:12 [I] Migrator: exec migration id: create index IDX_dashboard_snapshot_user_id - v5
Mar 16 04:29:12 influxdb grafana-server[2519]: 2016/03/16 04:29:12 [I] Migrator: exec migration id: alter dashboard_snapshot to mediumtext v2
Mar 16 04:29:12 influxdb grafana-server[2519]: 2016/03/16 04:29:12 [I] Migrator: exec migration id: create quota table v1
Mar 16 04:29:12 influxdb grafana-server[2519]: 2016/03/16 04:29:12 [I] Migrator: exec migration id: create index UQE_quota_org_id_user_id_target - v1
Mar 16 04:29:12 influxdb grafana-server[2519]: 2016/03/16 04:29:12 [I] Created default admin user: admin
Mar 16 04:29:12 influxdb grafana-server[2519]: 2016/03/16 04:29:12 [I] Listen: http://0.0.0.0:3000
[root@influxdb src]#

Check the listening port (3000):

[root@influxdb src]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      911/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1476/master
tcp        0      0 127.0.0.1:8091          0.0.0.0:*               LISTEN      2424/influxd
tcp6       0      0 :::8083                 :::*                    LISTEN      2424/influxd
tcp6       0      0 :::8086                 :::*                    LISTEN      2424/influxd
tcp6       0      0 :::22                   :::*                    LISTEN      911/sshd
tcp6       0      0 :::3000                 :::*                    LISTEN      2519/grafana-server
tcp6       0      0 :::8088                 :::*                    LISTEN      2424/influxd
tcp6       0      0 ::1:25                  :::*                    LISTEN      1476/master
[root@influxdb src]#

Open http://192.168.12.172:3000 in a browser.

The default credentials are admin/admin.

4. Downloading Heapster

docker pull index.tenxcloud.com/google_containers/heapster:v1.1.0

5. Deployment and Usage

Deploy Heapster, connected to InfluxDB with the database name k8s:
[root@k8s_master k8s]# more heapster-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: heapster
  labels:
    name: heapster
spec:
  replicas: 1
  selector:
    name: heapster
  template:
    metadata:
      labels:
        name: heapster
    spec:
      containers:
      - name: heapster
        image: index.tenxcloud.com/google_containers/heapster:v1.1.0
        command:
          - /heapster
          - --source=kubernetes:http://192.168.12.174:8080?inClusterConfig=false&kubeletHttps=true&kubeletPort=10250&useServiceAccount=true&auth=
          - --sink=influxdb:http://192.168.12.172:8086
[root@k8s_master k8s]# kubectl create -f heapster-controller.yaml
replicationcontroller "heapster" created
[root@k8s_master k8s]# kubectl get pods
NAME             READY     STATUS    RESTARTS   AGE
frontend-9yoz1   1/1       Running   0          22m
heapster-sgflr   1/1       Running   0          36m
[root@k8s_master k8s]# kubectl logs heapster-sgflr
I0418 03:05:33.857588       1 heapster.go:61] /heapster --source=kubernetes:http://192.168.12.174:8080?inClusterConfig=false&kubeletHttps=true&kubeletPort=10250&useServiceAccount=true&auth= --sink=influxdb:http://192.168.12.172:8086
I0418 03:05:33.857856       1 heapster.go:62] Heapster version 1.1.0
I0418 03:05:33.858113       1 kube_factory.go:172] Using Kubernetes client with master "http://192.168.12.174:8080" and version "v1"
I0418 03:05:33.858145       1 kube_factory.go:173] Using kubelet port 10250
I0418 03:05:33.861330       1 driver.go:316] created influxdb sink with options: {root root 192.168.12.172:8086 k8s false}
I0418 03:05:33.862708       1 driver.go:245] Created database "k8s" on influxDB server at "192.168.12.172:8086"
I0418 03:05:33.872579       1 heapster.go:72] Starting heapster on port 8082
[root@k8s_master k8s]#


For the --sink parameter, see https://github.com/kubernetes/heapster/blob/master/docs/sink-configuration.md
For the --source parameter, see https://github.com/kubernetes/heapster/blob/master/docs/source-configuration.md

Finally, log in to Grafana and change the connected database to k8s.
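The same change can be made through Grafana's HTTP API instead of the UI. This is only a sketch: it assumes the admin/admin Grafana login and the root/root InfluxDB credentials visible in the Heapster log, and the field names should be checked against the installed Grafana version:

```shell
# Create an InfluxDB data source pointing at the k8s database.
curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST http://192.168.12.172:3000/api/datasources \
  -d '{"name":"k8s","type":"influxdb","url":"http://192.168.12.172:8086","access":"proxy","database":"k8s","user":"root","password":"root"}'
```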

[screenshot: heapster-01]

[screenshot: heapster-02]

6. Notes on the Data Collected by Heapster 1.1.0


When the Heapster container starts on its own, it connects to InfluxDB and creates the k8s database.

The metrics Heapster collects fall into two categories (keep this in mind when searching in Grafana):

    1) cumulative: accumulated totals, such as CPU usage time and network bytes in/out;

    2) gauge: instantaneous values, such as memory usage.
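The *_rate metrics are derived from their cumulative counterparts by dividing the change between two samples by the sampling interval. A toy illustration with invented numbers:

```shell
# Two samples of a cumulative counter (think cpu/usage), 30 seconds apart.
t1=100; v1=5000   # time (s) and counter value at the first sample
t2=130; v2=6500   # time (s) and counter value at the second sample
# Gauge-style rate over the window = delta(value) / delta(time).
rate=$(( (v2 - v1) / (t2 - t1) ))
echo "$rate"   # -> 50
```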

 

Metric                          Description                                         Category
cpu/limit                       CPU limit (settable in the pod YAML)                gauge
cpu/node_reservation            CPU reserved on the kube node (cf. cpu/limit)       gauge
cpu/node_utilization            CPU utilization                                     gauge
cpu/request                     CPU requested (settable in the pod YAML)            gauge
cpu/usage                       CPU usage                                           cumulative
cpu/usage_rate                  CPU usage rate                                      gauge
filesystem/limit                Filesystem limit                                    gauge
filesystem/usage                Filesystem usage                                    gauge
memory/limit                    Memory limit (settable in the pod YAML)             gauge
memory/major_page_faults        Major page faults                                   cumulative
memory/major_page_faults_rate   Major page fault rate                               gauge
memory/node_reservation         Memory reserved on the node                         gauge
memory/node_utilization         Node memory utilization                             gauge
memory/page_faults              Page faults                                         gauge
memory/page_faults_rate         Page fault rate                                     gauge
memory/request                  Memory requested (settable in the pod YAML)         gauge
memory/usage                    Memory usage                                        gauge
memory/working_set              Memory working set                                  gauge
network/rx                      Total network bytes received                        cumulative
network/rx_errors               Network receive errors                              uncertain
network/rx_errors_rate          Network receive error rate                          gauge
network/rx_rate                 Network receive rate                                gauge
network/tx                      Total network bytes sent                            cumulative
network/tx_errors               Network transmit errors                             uncertain
network/tx_errors_rate          Network transmit error rate                         gauge
network/tx_rate                 Network transmit rate                               gauge
uptime                          Container uptime, in milliseconds                   gauge





