k8s cluster log collection with the EFK stack
References
http://tonybai.com/2017/03/03/implement-kubernetes-cluster-level-logging-with-fluentd-and-elasticsearch-stack/
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
https://t.goodrain.com/t/k8s/242
http://logz.io/blog/kubernetes-log-analysis/
http://blog.csdn.net/gsying1474/article/details/52426366
http://www.cnblogs.com/zhangjiayong/p/6203025.html
http://stackoverflow.com/questions/41686681/fluentd-es-v1-22-daemonset-doesnt-create-any-pod
The documents below are a simple but systematic walkthrough of the k8s 1.5.x series: cluster deployment, POD creation, DNS resolution, dashboard, monitoring, reverse proxy, storage, and log collection. Mutual TLS with self-built certificates is not very practical, so it is not covered. The whole series deploys from binary releases and applies to 1.5.2, 1.5.3, 1.5.4 and later versions; just remember to keep the sample URLs on github up to date.
k8s cluster installation and deployment
http://jerrymin.blog.51cto.com/3002256/1898243
k8s cluster RC, SVC, and POD deployment
http://jerrymin.blog.51cto.com/3002256/1900260
k8s cluster addons kubernetes-dashboard and kube-dns deployment
http://jerrymin.blog.51cto.com/3002256/1900508
k8s cluster monitoring addon heapster deployment
http://jerrymin.blog.51cto.com/3002256/1904460
k8s cluster reverse-proxy and load-balancer deployment
http://jerrymin.blog.51cto.com/3002256/1904463
k8s cluster volume mounts: nfs
http://jerrymin.blog.51cto.com/3002256/1906778
k8s cluster volume mounts: glusterfs
http://jerrymin.blog.51cto.com/3002256/1907274
k8s cluster log collection: ELK architecture
http://jerrymin.blog.51cto.com/3002256/1907282
Implementation
This article follows the officially recommended k8s approach: when the cluster starts, a Fluentd agent runs on every node, collects that node's logs, and ships them to Elasticsearch.
Concretely, each agent mounts the /var/lib/docker/containers directory and uses fluentd's tail plugin to follow every container's log file, sending each entry directly to Elasticsearch.
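The tail-and-forward pipeline described above lives in the fluentd image's td-agent.conf. A simplified sketch of the two relevant sections follows; the paths, tags, and service name follow the upstream fluentd-elasticsearch addon's conventions, so verify them against the config actually baked into your image:

```
# Follow every container log; /var/log/containers/*.log are symlinks into
# the mounted /var/lib/docker/containers directory.
<source>
  type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  format json
  tag kubernetes.*
  read_from_head true
</source>

# Ship everything to the in-cluster Elasticsearch service in
# logstash-* index format (which is why Kibana's default pattern works).
<match **>
  type elasticsearch
  host elasticsearch-logging
  port 9200
  logstash_format true
</match>
```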
Pull the images in advance:
[root@k8s-node1 ~]# docker pull gcr.io/google_containers/elasticsearch:v2.4.1
[root@k8s-node1 ~]# docker pull gcr.io/google_containers/kibana:v4.6.1
[root@k8s-node1 ~]# docker pull gcr.io/google_containers/fluentd-elasticsearch:1.22
[root@k8s-node1 ~]# docker images |grep el
registry.access.redhat.com/rhel7/pod-infrastructure latest 34d3450d733b 5 weeks ago 205 MB
gcr.io/google_containers/fluentd-elasticsearch 1.22 7896bdf952bf 8 weeks ago 266.2 MB
gcr.io/google_containers/elasticsearch
[root@k8s-master fluentd-elasticsearch]# pwd
/usr/local/kubernetes/cluster/addons/fluentd-elasticsearch
[root@k8s-master fluentd-elasticsearch]# ls
es-controller.yaml es-service.yaml fluentd-es-image kibana-image
es-image fluentd-es-ds.yaml kibana-controller.yaml kibana-service.yaml
Create elasticsearch and kibana first:
[root@k8s-master fluentd-elasticsearch]# kubectl create -f es-controller.yaml
[root@k8s-master fluentd-elasticsearch]# kubectl create -f es-service.yaml
[root@k8s-master fluentd-elasticsearch]# kubectl create -f kibana-controller.yaml
[root@k8s-master fluentd-elasticsearch]# kubectl create -f kibana-service.yaml
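For reference, es-service.yaml is what gives fluentd and Kibana a stable in-cluster address for Elasticsearch. A sketch of its shape (field values follow the upstream addon; check them against your own copy):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
spec:
  ports:
  # 9200 is Elasticsearch's HTTP port; fluentd and Kibana both talk to it.
  - port: 9200
    protocol: TCP
  selector:
    k8s-app: elasticsearch-logging
```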
Finally, create fluentd:
[root@k8s-master fluentd-elasticsearch]# kubectl create -f fluentd-es-ds.yaml
error: error validating "fluentd-es-ds.yaml": error validating data: found invalid field tolerations for v1.PodSpec; if you choose to ignore these errors, turn
validation off with --validate=false
The fix is to comment out these three lines in fluentd-es-ds.yaml and re-run the create:
#tolerations:
#- key: "node.alpha.kubernetes.io/ismaster"
#  effect: "NoSchedule"
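For context, those fields sit under spec.template.spec in fluentd-es-ds.yaml, right next to the nodeSelector that causes the next problem. A sketch of the surrounding structure (field names follow the upstream addon's DaemonSet):

```yaml
spec:
  template:
    spec:
      # Pods are only scheduled onto nodes carrying this label.
      nodeSelector:
        alpha.kubernetes.io/fluentd-ds-ready: "true"
      # Not understood by the 1.5.x apiserver's v1.PodSpec validation,
      # hence the "invalid field tolerations" error above.
      #tolerations:
      #- key: "node.alpha.kubernetes.io/ismaster"
      #  effect: "NoSchedule"
```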
Even after the create succeeded, the DaemonSet scheduled no pods. The solution, from the stackoverflow thread referenced above:
I found the solution after studying https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml
There is nodeSelector: set as alpha.kubernetes.io/fluentd-ds-ready: "true"
But the nodes don't have a label like that. What I did was add the label, as below, to one node to check whether it's working.
kubectl label nodes {node_name} alpha.kubernetes.io/fluentd-ds-ready="true"
[root@k8s-master fluentd-elasticsearch]# kubectl label nodes k8s-node1 alpha.kubernetes.io/fluentd-ds-ready="true"
node "k8s-node1" labeled
[root@k8s-master fluentd-elasticsearch]# kubectl label nodes k8s-node2 alpha.kubernetes.io/fluentd-ds-ready="true"
node "k8s-node2" labeled
[root@k8s-master fluentd-elasticsearch]# kubectl label nodes k8s-node3 alpha.kubernetes.io/fluentd-ds-ready="true"
node "k8s-node3" labeled
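The three label commands above can also be generated in one pass. A small sketch, using this cluster's node names (adjust them for yours); it only prints the commands, so nothing is applied until you pipe the output to sh:

```shell
#!/bin/sh
# Emit one "kubectl label" command per node so the fluentd DaemonSet's
# nodeSelector matches every worker.
label_cmds() {
  for n in "$@"; do
    printf 'kubectl label nodes %s alpha.kubernetes.io/fluentd-ds-ready=true\n' "$n"
  done
}
# Print the commands; append "| sh" to actually run them against the cluster.
label_cmds k8s-node1 k8s-node2 k8s-node3
```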
[root@k8s-master fluentd-elasticsearch]# kubectl get pods -n kube-system |grep fluentd
fluentd-es-v1.22-95ht2 1/1 Running 0 1m
fluentd-es-v1.22-k905f 1/1 Running 0 1m
fluentd-es-v1.22-w9q88 1/1 Running 0 1m
Then open Kibana and click Create to set up the log index pattern:
http://172.17.3.20:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana#
The default index pattern is logstash-*.