Deploying the EFK Stack to Collect Kubernetes Logs

1. Download the EFK installation files

1.1 Download the EFK YAML manifests

[root@k8s-master01 k8s]# cd efk-7.10.2/
[root@k8s-master01 efk-7.10.2]# ls
create-logging-namespace.yaml  es-service.yaml  es-statefulset.yaml  filebeat  fluentd-es-configmap.yaml  fluentd-es-ds.yaml  kafka  kibana-deployment.yaml  kibana-service.yaml 

1.2 Create the Namespace required by EFK

[root@k8s-master01 efk-7.10.2]# kubectl create -f create-logging-namespace.yaml
namespace/logging created
[root@k8s-master01 efk-7.10.2]# kubectl get ns
logging                           Active   3s

1.3 Create the Elasticsearch Service

[root@k8s-master01 efk-7.10.2]# kubectl create -f es-service.yaml
service/elasticsearch-logging created
[root@k8s-master01 efk-7.10.2]# kubectl get svc -n logging
NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch-logging   ClusterIP   None         <none>        9200/TCP,9300/TCP   37s
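
The transcript shows a headless Service (CLUSTER-IP `None`) exposing ports 9200 and 9300. A sketch of what es-service.yaml roughly contains is shown below; the label/selector names are assumptions based on the common Kubernetes addon manifests, so check them against your own file:

```yaml
# Sketch of es-service.yaml (field values are assumptions; verify against your file).
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: logging
spec:
  clusterIP: None            # headless: StatefulSet pods get stable DNS names
  selector:
    k8s-app: elasticsearch-logging
  ports:
  - name: db                 # HTTP REST API
    port: 9200
    protocol: TCP
  - name: transport          # inter-node cluster traffic
    port: 9300
    protocol: TCP
```

A headless Service is what lets each StatefulSet pod be addressed individually (e.g. `elasticsearch-logging-0.elasticsearch-logging.logging.svc`).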

2. Create the Elasticsearch cluster

This step creates the ServiceAccount, ClusterRole, ClusterRoleBinding, and the StatefulSet that runs the Elasticsearch containers.

[root@k8s-master01 efk-7.10.2]# kubectl create -f es-statefulset.yaml
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
[root@k8s-master01 efk-7.10.2]# kubectl get  statefulset -n logging
NAME                    READY   AGE
elasticsearch-logging   0/1     75s
[root@k8s-master01 efk-7.10.2]# kubectl get  pod  -n logging
NAME                      READY   STATUS            RESTARTS   AGE
elasticsearch-logging-0   0/1     PodInitializing   0          83s
[root@k8s-master01 efk-7.10.2]# kubectl get  ClusterRoleBinding -n logging | grep elas
elasticsearch-logging                                  ClusterRole/elasticsearch-logging                                                  4m54s
[root@k8s-master01 efk-7.10.2]# kubectl get  ClusterRole -n logging | grep elas
elasticsearch-logging                                                  2022-05-23T03:35:19Z
[root@k8s-master01 efk-7.10.2]# kubectl get  sa -n logging
NAME                    SECRETS   AGE
default                 1         13m
elasticsearch-logging   1         5m32s
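
The `PodInitializing` status seen above is normal: the manifest typically runs an init container first. The following is a sketch of the key fields of es-statefulset.yaml, not the full manifest; image tags and label names are assumptions based on the upstream fluentd-elasticsearch addon:

```yaml
# Key parts of es-statefulset.yaml (a sketch; values are assumptions).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: logging
spec:
  serviceName: elasticsearch-logging   # binds to the headless Service created in 1.3
  replicas: 1
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
    spec:
      serviceAccountName: elasticsearch-logging
      initContainers:
      # Elasticsearch requires vm.max_map_count >= 262144 on the host;
      # this init container is why the pod sits in PodInitializing briefly.
      - name: elasticsearch-logging-init
        image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-logging
        image: quay.io/fluentd_elasticsearch/elasticsearch:v7.10.2
        ports:
        - containerPort: 9200
          name: db
        - containerPort: 9300
          name: transport
```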

3. Create Kibana

[root@k8s-master01 efk-7.10.2]# kubectl create -f   kibana-deployment.yaml  -f kibana-service.yaml
deployment.apps/kibana-logging created
service/kibana-logging created
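
Section 5 below mentions accessing Kibana at nodeIP:30700, which suggests kibana-service.yaml exposes a NodePort. A sketch, with the NodePort value inferred from that note (it may differ in your file):

```yaml
# Sketch of kibana-service.yaml; nodePort 30700 is inferred from the access
# note in section 5 and may not match your manifest.
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: logging
spec:
  type: NodePort
  selector:
    k8s-app: kibana-logging
  ports:
  - port: 5601          # Kibana's default listen port
    targetPort: 5601
    nodePort: 30700
```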

4. Deploy the Fluentd DaemonSet

4.1 Create the ServiceAccount, ClusterRole, and ClusterRoleBinding

[root@k8s-master01 efk-7.10.2]# kubectl create -f  fluentd-es-ds.yaml
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v3.1.1 created

4.2 Create the ConfigMap that Fluentd depends on

[root@k8s-master01 efk-7.10.2]# kubectl create  -f fluentd-es-configmap.yaml
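
For reference, a minimal sketch of what this ConfigMap contains: a tail source that reads container logs and an output that ships them to the `elasticsearch-logging` Service. The ConfigMap name and data keys are assumptions based on the upstream addon manifests; the real file is considerably longer:

```yaml
# Minimal sketch of fluentd-es-configmap.yaml (names/keys are assumptions).
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-es-config-v0.2.1
  namespace: logging
data:
  containers.input.conf: |-
    # Tail every container log file on the node
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json
      </parse>
    </source>
  output.conf: |-
    # Ship everything to the Elasticsearch Service created earlier
    <match **>
      @type elasticsearch
      host elasticsearch-logging
      port 9200
      logstash_format true
    </match>
```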

Note: decide according to your own environment whether you want to collect logs from every node. If you do, simply comment out the nodeSelector in fluentd-es-ds.yaml.

For verification purposes, only k8s-node01 is labeled here, so logs are collected from that node alone:

#kubectl label node k8s-node01 fluentd=true
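
The matching snippet in fluentd-es-ds.yaml that this label satisfies looks roughly like the following (a sketch; the label key `fluentd` matches the command above):

```yaml
# Excerpt of the pod spec in fluentd-es-ds.yaml (sketch). With this selector
# the DaemonSet only schedules onto nodes labeled fluentd=true; comment out
# the nodeSelector block to collect logs from every node.
spec:
  template:
    spec:
      nodeSelector:
        fluentd: "true"
```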

 

You can now see that the whole EFK environment is up and running.

5. Nginx proxy

This step is optional: if you do not want to configure a domain name, you can access Kibana directly at nodeIP:30700.

An nginx proxy configuration is given here for reference.
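
A reference reverse-proxy configuration is sketched below; the `server_name` and the upstream address are placeholders, so substitute your own domain and a real node IP plus the Kibana NodePort:

```nginx
# Reference nginx reverse proxy for Kibana (a sketch; domain and upstream
# address are placeholders).
server {
    listen 80;
    server_name kibana.example.com;          # placeholder domain

    location / {
        proxy_pass http://192.168.1.10:30700;   # nodeIP:NodePort of kibana-logging
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```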

Verify that the proxied domain is reachable. In the Kibana UI, add data and create an index pattern; the logs are then displayed.

END!
