Kubernetes Log Collection: log-pilot + Kafka + Logstash + ES

Collecting K8s pod logs with log-pilot + Kafka + Logstash + ES

Collecting logs from applications deployed on K8s is not straightforward, especially when a single service runs multiple instances.
If you map the logs to an external path, all the instances write to the same file, and you cannot tell which instance a given line came from.
Alibaba's open-source log collector log-pilot automatically discovers each pod's log output, so this post records my hands-on walkthrough.

PS: log-pilot itself supports writing logs directly to ES, but my attempt at that failed, so I took a detour: write to Kafka first, push into ES via Logstash, and finally view everything in Kibana.
If you have time, it is worth investigating writing to ES directly from log-pilot.
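For anyone who wants to try that direct route, the log-pilot docs describe switching the output with environment variables on the DaemonSet below; this is a sketch per those docs, untested here, and the host value is a placeholder:

            - name: LOGGING_OUTPUT
              value: elasticsearch
            - name: ELASTICSEARCH_HOSTS
              # assumption per the log-pilot docs: comma-separated ES addresses
              value: 'es-host:9200'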

I. Installing log-pilot

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: log-pilot
  name: log-pilot
  namespace: #{choose a namespace}
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: log-pilot
  template:
    metadata:
      labels:
        app: log-pilot
    spec:
      containers:
        - env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: LOGGING_OUTPUT
              value: kafka
            - name: KAFKA_BROKERS
              # KAFKA_BROKERS accepts multiple brokers, comma-separated
              value: bigdataxxxx:9092
          image: 'registry.cn-hangzhou.aliyuncs.com/acs/log-pilot:0.9.7-filebeat'
          imagePullPolicy: IfNotPresent
          livenessProbe:
            # health check as in the official log-pilot example manifest
            # (assumption: the image ships the /pilot/healthz script)
            exec:
              command:
                - /pilot/healthz
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          name: log-pilot
          resources:
            limits:
              memory: 500Mi
            requests:
              cpu: 200m
              memory: 200Mi
          securityContext:
            capabilities:
              add:
                - SYS_ADMIN
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: sock
            - mountPath: /host
              name: root
              readOnly: true
            - mountPath: /var/lib/filebeat
              name: varlib
            - mountPath: /var/log/filebeat
              name: varlog
            - mountPath: /etc/localtime
              name: localtime
              readOnly: true
      dnsPolicy: ClusterFirst
      hostAliases:
        - hostnames:
            - bigdata3216
          ip: 172.16.32.16
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
        - hostPath:
            path: /var/run/docker.sock
            type: ''
          name: sock
        - hostPath:
            path: /
            type: ''
          name: root
        - hostPath:
            path: /var/lib/filebeat
            type: DirectoryOrCreate
          name: varlib
        - hostPath:
            path: /var/log/filebeat
            type: DirectoryOrCreate
          name: varlog
        - hostPath:
            path: /etc/localtime
            type: ''
          name: localtime
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
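
Apply the DaemonSet and confirm a log-pilot pod is running on every node (a quick sanity check; replace the namespace with whatever you chose above, and the file name is assumed):

kubectl apply -f log-pilot.yaml
kubectl -n <your-namespace> get pods -l app=log-pilot -o wide
# tail a collector to confirm filebeat started and is discovering containers
kubectl -n <your-namespace> logs -l app=log-pilot --tail=20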

II. Installing Logstash

1. Create the PV and PVC (process omitted)

Two PVCs are needed; this article creates pvc-logstash and pvc-logstash-yml. If you need a starting point, see the sketch below.
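
A minimal PVC sketch (assumptions: namespace smartgo to match the Logstash Deployment below, and a usable StorageClass in the cluster; repeat with name pvc-logstash-yml for the second claim):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-logstash
  namespace: smartgo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # storageClassName: <your-storage-class>  # assumption: set explicitly if there is no default class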

1.1 The first PVC, pvc-logstash, holds logstash.conf
logstash.conf is the main config file, defining the input and output:

input {
  kafka {
    bootstrap_servers => ["172.16.xx.2:9092"]
    client_id => "sink"
    group_id => "sink"
    auto_offset_reset => "latest"
    consumer_threads => 3
    decorate_events => true
    topics => ["aaaasinklog"]
    type => "sinklog"
    codec => "json"
    }
}
output {
  elasticsearch {
    hosts => ["172.16.xx.x6:9200"]
    index => "aaaasinklog-%{+YYYYMM}"
    }
}
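
Once Logstash is running (step 2 below), this pipeline file can be syntax-checked from inside the pod; a sketch, assuming the Deployment below in namespace smartgo:

kubectl -n smartgo exec deploy/logstash -- \
  bin/logstash --config.test_and_exit \
    -f /usr/share/logstash/pipeline/logstash.conf \
    --path.data /tmp/lstest  # separate data dir so the check does not clash with the running instance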

1.2 The second PVC, pvc-logstash-yml, holds logstash.yml and pipelines.yml

logstash.yml:

# avoid Logstash erroring out because it cannot find X-Pack authentication
xpack.monitoring.enabled: false

pipelines.yml:

# declare where the pipeline config files live
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"

2. Create the Logstash Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    elastic-app: logstash
  name: logstash
  namespace: smartgo
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      elastic-app: logstash
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        elastic-app: logstash
    spec:
      containers:
        - env:
            - name: TZ
              value: Asia/Shanghai
          image: 'docker.elastic.co/logstash/logstash:6.5.4'
          imagePullPolicy: IfNotPresent
          name: logstash
          ports:
            - containerPort: 8080
              protocol: TCP
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /usr/share/logstash/pipeline
              name: logstash-conf-volume
            - mountPath: /usr/share/logstash/config
              name: logstash-yml-volume
      dnsPolicy: ClusterFirst
      hostAliases:
        - hostnames:
            - bigdataxxx1
          ip: 172.16.xx.1
        - hostnames:
            - bigdataxxx2
          ip: 172.16.xx.2
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
        - name: logstash-conf-volume
          persistentVolumeClaim:
            claimName: pvc-logstash
        - name: logstash-yml-volume
          persistentVolumeClaim:
            claimName: pvc-logstash-yml
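
Apply the Deployment and watch the logs to confirm the pipeline starts and the Kafka consumer joins its group (file name assumed; the exact log wording varies by Logstash version):

kubectl apply -f logstash.yaml
kubectl -n smartgo get pods -l elastic-app=logstash
kubectl -n smartgo logs -l elastic-app=logstash -f --tail=50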

III. Trying log collection with a sample application

Use whatever app you like.
The one I picked simply receives Kafka messages and prints them to stdout.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vocar-sink-v2
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vocar-sink-v2  # assumption: matching labels were missing and are required for a valid Deployment
  template:
    metadata:
      labels:
        app: vocar-sink-v2
    spec:
      hostAliases:
        - hostnames:
            - bigdataxxx1
          ip: 172.16.xx.1
        - hostnames:
            - bigdataxxx2
          ip: 172.16.xx.2
      containers:
        - args:
            - ''
          image: 'registry.cn-hangzhou.aliyuncs.com/xxxxxx/vocar-sink-v2:0.0.1'
          env:
            - name: TZ
              value: Asia/Shanghai
            # The important part; everything else is incidental. aliyun_logs_${name}: name is the Kafka topic to produce to, here aaaasinklog
            - name: aliyun_logs_aaaasinklog
              value: stdout
          imagePullPolicy: Always
          name: vocar-sink-v2
      imagePullSecrets:
        - name: vocar-sink-v2
      restartPolicy: Always
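
Besides stdout, the value of aliyun_logs_${name} can also be a file path glob, for apps that log to files rather than the console; a sketch per the log-pilot README (topic name and path here are hypothetical, and per the docs the log directory must also be declared as a volume in the pod):

            - name: aliyun_logs_mytopic  # hypothetical topic
              value: /var/log/myapp/*.log  # collect matching files instead of stdout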

And that's it: ES should now contain an index named aaaasinklog-202003 with the log entries in it.
A log entry looks like this:

{
    "topic":"aaaasinklog",
    "docker_container":"k8s_vocar-sink-v2_vocar-sink-v2-5956d8695c-x77rz_smartgo_c27fe410-1331-4360-a516-a4432235abbf_0",
    "source":"/host/var/lib/docker/containers/bb1a63d04995af678590443099653c49989b2289f6f0370991d6f4ae64fa16a7/bb1a63d04995af678590443099653c49989b2289f6f0370991d6f4ae64fa16a7-json.log",
    "type":"sinklog",
    "offset":21855,
    "stream":"stdout",
    "beat":{
        "hostname":"log-pilot-hp7x6",
        "name":"log-pilot-hp7x6",
        "version":"6.1.1"
    },
    "message":"2020-03-26 15:34:45.606 INFO 1 --- [ntainer#0-0-C-1] com.reciever.Reciever : [get kafka-test info]iiiii",
    "k8s_container_name":"vocar-sink-v2",
    "k8s_pod_namespace":"xxxxxxx",
    "@timestamp":"2020-03-26T07:34:45.607Z",
    "index":"aaaasinklog",
    "@version":"1",
    "k8s_node_name":"bigdataxxxxx7",
    "prospector":{
        "type":"log"
    },
    "k8s_pod":"vocar-sink-v2-5956d8695c-x77rz"
}
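
To confirm end to end, query ES directly (host placeholder as above; Kibana can then pick the data up with an index pattern like aaaasinklog-*):

curl -s 'http://172.16.xx.x6:9200/_cat/indices/aaaasinklog-*?v'
curl -s 'http://172.16.xx.x6:9200/aaaasinklog-202003/_search?size=1&pretty'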