Kubernetes Log Collection: log-pilot + Kafka + Logstash + ES

Collecting Pod logs in K8s with log-pilot + Kafka + Logstash + ES

Collecting logs from apps deployed on K8s is awkward, especially when a single service runs multiple instances.
If you map the log path to an external location, all instances write to the same file, and you cannot tell which instance a given line came from.
Alibaba's open-source log collector, log-pilot, automatically detects each pod's log output; this post records my hands-on notes.

PS: log-pilot itself supports writing logs directly to ES, but my test of that failed, so I took a detour: write to Kafka first, ship to ES with Logstash, and finally browse in Kibana.
If you have time, it is worth digging into the direct log-pilot-to-ES output.
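For reference, the direct-to-ES variant should only differ in the DaemonSet env vars. A minimal sketch, assuming the variable names documented in the log-pilot README (untested here, which is exactly why this article takes the Kafka route; the ES host is a placeholder):

            - name: LOGGING_OUTPUT
              value: elasticsearch
            # host:port list; name per the log-pilot docs, verify against your image version
            - name: ELASTICSEARCH_HOSTS
              value: 'eshostxxxx:9200'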

I. Installing log-pilot

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: log-pilot
  name: log-pilot
  namespace: #{choose-a-namespace}
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: log-pilot
  template:
    metadata:
      labels:
        app: log-pilot
    spec:
      containers:
        - env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: LOGGING_OUTPUT
              value: kafka
            - name: KAFKA_BROKERS
              # KAFKA_BROKERS accepts multiple brokers, comma-separated
              value: bigdataxxxx:9092
          image: 'registry.cn-hangzhou.aliyuncs.com/acs/log-pilot:0.9.7-filebeat'
          imagePullPolicy: IfNotPresent
          # liveness probe; the exec healthz check follows the upstream log-pilot DaemonSet manifest
          livenessProbe:
            exec:
              command:
                - /pilot/healthz
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          name: log-pilot
          resources:
            limits:
              memory: 500Mi
            requests:
              cpu: 200m
              memory: 200Mi
          securityContext:
            capabilities:
              add:
                - SYS_ADMIN
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: sock
            - mountPath: /host
              name: root
              readOnly: true
            - mountPath: /var/lib/filebeat
              name: varlib
            - mountPath: /var/log/filebeat
              name: varlog
            - mountPath: /etc/localtime
              name: localtime
              readOnly: true
      dnsPolicy: ClusterFirst
      hostAliases:
        - hostnames:
            - bigdata3216
          ip: 172.16.32.16
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
        - hostPath:
            path: /var/run/docker.sock
            type: ''
          name: sock
        - hostPath:
            path: /
            type: ''
          name: root
        - hostPath:
            path: /var/lib/filebeat
            type: DirectoryOrCreate
          name: varlib
        - hostPath:
            path: /var/log/filebeat
            type: DirectoryOrCreate
          name: varlog
        - hostPath:
            path: /etc/localtime
            type: ''
          name: localtime
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
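Before moving on, it is worth verifying the collector end to end: check that a log-pilot pod is running on every node, then tail the destination topic with Kafka's stock console consumer. The namespace and the Kafka install path below are placeholders for your environment; the topic name comes from the aliyun_logs_* env var on your application pods (see part III):

kubectl -n #{choose-a-namespace} get pods -l app=log-pilot -o wide

# run on a broker host; adjust the path to your Kafka installation
bin/kafka-console-consumer.sh --bootstrap-server bigdataxxxx:9092 --topic aaaasinklog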

II. Installing Logstash

1. Create the PV and PVC (process omitted)

Two PVCs are needed; here I created pvc-logstash and pvc-logstash-yml.

1.1 The first PVC, pvc-logstash, holds logstash.conf

logstash.conf: the main config file, defining the input and output

input {
  kafka {
    bootstrap_servers => ["172.16.xx.2:9092"]
    client_id => "sink"
    group_id => "sink"
    auto_offset_reset => "latest"
    consumer_threads => 3
    decorate_events => true
    topics => ["aaaasinklog"]
    type => "sinklog"
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["172.16.xx.x6:9200"]
    index => "aaaasinklog-%{+YYYYMM}"
  }
}
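The pipeline above ships each message through untouched. If you later want to split the Spring Boot log line (see the sample document at the end of this article) into separate fields, a grok filter between input and output would do it. A minimal sketch; the pattern only covers the timestamp/level prefix visible in that sample and is an illustration, not a tested config:

filter {
  grok {
    # matches lines like "2020-03-26 15:34:45.606 INFO 1 --- [...] ..."
    match => { "message" => "%{TIMESTAMP_ISO8601:log_time}\s+%{LOGLEVEL:level}\s+%{GREEDYDATA:log_body}" }
  }
}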

1.2 The second PVC, pvc-logstash-yml, holds logstash.yml and pipelines.yml

logstash.yml:

# avoid Logstash failing with X-Pack authentication errors
xpack.monitoring.enabled: false

pipelines.yml (path.config below must point at the directory where the pipeline config is mounted; compare the pipeline volumeMount in the Deployment in step 2):

# declare where the pipeline config files live
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"

2. Create the Logstash Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    elastic-app: logstash
  name: logstash
  namespace: smartgo
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      elastic-app: logstash
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        elastic-app: logstash
    spec:
      containers:
        - env:
            - name: TZ
              value: Asia/Shanghai
          image: 'docker.elastic.co/logstash/logstash:6.5.4'
          imagePullPolicy: IfNotPresent
          name: logstash
          ports:
            - containerPort: 8080
              protocol: TCP
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /usr/share/logstash/pipeline
              name: logstash-conf-volume
            - mountPath: /usr/share/logstash/config
              name: logstash-yml-volume
      dnsPolicy: ClusterFirst
      hostAliases:
        - hostnames:
            - bigdataxxx1
          ip: 172.16.xx.1
        - hostnames:
            - bigdataxxx2
          ip: 172.16.xx.2
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
        - name: logstash-conf-volume
          persistentVolumeClaim:
            claimName: pvc-logstash
        - name: logstash-yml-volume
          persistentVolumeClaim:
            claimName: pvc-logstash-yml
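Once the pod is up, two quick checks confirm that the PVC contents landed where Logstash expects them and that the pipeline actually started (Logstash 6.x should log a "Pipeline started" message for pipeline id main):

kubectl -n smartgo exec deploy/logstash -- ls /usr/share/logstash/pipeline /usr/share/logstash/config

kubectl -n smartgo logs deploy/logstash | grep -i pipeline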

III. Deploy a Test App to Try Log Collection

Use whatever app you like.
Mine simply consumes messages and prints them to stdout.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vocar-sink-v2
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vocar-sink-v2   # placeholder label; must match the template labels below
  template:
    metadata:
      labels:
        app: vocar-sink-v2
    spec:
      hostAliases:
        - hostnames:
            - bigdataxxx1
          ip: 172.16.xx.1
        - hostnames:
            - bigdataxxx2
          ip: 172.16.xx.2
      containers:
        - args:
            - ''
          image: 'registry.cn-hangzhou.aliyuncs.com/xxxxxx/vocar-sink-v2:0.0.1'
          env:
            - name: TZ
              value: Asia/Shanghai
            # The key setting (the rest is boilerplate): aliyun_logs_${name},
            # where ${name} is the Kafka topic to write to, here aaaasinklog
            - name: aliyun_logs_aaaasinklog
              value: stdout
          imagePullPolicy: Always
          name: vocar-sink-v2
      imagePullSecrets:
        - name: vocar-sink-v2
      restartPolicy: Always
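stdout is not the only source log-pilot understands. By the log-pilot convention, the same env var can carry a file glob instead of the literal value stdout, and log-pilot will tail the matching files; in Kubernetes the log directory then has to live on a volume (an emptyDir is enough) so log-pilot can reach it from the host. A hypothetical variant of the env entry above, with a made-up path:

            - name: aliyun_logs_aaaasinklog
              # a file glob instead of "stdout"; the directory must be on a shared volume
              value: /var/log/app/*.log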

That's it. ES should now have an index named aaaasinklog-202003 with the log entries in it.
A sample document looks like this:

{
    "topic":"aaaasinklog",
    "docker_container":"k8s_vocar-sink-v2_vocar-sink-v2-5956d8695c-x77rz_smartgo_c27fe410-1331-4360-a516-a4432235abbf_0",
    "source":"/host/var/lib/docker/containers/bb1a63d04995af678590443099653c49989b2289f6f0370991d6f4ae64fa16a7/bb1a63d04995af678590443099653c49989b2289f6f0370991d6f4ae64fa16a7-json.log",
    "type":"sinklog",
    "offset":21855,
    "stream":"stdout",
    "beat":{
        "hostname":"log-pilot-hp7x6",
        "name":"log-pilot-hp7x6",
        "version":"6.1.1"
    },
    "message":"2020-03-26 15:34:45.606 INFO 1 --- [ntainer#0-0-C-1] com.reciever.Reciever : [get kafka-test info]iiiii",
    "k8s_container_name":"vocar-sink-v2",
    "k8s_pod_namespace":"xxxxxxx",
    "@timestamp":"2020-03-26T07:34:45.607Z",
    "index":"aaaasinklog",
    "@version":"1",
    "k8s_node_name":"bigdataxxxxx7",
    "prospector":{
        "type":"log"
    },
    "k8s_pod":"vocar-sink-v2-5956d8695c-x77rz"
}
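To double-check from the Elasticsearch side (assuming plain HTTP access to the cluster), the standard _cat and _search APIs are enough:

# list the monthly indices created by the Logstash output
curl 'http://172.16.xx.x6:9200/_cat/indices/aaaasinklog-*?v'

# fetch a few documents to eyeball the fields above
curl 'http://172.16.xx.x6:9200/aaaasinklog-202003/_search?size=3&pretty'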