Filebeat and Logstash: reading multiple topics and outputting to an ES cluster

1. Configure filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
# Add a fields block with a log_topics entry here. log_topics: nginx and the log_topics: httpd below are used to tell the two logs apart; the values can be anything you like.
  fields:
    log_topics: nginx
  # It is best to add the following three json.* lines to every input. Otherwise, once the log has passed through Kafka and is written to ES, each event shows up as one raw message string, which makes filtering difficult (verified in practice).
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true

- type: log
  enabled: true
  paths:
    - /var/log/httpd/access_log.ls_json
  fields:
    log_topics: httpd
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true

# The output.kafka configuration follows
#=========================== Kafka Output =====================
output.kafka:
  enabled: true
  hosts: ["10.1.1.17:9092"]
  # The topic name is taken from the value of the custom field added above
  topic: '%{[fields][log_topics]}'
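The json.* settings above only help if nginx (and Apache) are already writing their access logs as one JSON object per line. The web-server side of that is not shown in this article, but for reference a minimal nginx sketch might look like the following; the format name and every field name in it (including requesttime, which the Logstash date filter below relies on) are assumptions, not the original setup:

# Hypothetical example only - goes in the http {} block of nginx.conf.
# escape=json requires nginx >= 1.11.8; field names are placeholders.
log_format ls_json escape=json '{"requesttime":"$time_local",'
                               '"clientip":"$remote_addr",'
                               '"request":"$request",'
                               '"status":"$status",'
                               '"bodysize":"$body_bytes_sent",'
                               '"useragent":"$http_user_agent"}';

access_log /usr/local/nginx/logs/access.log ls_json;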

Once the configuration is done, restart the filebeat service. If there are no errors, the two topics nginx and httpd will be created in Kafka:

[root@elk kafka]# bin/kafka-topics.sh  --zookeeper 10.1.1.17:2181 --list
__consumer_offsets
httpd
nginx
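
As a quick sanity check that JSON events are actually flowing, you can consume a few messages from one of the topics with the console consumer that ships with Kafka; each message should print as a JSON document containing a fields.log_topics value:

[root@elk kafka]# bin/kafka-console-consumer.sh --bootstrap-server 10.1.1.17:9092 --topic nginx --from-beginning --max-messages 3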

With the topics created successfully, the next step is to configure Logstash:

2. Configure Logstash

[root@elk kafka]# cat /etc/logstash/conf.d/kafka-es.conf 
input {
    kafka  {
        codec => "json"
        # List both topic names here, separated by commas
        topics => ["nginx","httpd"]
        bootstrap_servers => "10.1.1.17:9092"
        auto_offset_reset => "latest"
   }
}

filter {
# The filter section also needs if conditionals so that each topic can be handled differently.
# For each topic the steps are the same: the date filter parses requesttime into @timestamp,
# the ruby + mutate blocks derive a yyyy.MM.dd-style timestamp string (shifted to UTC+8) that
# is later used as the date part of the index name, and remove_field drops unneeded fields.
#topic=nginx
    if [fields][log_topics] == "nginx" {
        date {
            match => ["requesttime", "dd/MMM/yyyy:HH:mm:ss Z +08:00"]
            target => "@timestamp"
        }
        ruby {
            code => "event.set('timestamp', event.get('@timestamp').time.utc+8*60*60)"
        }
        mutate {
            convert => ["timestamp", "string"]
            gsub => ["timestamp", "T([\S\s]*?)Z", ""]
            gsub => ["timestamp", "-", "."]
        }
        mutate {
            remove_field => ["_index","_id","_type","_version","_score","host","log","referer","input","path","agent"]
        }
    }
#topic=httpd
    if [fields][log_topics] == "httpd" {
        date {
            match => ["requesttime", "dd/MMM/yyyy:HH:mm:ss Z +08:00"]
            target => "@timestamp"
        }
        ruby {
            code => "event.set('timestamp', event.get('@timestamp').time.utc+8*60*60)"
        }
        mutate {
            convert => ["timestamp", "string"]
            gsub => ["timestamp", "T([\S\s]*?)Z", ""]
            gsub => ["timestamp", "-", "."]
        }
        mutate {
            remove_field => ["_index","_id","_type","_version","_score","host","log","referer","input","path","agent"]
        }
    }
}

output {
    elasticsearch {
        hosts => ["10.1.1.17:9200"]
        # For the index name we reference the field configured in filebeat, so two date-suffixed indices are created automatically
        index => '%{[fields][log_topics]}+%{timestamp}'
    }
} 
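
Before starting Logstash it is worth letting it validate this file first; the binary path below is the default for package installs and may differ on your system:

[root@elk kafka]# /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/kafka-es.conf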

3. View the indices in Kibana

Then start logstash. If there are no errors, the newly created indices nginx+2020.03.11 and httpd+2020.03.11 appear under Management - Index Management in Kibana. Create an index pattern for each (nginx+2020.03.11 and httpd+2020.03.11 respectively) and then look at the logs for each index pattern.
The httpd logs all end up under the httpd index, and the nginx logs all end up under the nginx index.
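The same can be confirmed from the command line against Elasticsearch (the date suffix in the index names will of course be the current day):

[root@elk kafka]# curl -s '10.1.1.17:9200/_cat/indices/nginx*,httpd*?v'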
At this point, the configuration is complete!
