ELFK Logging Platform Primer 5 --- Logstash + Filebeat Cluster Setup

ELFK Logging Platform Primer 1 --- Architecture Design

ELFK Logging Platform Primer 2 --- Elasticsearch Cluster Setup

ELFK Logging Platform Primer 3 --- Kibana Setup

ELFK Logging Platform Primer 4 --- Kafka Cluster Setup

ELFK Logging Platform Primer 5 --- Logstash + Filebeat Cluster Setup

In this chapter we cover setting up the Logstash + Filebeat cluster.

1. Environment Preparation

      Resource plan:

2. Logstash Cluster Deployment

      Perform the following steps on all three machines:

  • Extract the Logstash package:
# tar zxf logstash-6.7.1.tar.gz && mv logstash-6.7.1/ /usr/local/logstash

# mkdir /usr/local/logstash/conf.d
  • Edit the Logstash configuration:
# vim /usr/local/logstash/config/logstash.yml

http.host: "192.168.0.0"                    # use this machine's IP
http.port: 9600
  • Configure the Logstash service:

       Create the service environment file:

# vim /etc/default/logstash

LS_HOME="/usr/local/logstash"
LS_SETTINGS_DIR="/usr/local/logstash"
LS_PIDFILE="/usr/local/logstash/run/logstash.pid"
LS_USER="elk"
LS_GROUP="elk"
LS_GC_LOG_FILE="/usr/local/logstash/logs/gc.log"
LS_OPEN_FILES="16384"
LS_NICE="19"
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"

      Create the systemd unit file:

# vim /etc/systemd/system/logstash.service

[Unit]
Description=logstash

[Service]
Type=simple
User=elk
Group=elk
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/local/logstash/bin/logstash "--path.settings" "/usr/local/logstash/config" "--path.config" "/usr/local/logstash/conf.d"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target

      Manage the service:

# mkdir /usr/local/logstash/{run,logs} && touch /usr/local/logstash/run/logstash.pid

# touch /usr/local/logstash/logs/gc.log && chown -R elk:elk /usr/local/logstash

# systemctl daemon-reload

# systemctl enable logstash 
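
      Optional: smoke-test the installation with a throwaway inline pipeline. Type a line after it starts; if an event is echoed back, the install works (press Ctrl-C to exit; any root-owned files this creates are re-owned by the chown step in the Kafka section below):

# /usr/local/logstash/bin/logstash -e 'input { stdin { } } output { stdout { } }'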

  3. Filebeat Deployment

  • Extract the Filebeat package:
# tar zxf filebeat-6.2.4-linux-x86_64.tar.gz && mv filebeat-6.2.4-linux-x86_64 /usr/local/filebeat
  • Configure the Filebeat service:

       Create the systemd unit file:

# vim /usr/lib/systemd/system/filebeat.service

[Unit]
Description=Filebeat sends log files to Logstash or directly to Elasticsearch.
Documentation=https://www.elastic.co/products/beats/filebeat
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/filebeat/filebeat -c /usr/local/filebeat/filebeat.yml -path.home /usr/local/filebeat -path.config /usr/local/filebeat -path.data /usr/local/filebeat/data -path.logs /usr/local/filebeat/logs
Restart=always

[Install]
WantedBy=multi-user.target

      Manage the service:

# mkdir /usr/local/filebeat/{data,logs}

# systemctl daemon-reload

# systemctl enable filebeat
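
      Once filebeat.yml has been filled in (next section), Filebeat's built-in config test can validate it before the service is started; it prints "Config OK" when the file parses:

# /usr/local/filebeat/filebeat test config -c /usr/local/filebeat/filebeat.yml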

 4. Integrating with Kafka

      Here we take collecting the log file /app/log/app.log as an example:

  • Configure Filebeat:
# vim /usr/local/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /app/log/app.log                 # path of the log file to collect
  fields:
    log_topics: app-service-log
  
output.kafka:
  enabled: true
  hosts: ["192.168.0.0:9092","192.168.0.1:9092","192.168.0.2:9092"]
  topic: '%{[fields][log_topics]}'
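
      Because the topic is resolved per event from fields.log_topics, one Filebeat instance can fan different logs out to different Kafka topics. A minimal sketch (the nginx path and topic name below are hypothetical, purely for illustration; each topic still has to be created on the Kafka cluster first):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /app/log/app.log
  fields:
    log_topics: app-service-log
- type: log
  enabled: true
  paths:
    - /app/log/nginx-access.log          # hypothetical second log file
  fields:
    log_topics: nginx-access-log         # routed to its own topic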
  • Create the topic on the Kafka cluster:
# /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.0.0:2181 --replication-factor 3 --partitions 1 --topic app-service-log

Created topic app-service-log.
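
      After the services are started (see below), you can confirm events are actually reaching the topic by tailing it with Kafka's bundled console consumer:

# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.0:9092 --topic app-service-log --from-beginning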
  • Configure Logstash:
# vim /usr/local/logstash/conf.d/messages.conf

input {
    kafka {
        bootstrap_servers => "192.168.0.0:9092,192.168.0.1:9092,192.168.0.2:9092"
        group_id => "app-service"                   # consumer group id; defaults to "logstash"
        topics => ["app-service-log"]
        auto_offset_reset => "latest"               # start consuming from the latest offset
        consumer_threads => 5                       # number of consumer threads
        decorate_events => true                     # decorate events with Kafka metadata: source topic, partition, offset and consumer group
        type => "app-service"
    }
}

output {
    elasticsearch {
        hosts => ["192.168.0.0:9200","192.168.0.1:9200","192.168.0.2:9200"]
        index => "sys_messages.log-%{+YYYY.MM.dd}"
    }
}
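
      The pipeline above forwards events to Elasticsearch unparsed. If your application writes JSON log lines, you could optionally add a filter block between input and output to parse them into fields (a sketch using the standard json filter; adjust to your actual log format):

filter {
    if [type] == "app-service" {
        json {
            source => "message"          # parse the raw log line into JSON fields
        }
    }
}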

      Install the logstash-input-kafka plugin:

# chown -R elk:elk /usr/local/logstash

# /usr/local/logstash/bin/logstash-plugin install logstash-input-kafka
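
      Before starting the service, Logstash can validate the pipeline file without actually running it:

# /usr/local/logstash/bin/logstash --path.settings /usr/local/logstash/config -f /usr/local/logstash/conf.d/messages.conf --config.test_and_exit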
  • Start the services:
# systemctl start filebeat                  # start filebeat

# systemctl start logstash                  # start logstash
  • Open Kibana in a browser and create an index pattern to query the new logs.
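
      If the index pattern doesn't appear, check directly on Elasticsearch whether the daily indices are being created:

# curl "http://192.168.0.0:9200/_cat/indices?v" | grep sys_messages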

  5. Scheduled Elasticsearch Data Cleanup

      In production the log volume is large, and keeping everything in Elasticsearch indefinitely risks filling up the disk. So how do we keep only the most recent two days of logs?

      We use crontab for this: the Linux crontab facility runs commands on a fixed schedule.

  • Create an executable shell script:
# vi /usr/local/elasticsearch/es-index-clear.sh

#!/bin/bash
LAST_DATE=`date -d "-2 days" "+%Y.%m.%d"`                    # the date two days ago, in index-suffix format
curl -XDELETE "http://192.168.0.0:9200/*-${LAST_DATE}"       # delete the indices from two days ago

# chmod 777 es-index-clear.sh
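
      You can run the script once by hand to verify it; when an index from two days ago exists, Elasticsearch acknowledges the deletion:

# /usr/local/elasticsearch/es-index-clear.sh
{"acknowledged":true}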
  • Configure the schedule and script path:
# crontab -e

10 00 * * * /usr/local/elasticsearch/es-index-clear.sh  # run at 00:10 every day
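
      Confirm the entry was saved:

# crontab -l
10 00 * * * /usr/local/elasticsearch/es-index-clear.sh  # run at 00:10 every day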
  • Restart the cron service:
# service crond restart

 At this point the ELFK + Kafka cluster is fully up; enjoy the speed and convenience the log analysis platform brings.
