Building an ELK Log Analysis Platform - Updated July 2020

1 Requirement

Our project is built with Spring Cloud, and we need a way to view the logs printed by each module.

2 Solution

ELK solves this nicely. The official site describes it as follows:

ELK is an acronym for three open source projects: ElasticSearch, LogStash, and Kibana.

    ElasticSearch is a search and analytics engine.

    LogStash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" such as ElasticSearch.

    Kibana lets users visualize data in ElasticSearch with charts and graphs.

    Beats is a free and open platform for single-purpose data shippers (for example, collecting from files). They send data from hundreds or thousands of machines and systems to Logstash or Elasticsearch.

The official diagram shows LogStash and Beats collecting data at the bottom, feeding it to ElasticSearch above it, with Kibana on top for visualization.

3 Setup

The installation environment used here is CentOS 7.

3.1 Setting up ElasticSearch

3.1.1 Download and install

ElasticSearch is distributed in several package formats: zip, deb, rpm, msi, Docker image, tar.gz, and so on.

Here we use the tar.gz archive:

# Download the elasticsearch tar.gz archive
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.7.1-linux-x86_64.tar.gz
# Extract it
tar -xzf elasticsearch-7.7.1-linux-x86_64.tar.gz
# Change into the elasticsearch install directory
cd elasticsearch-7.7.1/

3.1.2 Start ElasticSearch and verify the service is running

ElasticSearch cannot be started as root; if you try, you will get the following exception:

    org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root

The reason is security: ElasticSearch can accept and execute user-supplied scripts, so running it as root is too dangerous.

You therefore need to create a dedicated user first and start ElasticSearch as that user:

# Create a new user
adduser elasticsearch
# Set the password (entered twice; here we use "elasticsearch", which triggers a
# BAD PASSWORD warning because it contains the user name)
passwd elasticsearch
# Give the new user ownership of the install directory
chown -R elasticsearch elasticsearch-7.7.1

# Option 1: run in the foreground (logs go to the console; Ctrl+C stops it)
./bin/elasticsearch

# Option 2: run in the background (find the pid with ps -ef | grep elasticsearch,
# then kill <pid> to stop the service)
./bin/elasticsearch -d

# Option 3 (recommended): run in the background and write a pid file (a file named
# "pid" is created in the current directory; it holds the same pid that
# ps -ef | grep elasticsearch would show)
./bin/elasticsearch -d -p pid
# Stop the service with:
# pkill -F pid

Verify the service is up:

# Check that ElasticSearch is running
curl localhost:9200

# A healthy node returns JSON like this:
{
  "name" : "544b2c4b8707",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_iKthvB9RAOMXUPjWDUNIA",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "ad56dce891c901a492bb1ee393f12dfff473a423",
    "build_date" : "2020-05-28T16:30:01.040088Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
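In scripts it is often handy to pull just the version number out of this response. A minimal sketch, assuming the response has been saved to a local file (the file name resp.json is illustrative; against a live node you would use `curl -s localhost:9200 > resp.json`):

```shell
# Save a trimmed copy of the response shown above
cat > resp.json <<'EOF'
{
  "name" : "544b2c4b8707",
  "version" : { "number" : "7.7.1" }
}
EOF
# Extract the version number with grep/sed (jq would be cleaner if available)
grep '"number"' resp.json | sed 's/.*: *"\([^"]*\)".*/\1/'
```

For the response above this prints 7.7.1.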

3.1.3 Common problems

Problem 1: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

Check the current limits with:

ulimit -Hn
ulimit -Sn

Edit /etc/security/limits.conf and add the lines below; the change takes effect after the user logs out and back in:

*               soft    nofile          65535
*               hard    nofile          65535

Problem 2: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

# Edit the config file
vi /etc/sysctl.conf
# Add the following line
vm.max_map_count=262144
# Save and exit vi

# Apply the new setting
sysctl -p
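You can confirm the kernel actually picked up the new value by reading it back:

```shell
# Read the live value from the kernel; after `sysctl -p` this should print
# 262144 (on an untuned system you will typically see 65530)
cat /proc/sys/vm/max_map_count
```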

Problem 3: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

This appears after editing elasticsearch.yml to bind ElasticSearch to an external ip and port. Setting cluster.initial_master_nodes fixes it:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-es-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

# CORS settings so that elasticsearch-head can access the cluster
http.cors.enabled: true
http.cors.allow-origin: "*"

3.2 Setting up LogStash and FileBeat

A LogStash pipeline has two required elements, input and output, and one optional element, filter.

Input plugins ingest data from a source, filters modify the data as you specify, and output plugins write the data to a destination (such as ElasticSearch).
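A minimal pipeline showing all three elements might look like this (stdin/stdout are used purely for illustration, and the added field is made up):

```conf
input  { stdin { } }
filter { mutate { add_field => { "source" => "demo" } } }
output { stdout { codec => rubydebug } }
```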

3.2.1 Configure FileBeat to ship logs to LogStash

FileBeat is a lightweight, resource-friendly tool that collects logs from log files on each machine and sends them to LogStash.

The typical deployment is one FileBeat instance on every server whose logs you want to collect.

Install and configure it as follows:

# Download
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.1-linux-x86_64.tar.gz
# Extract
tar -zxvf filebeat-7.7.1-linux-x86_64.tar.gz
# Change into the filebeat directory
cd filebeat-7.7.1-linux-x86_64
# Edit the filebeat config file
vi filebeat.yml

Edit filebeat.yml as follows:

# Configure filebeat to read from log files
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
# Configure filebeat to send its output to logstash
output.logstash:
  hosts: ["localhost:5044"]
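Since the goal is to tell the Spring Cloud modules apart, it can also help to attach an identifying field per input; a sketch (the service name and log path below are made up for illustration):

```yaml
filebeat.inputs:
- type: log
  paths:
    - /var/log/order-service/*.log   # hypothetical per-module log path
  fields:
    service: order-service           # custom field to filter on in Kibana
  fields_under_root: true
```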

Start filebeat:

./filebeat -e -c filebeat.yml -d "publish"

3.2.2 Install LogStash

# Download
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.7.1.tar.gz
# Extract
tar -zxvf logstash-7.7.1.tar.gz
# Change into the logstash directory
cd logstash-7.7.1

Create a config file named config/logstash-filebeat.conf (you can start by copying config/logstash-sample.conf):

# Take input from beats
input {
  beats {
    port => 5044
  }
}

# No filtering for now
filter {
}

# Send the data to ElasticSearch
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
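The index option uses Logstash's sprintf-style field references: `[@metadata][beat]` and `[@metadata][version]` are supplied by Filebeat (here `filebeat` and `7.7.1`), and `%{+YYYY.MM.dd}` is the event's date. The expansion can be sketched in plain shell:

```shell
# Simulate how the index name is built (values as shipped by Filebeat 7.7.1;
# Logstash uses each event's @timestamp, here we substitute today's date)
beat="filebeat"
version="7.7.1"
index="${beat}-${version}-$(date +%Y.%m.%d)"
echo "$index"   # e.g. filebeat-7.7.1-2020.06.14
```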

Start LogStash:

nohup ./bin/logstash -f config/logstash-filebeat.conf &

3.3 Setting up Kibana

# Download
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.7.1-linux-x86_64.tar.gz
# Extract
tar -zxvf kibana-7.7.1-linux-x86_64.tar.gz
# Change into the kibana install directory
cd kibana-7.7.1-linux-x86_64

Edit config/kibana.yml:

# Port to serve on
server.port: 5601
# Bind to all interfaces so Kibana is reachable from outside
server.host: "0.0.0.0"
# The elasticsearch server(s) Kibana queries for data
elasticsearch.hosts: ["http://localhost:9200"]

Start Kibana:

# Recent versions refuse to run as root unless --allow-root is given
nohup ./bin/kibana --allow-root &

Open Kibana and look at the indices in ElasticSearch: the index pattern configured earlier was "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}", which produces filebeat-7.7.1-2020.06.14.

Create a Kibana index pattern: Index Patterns -> Create index pattern -> enter the pattern -> Next step -> select @timestamp.

You can then browse the log entries shipped by FileBeat.

4 Extras

4.1 elasticsearch-head

elasticsearch-head is a web front end for ElasticSearch clusters; it lets you inspect cluster state in the browser and run operations against the cluster.

Install it as follows:

# Clone the code onto the CentOS machine
git clone git://github.com/mobz/elasticsearch-head.git
# Change into the elasticsearch-head directory
cd elasticsearch-head
# Install the dependencies
npm install
# Run elasticsearch-head
npm run start

Once it is up, ElasticSearch itself needs a CORS configuration; add the following to elasticsearch.yml:

# CORS settings so that elasticsearch-head can access the cluster
http.cors.enabled: true
http.cors.allow-origin: "*"

Then open http://192.168.31.152:9100 and enter http://192.168.31.152:9200 to see the state of the ElasticSearch cluster.

References

1. [ELK Stack](https://www.elastic.co/cn/what-is/elk-stack)

2. [Installing ElasticSearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html#install-elasticsearch)

3. [Install ElasticSearch from archive on Linux](https://www.elastic.co/guide/en/elasticsearch/reference/current/targz.html)

4. [Fixing the missing ifconfig command after installing CentOS in Docker](https://blog.csdn.net/Magic_YH/article/details/51292095)

5. [elasticsearch cannot be run as root](https://www.cnblogs.com/gcgc/p/10297563.html)

6. [LogStash official documentation](https://www.elastic.co/guide/en/logstash/current/installing-logstash.html)

7. [Using FileBeat as LogStash input](https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html)

8. [Building a unified Spring Cloud + ELK logging system](https://blog.csdn.net/qq_34988304/article/details/100058049)

9. [Common elasticsearch startup errors](https://www.cnblogs.com/zhi-leaf/p/8484337.html)

 
