ELK Installation, Configuration, and Log Display

ELK

First, prepare the ELK installation packages:

jdk-8u162-linux-x64.rpm
elasticsearch-6.2.4.rpm
kibana-6.2.4-x86_64.rpm
logstash-6.2.4.rpm
filebeat-6.3.0-x86_64.rpm
# I'm using the 6.2.4 releases (Filebeat is 6.3.0)

Installing Elasticsearch

< 1  ELK-TEST - [root]: ~ > #  rpm -ivh jdk-8u162-linux-x64.rpm
< 2  ELK-TEST - [root]: ~ > #  rpm -ivh elasticsearch-6.2.4.rpm
< 3  ELK-TEST - [root]: ~ > #  systemctl start elasticsearch
< 4  ELK-TEST - [root]: ~ > #  vim /etc/elasticsearch/elasticsearch.yml
< 5  ELK-TEST - [root]: ~ > # grep -Pv "^(#|$)" /etc/elasticsearch/elasticsearch.yml 
cluster.name: elk
node.name: elk-node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"

Start-up test

< 6  ELK-TEST - [root]: ~ > # systemctl restart elasticsearch
< 7  ELK-TEST - [root]: ~ > # lsof -i:9200
COMMAND    PID          USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java      5924 elasticsearch  119u  IPv6  59918      0t0  TCP *:wap-wsp (LISTEN)
java      5924 elasticsearch  136u  IPv6  69396      0t0  TCP 192.168.1.41:wap-wsp->192.168.1.41:54884 (ESTABLISHED)

< 8  ELK-TEST - [root]: ~ > # curl -X GET http://192.168.1.41:9200
{
  "name" : "elk-node-1",
  "cluster_name" : "elk",
  "cluster_uuid" : "i4h5DhHbSzyQ9o0bFzjLZg",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
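
Optionally, the cluster health API gives a further check (a standard Elasticsearch endpoint; the address assumes the node configured above):

curl -X GET 'http://192.168.1.41:9200/_cluster/health?pretty'
# "status" should come back as "green" (or "yellow" on a single-node cluster)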

Insert a test document

< 7  ELK-TEST - [root]: ~ > # curl -H "Content-Type: application/json" -XPOST '192.168.1.41:9200/customer/external/1?pretty' -d' {"name": "Fei Ba" }'
{
  "_index" : "customer",
  "_type" : "external",
  "_id" : "1",
  "_version" : 2,
  "result" : "updated",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 1,
  "_primary_term" : 2
}
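
To confirm the document was stored, it can be read back with the standard GET document API (same index, type and id as above):

curl -X GET '192.168.1.41:9200/customer/external/1?pretty'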

Installing the head plugin

(1) Install Node.js

wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.5.0-linux-x64.tar.gz
tar -zxvf node-v4.5.0-linux-x64.tar.gz  -C  /opt

Configure the environment variables by adding the following to /etc/profile:
export NODE_HOME=/opt/node-v4.5.0-linux-x64
export PATH=$PATH:$NODE_HOME/bin
export NODE_PATH=$NODE_HOME/lib/node_modules

source /etc/profile

(2) Install cnpm

# npm install -g cnpm --registry=https://registry.npm.taobao.org

(3) Install grunt with npm

# npm install -g grunt
# npm install -g grunt-cli --registry=https://registry.npm.taobao.org --no-proxy

(4) Verify the versions

< 9  ELK-TEST - [root]: ~ > # node -v
v6.14.2
< 10  ELK-TEST - [root]: ~ > # npm -v
3.10.10
< 12  ELK-TEST - [root]: ~ > # grunt -version
grunt-cli v1.2.0

(5) Download the head plugin source code

< 12  ELK-TEST - [root]: ~ > # wget https://github.com/mobz/elasticsearch-head/archive/master.zip    # you can also download it elsewhere and upload it to the server
unzip master.zip 

(6) Install the dependencies

Enter the elasticsearch-head-master directory and run the following command:

< 11  ELK-TEST - [root]: /opt/elasticsearch-head-master > # npm install

Make sure the following lines are present in the Elasticsearch configuration file (they were added above):

< 12  ELK-TEST - [root]: ~ > # tail -3 /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"

< 12  ELK-TEST - [root]: ~ > # systemctl restart elasticsearch
< 12  ELK-TEST - [root]: ~ > # grunt server &         # run in the background
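
Note that grunt server needs to be run from the elasticsearch-head-master directory (where its Gruntfile lives), and a job backgrounded with & may die when the shell exits. One option is to start it with nohup instead (the log path below is just an example):

cd /opt/elasticsearch-head-master
nohup grunt server > /root/es-head.log 2>&1 &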

Visit 192.168.1.41:9100; it looks like this:

(screenshot: the elasticsearch-head web UI)

Installing Logstash

< 2  ELK-TEST - [root]: ~ > #  rpm -ivh logstash-6.2.4.rpm
< 40  ELK-TEST - [root]: ~ > # grep -Pv "^(#|$)" /etc/logstash/logstash.yml 
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
path.logs: /var/log/logstash

By default, pipeline definition files go in /etc/logstash/conf.d. The directory is empty after installation, so create one to fit your needs. The example below reads the trip-service application logs and sends an email alert when SQL errors occur:

< 41  ELK-TEST - [root]: ~ > # cd /etc/logstash/conf.d/
< 42  ELK-TEST - [root]: /etc/logstash/conf.d > # vim sqlsendmail.conf 
input {
    file {
        type => "trip-service"
        path => "/home/elk/trip-service-*.log"
        start_position => "beginning"
    }
}

filter {
  grok {
     match => { "message" => "\s*\[impl\]\[in\] traceId=%{NUMBER:traceId},reqTime=%{NUMBER:reqTime} req=%{GREEDYDATA:req}" }
  }
  if [priority] == "SQLerror" {    # tag SQL error events
     mutate {
        add_tag => ["sqlerror"]
     }
  }
  # count events
  metrics {
      # reset the counters every 60 seconds
      clear_interval => 60
      # flush the statistics every 60 seconds
      flush_interval => 60
      # field name the counter data is stored under; priority already holds the log level
      meter => "events_%{priority}"
      # tag the generated metric events
      add_tag => "metrics"
      # only count events from the last 3 seconds, to avoid delayed data
      ignore_older_than => 3
  }
  # for the generated metric events
  if "metrics" in [tags] {
      # run ruby code
      ruby {
          # if fewer than 3 WARN-level events were counted, drop this event (no message is emitted)
          code => "event.cancel if event.get('[events_WARN][count]').to_i < 3"
      }
  }
}
output {
  # events tagged "sqlerror" indicate an error
  if "sqlerror" in [tags] {
      # send an alert email
      email {
        to => "[email protected]"
        via => "smtp"
        port => 25
        username => "[email protected]"
        password => "dznxkqcnutnfbbji"
        subject => "[%{@timestamp} xxx server: exception found in the logs!]"
        body => "new bug ! %{message}"
        htmlbody => "%{message}"
      }
  }
  stdout { codec => rubydebug } # print to stdout
}
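
For reference, a log line of the hypothetical form below would match the grok pattern above (traceId and reqTime captured as numbers, the remainder into req):

[impl][in] traceId=10001,reqTime=1530684217 req={"orderId":"A001"}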

Note the directory permissions, owner, and group:

< 42  ELK-TEST - [root]: /etc/logstash/conf.d > # cd ..
< 43  ELK-TEST - [root]: /etc/logstash > # chown -R logstash:logstash conf.d/
< 44  ELK-TEST - [root]: /etc/logstash > # chmod 644 /var/log/messages

Start-up test

< 45  ELK-TEST - [root]: ~ > # cd /usr/share/logstash/
< 46  ELK-TEST - [root]: ~ > # bin/logstash -e 'input { stdin { } } output { stdout {} }'

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs to console
# Logstash starts, but testing it this way produces the warnings above. Handle them as the messages suggest: create a config directory under $LS_HOME, symlink the files from /etc/logstash/ into it, and run the test again:
< 47  ELK-TEST - [root]: ~ ># mkdir -p /usr/share/logstash/config/
< 48  ELK-TEST - [root]: ~ ># ln -s /etc/logstash/* /usr/share/logstash/config
< 49  ELK-TEST - [root]: ~ ># chown -R logstash:logstash /usr/share/logstash/config/
< 50  ELK-TEST - [root]: ~ ># bin/logstash -e 'input { stdin { } } output { stdout {} }'
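
The pipeline file created earlier can also be syntax-checked before running Logstash as a service (-t / --config.test_and_exit only validates the configuration; starting via systemd assumes the unit installed by the RPM):

bin/logstash -f /etc/logstash/conf.d/sqlsendmail.conf -t --path.settings /etc/logstash
systemctl start logstash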

Installing Filebeat
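
Install the Filebeat RPM prepared at the beginning, then review its configuration:

rpm -ivh filebeat-6.3.0-x86_64.rpm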

< 34  ELK-TEST - [root]: /etc/filebeat > # grep -Pv "^( *#|$)" filebeat.yml 
filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

output.elasticsearch:
  hosts: ["192.168.1.41:9200"]
logging:
    to_syslog: false
    to_files: true
    files:
        rotateeverybytes: 10485760     # the default 10MB
        level: info

Note: since Filebeat 6.0, enabled defaults to false for an input and must be changed to true, otherwise the input is ignored.
paths: the log files you want to collect and analyze.


If logs are sent directly to Elasticsearch, edit the "Elasticsearch output" section.
If logs are sent to Logstash instead, edit the "Logstash output" section.
Only one output can be enabled at a time; comment the other one out, for example:
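
A rough sketch of switching filebeat.yml to a Logstash output (5044 is the conventional Beats port; the matching beats input on the Logstash side is an assumption, not part of the setup above):

#output.elasticsearch:
#  hosts: ["192.168.1.41:9200"]
output.logstash:
  hosts: ["192.168.1.41:5044"]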

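Finally, the configuration can be sanity-checked and the service started (Filebeat's built-in test subcommands; the systemd unit comes with the RPM):

filebeat test config
filebeat test output
systemctl start filebeat
systemctl enable filebeat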

Installing Kibana

< 63  ELK-TEST - [root]: ~ > #  rpm -ivh kibana-6.2.4-x86_64.rpm 
< 64  ELK-TEST - [root]: ~ > #  rpm -ql kibana |grep '/etc/'
< 65  ELK-TEST - [root]: ~ > #  cd /etc/kibana/
< 66  ELK-TEST - [root]: ~ > #  vim /etc/kibana/kibana.yml 
< 67  ELK-TEST - [root]: ~ > #  grep -Pv "^(#|$)" /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.1.41"
elasticsearch.url: "http://192.168.1.41:9200"
< 68  ELK-TEST - [root]: ~ > # systemctl start kibana
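
After starting, verify that Kibana is listening on the configured port (mirroring the lsof check used for Elasticsearch above):

lsof -i:5601
curl -I http://192.168.1.41:5601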

(screenshot: Kibana showing the shipped logs)

The screenshot above shows that the logs were shipped successfully.
