ELK Introduction and Usage (Part 2)

Installing Kibana

Since we already configured the yum repository in the previous article, there is no need to configure it again; simply install with yum. Run the following on the master node:

[root@master-node ~]# yum -y install kibana 

If the yum installation is too slow, you can download the rpm package and install it directly:

[root@master-node ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm
[root@master-node ~]# rpm -ivh kibana-6.0.0-x86_64.rpm

After installation, configure Kibana:

[root@master-node ~]# vim /etc/kibana/kibana.yml  # add the following lines
server.port: 5601  # port Kibana listens on
server.host: 192.168.77.128  # listening IP
elasticsearch.url: "http://192.168.77.128:9200"  # IP of the es server; for a cluster, use the master node's IP
logging.dest: /var/log/kibana.log  # Kibana log file path; otherwise Kibana logs to /var/log/messages by default

Create the log file:

[root@master-node ~]# touch /var/log/kibana.log; chmod 777 /var/log/kibana.log

Start the Kibana service, then check the process and the listening port:

[root@master-node ~]# systemctl start kibana
[root@master-node ~]# ps aux |grep kibana
kibana     3083 36.8  2.9 1118668 112352 ?      Ssl  17:14   0:03 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root       3095  0.0  0.0 112660   964 pts/0    S+   17:14   0:00 grep --color=auto kibana
[root@master-node ~]# netstat -lntp |grep 5601
tcp        0      0 192.168.77.128:5601     0.0.0.0:*               LISTEN      3083/node    
[root@master-node ~]# 

Note: Kibana is built on Node.js, so the process name is node.

Then open it in a browser, e.g. http://192.168.77.128:5601/ . Since we have not installed X-Pack, there is no username or password, and the page can be accessed directly:
(screenshot: the Kibana web interface)

Kibana is now installed, which was quite simple. Next comes Logstash; without it, Kibana has nothing to show.


Installing Logstash

Install Logstash on 192.168.77.130. Note that Logstash does not currently support Java 9; use Java 8.

Install it directly with yum:

[root@data-node1 ~]# yum install -y  logstash

If the yum repository is too slow, download the rpm package and install it:

[root@data-node1 ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.rpm
[root@data-node1 ~]# rpm -ivh logstash-6.0.0.rpm

After installation, do not start the service yet; first configure Logstash to collect syslog messages:

[root@data-node1 ~]# vim /etc/logstash/conf.d/syslog.conf  # add the following
input {  # define the log source
  syslog {
    type => "system-syslog"  # define the event type
    port => 10514    # define the listening port
  }
}
output {  # define where events go
  stdout {
    codec => rubydebug  # print events to the current terminal
  }
}

Check the configuration file for errors:

[root@data-node1 ~]# cd /usr/share/logstash/bin
[root@data-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK  # "Configuration OK" means the file is valid
[root@data-node1 /usr/share/logstash/bin]# 

Explanation of the options:

  • --path.settings specifies the directory containing Logstash's settings files
  • -f specifies the pipeline configuration file to check
  • --config.test_and_exit checks the configuration and exits instead of starting Logstash

Next, configure rsyslog to forward logs to the Logstash server's IP and the listening port defined above (@@ forwards over TCP; a single @ would use UDP):

[root@data-node1 ~]# vim /etc/rsyslog.conf
#### RULES ####

*.* @@192.168.77.130:10514

Restart rsyslog so the change takes effect:

[root@data-node1 ~]# systemctl restart rsyslog

Start Logstash in the foreground with the configuration file:

[root@data-node1 ~]# cd /usr/share/logstash/bin
[root@data-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
# the terminal blocks here, because the configuration prints events to the current terminal

Open a new terminal and check whether port 10514 is being listened on:

[root@data-node1 ~]# netstat -lntp |grep 10514
tcp6       0      0 :::10514                :::*                    LISTEN      4312/java 
[root@data-node1 ~]# 

Then ssh into this machine from another host, and check whether any log output appears:

[root@data-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
          "severity" => 6,
               "pid" => "4575",
           "program" => "sshd",
           "message" => "Accepted password for root from 192.168.77.128 port 58336 ssh2\n",
              "type" => "system-syslog",
          "priority" => 86,
         "logsource" => "data-node1",
        "@timestamp" => 2018-03-03T18:12:27.000Z,
          "@version" => "1",
              "host" => "192.168.77.130",
          "facility" => 10,
    "severity_label" => "Informational",
         "timestamp" => "Mar  4 02:12:27",
    "facility_label" => "security/authorization"
}
{
          "severity" => 6,
           "program" => "systemd",
           "message" => "Started Session 42 of user root.\n",
              "type" => "system-syslog",
          "priority" => 30,
         "logsource" => "data-node1",
        "@timestamp" => 2018-03-03T18:12:27.000Z,
          "@version" => "1",
              "host" => "192.168.77.130",
          "facility" => 3,
    "severity_label" => "Informational",
         "timestamp" => "Mar  4 02:12:27",
    "facility_label" => "system"
}
{
          "severity" => 6,
           "program" => "systemd-logind",
           "message" => "New session 42 of user root.\n",
              "type" => "system-syslog",
          "priority" => 38,
         "logsource" => "data-node1",
        "@timestamp" => 2018-03-03T18:12:27.000Z,
          "@version" => "1",
              "host" => "192.168.77.130",
          "facility" => 4,
    "severity_label" => "Informational",
         "timestamp" => "Mar  4 02:12:27",
    "facility_label" => "security/authorization"
}
{
          "severity" => 6,
               "pid" => "4575",
           "program" => "sshd",
           "message" => "pam_unix(sshd:session): session opened for user root by (uid=0)\n",
              "type" => "system-syslog",
          "priority" => 86,
         "logsource" => "data-node1",
        "@timestamp" => 2018-03-03T18:12:27.000Z,
          "@version" => "1",
              "host" => "192.168.77.130",
          "facility" => 10,
    "severity_label" => "Informational",
         "timestamp" => "Mar  4 02:12:27",
    "facility_label" => "security/authorization"
}
{
          "severity" => 6,
           "program" => "systemd",
           "message" => "Starting Session 42 of user root.\n",
              "type" => "system-syslog",
          "priority" => 30,
         "logsource" => "data-node1",
        "@timestamp" => 2018-03-03T18:12:27.000Z,
          "@version" => "1",
              "host" => "192.168.77.130",
          "facility" => 3,
    "severity_label" => "Informational",
         "timestamp" => "Mar  4 02:12:27",
    "facility_label" => "system"
}
{
          "severity" => 6,
               "pid" => "4575",
           "program" => "sshd",
           "message" => "Received disconnect from 192.168.77.128: 11: disconnected by user\n",
              "type" => "system-syslog",
          "priority" => 86,
         "logsource" => "data-node1",
        "@timestamp" => 2018-03-03T18:12:35.000Z,
          "@version" => "1",
              "host" => "192.168.77.130",
          "facility" => 10,
    "severity_label" => "Informational",
         "timestamp" => "Mar  4 02:12:35",
    "facility_label" => "security/authorization"
}
{
          "severity" => 6,
               "pid" => "4575",
           "program" => "sshd",
           "message" => "pam_unix(sshd:session): session closed for user root\n",
              "type" => "system-syslog",
          "priority" => 86,
         "logsource" => "data-node1",
        "@timestamp" => 2018-03-03T18:12:35.000Z,
          "@version" => "1",
              "host" => "192.168.77.130",
          "facility" => 10,
    "severity_label" => "Informational",
         "timestamp" => "Mar  4 02:12:35",
    "facility_label" => "security/authorization"
}
{
          "severity" => 6,
           "program" => "systemd-logind",
           "message" => "Removed session 42.\n",
              "type" => "system-syslog",
          "priority" => 38,
         "logsource" => "data-node1",
        "@timestamp" => 2018-03-03T18:12:35.000Z,
          "@version" => "1",
              "host" => "192.168.77.130",
          "facility" => 4,
    "severity_label" => "Informational",
         "timestamp" => "Mar  4 02:12:35",
    "facility_label" => "security/authorization"
}

As shown above, the collected log events are printed to the terminal in a JSON-like format; the test succeeded.
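A note on the priority field in the events above: syslog encodes facility and severity into a single PRI value, PRI = facility × 8 + severity, which is why priority 86 appears together with facility 10 and severity 6. A quick shell check:

```shell
# Decode a syslog PRI value (PRI = facility * 8 + severity).
PRI=86                 # priority of the sshd events above
echo "facility=$((PRI / 8)) severity=$((PRI % 8))"   # → facility=10 severity=6
```

Facility 10 is authpriv, which Logstash labels "security/authorization", and severity 6 is Informational, matching the labels in the output.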


Configuring Logstash

The configuration above was only for testing. In this step we edit the file again so that the collected log events are sent to the es server rather than the current terminal:

[root@data-node1 ~]# vim /etc/logstash/conf.d/syslog.conf # change to the following
input {
  syslog {
    type => "system-syslog"
    port => 10514
  }
}
output {
  elasticsearch {
    hosts => ["192.168.77.128:9200"]  # define the es server's IP
    index => "system-syslog-%{+YYYY.MM}" # define the index name
  }
}
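About the index name: %{+YYYY.MM} is a date pattern that Logstash expands from each event's @timestamp, so events are written to one index per month. What the name resolves to for the current month can be previewed in the shell (a sketch; `date +%Y.%m` only approximates the expansion Logstash itself performs):

```shell
# Preview the monthly index name that the pattern above produces;
# Logstash expands %{+YYYY.MM} from the event's @timestamp field.
echo "system-syslog-$(date +%Y.%m)"
```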

Again, check the configuration file for errors:

[root@data-node1 ~]# cd /usr/share/logstash/bin
[root@data-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
[root@data-node1 /usr/share/logstash/bin]# 

If there are no problems, start the Logstash service and check the process and the listening ports:

[root@data-node1 ~]# systemctl start logstash
[root@data-node1 ~]# ps aux |grep logstash
logstash   5364  285 20.1 3757012 376260 ?      SNsl 04:36   0:34 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash
root       5400  0.0  0.0 112652   964 pts/0    S+   04:36   0:00 grep --color=auto logstash

Troubleshooting:

After starting Logstash here, the process existed, but neither port 9600 nor 10514 was being listened on. I checked the Logstash log for errors, but nothing had been written there, so I turned to /var/log/messages instead and found the following error:
(screenshot: permission-denied error for the Logstash log file, recorded in /var/log/messages)

This is a permissions problem, so fix the permissions:

[root@data-node1 ~]# chown logstash /var/log/logstash/logstash-plain.log 
[root@data-node1 ~]# ll !$
ll /var/log/logstash/logstash-plain.log
-rw-r--r-- 1 logstash root 7597 Mar  4 04:35 /var/log/logstash/logstash-plain.log
[root@data-node1 ~]# systemctl restart logstash

After fixing the permissions and restarting the service, the ports were still not being listened on. The error now recorded in logstash-plain.log was:
(screenshot: permission-denied error recorded in logstash-plain.log)

As you can see, it is still a permissions problem: we previously ran Logstash from the terminal as root, so the files it created are owned by root. Again, fixing the ownership is enough:

[root@data-node1 ~]# ll /var/lib/logstash/
total 4
drwxr-xr-x 2 root root  6 Mar  4 01:50 dead_letter_queue
drwxr-xr-x 2 root root  6 Mar  4 01:50 queue
-rw-r--r-- 1 root root 36 Mar  4 01:58 uuid
[root@data-node1 ~]# chown -R logstash /var/lib/logstash/
[root@data-node1 ~]# systemctl restart logstash

This time there were no problems: the ports are listening normally and the Logstash service has started successfully:

[root@data-node1 ~]# netstat -lntp |grep 9600
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      9905/java
[root@data-node1 ~]# netstat -lntp |grep 10514
tcp6       0      0 :::10514                :::*                    LISTEN      9905/java
[root@data-node1 ~]# 

Note, however, that Logstash's API is listening on 127.0.0.1, the loopback address, which cannot be reached from other machines. Edit the configuration file and set the listening IP:

[root@data-node1 ~]# vim /etc/logstash/logstash.yml
http.host: "192.168.77.130"
[root@data-node1 ~]# systemctl restart logstash
[root@data-node1 ~]# netstat -lntp |grep 9600
tcp6       0      0 192.168.77.130:9600     :::*                    LISTEN      10091/java          
[root@data-node1 ~]# 

Viewing logs in Kibana

With the Logstash server set up, go back to the Kibana server (the master node, which also runs es) and check the logs. The following command lists the indices:

[root@master-node ~]# curl '192.168.77.128:9200/_cat/indices?v'
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana               6JfXc0gFSPOWq9gJI1ZX2g   1   1          1            0      6.9kb          3.4kb
green  open   system-syslog-2018.03 bUXmEDskTh6fjGD3JgyHcA   5   1         61            0    591.7kb        296.7kb
[root@master-node ~]# 
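Since _cat/indices returns one row per index in fixed columns, it is easy to post-process in the shell; for instance, pulling out just the index names (third column). The two rows below are copied from the output above:

```shell
# Extract the index names (3rd column) from _cat/indices-style rows.
printf '%s\n' \
  'green  open   .kibana               6JfXc0gFSPOWq9gJI1ZX2g 1 1  1 0   6.9kb   3.4kb' \
  'green  open   system-syslog-2018.03 bUXmEDskTh6fjGD3JgyHcA 5 1 61 0 591.7kb 296.7kb' \
  | awk '{print $3}'
# → .kibana
# → system-syslog-2018.03
```

Against the live cluster, `curl -s '192.168.77.128:9200/_cat/indices?h=index'` asks es to return only that column directly.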

As shown, the system-syslog index defined in the Logstash configuration file has been created, which proves the configuration works and Logstash can communicate with es.

Get the details of a specific index:

[root@master-node ~]# curl -XGET '192.168.77.128:9200/system-syslog-2018.03?pretty'
{
  "system-syslog-2018.03" : {
    "aliases" : { },
    "mappings" : {
      "system-syslog" : {
        "properties" : {
          "@timestamp" : {
            "type" : "date"
          },
          "@version" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "facility" : {
            "type" : "long"
          },
          "facility_label" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "host" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "logsource" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "message" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "pid" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "priority" : {
            "type" : "long"
          },
          "program" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "severity" : {
            "type" : "long"
          },
          "severity_label" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "timestamp" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "type" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          }
        }
      }
    },
    "settings" : {
      "index" : {
        "creation_date" : "1520082481446",
        "number_of_shards" : "5",
        "number_of_replicas" : "1",
        "uuid" : "bUXmEDskTh6fjGD3JgyHcA",
        "version" : {
          "created" : "6020299"
        },
        "provided_name" : "system-syslog-2018.03"
      }
    }
  }
}
[root@master-node ~]#

If you need to delete an index later, the following command deletes the specified index:

curl -XDELETE 'localhost:9200/system-syslog-2018.03'

Once es and Logstash can communicate normally, it is time to configure Kibana. Open 192.168.77.128:5601 in the browser and create the index pattern on the Kibana page:
(screenshot: creating the index pattern in Kibana)

A wildcard can also be used to match several indices at once:
(screenshot: defining the index pattern with a wildcard)

After the pattern is created, click "Discover":
(screenshot: the Discover entry in Kibana)

If the "Discover" page shows the following message, no matching log entries were found:
(screenshot: "No results found" message in Discover)

This is usually a time-range problem. Click the time picker in the upper-right corner and switch to today's logs:
(screenshot: the Kibana time picker)
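A concrete way to see the time issue: in the events collected earlier, rsyslog's local timestamp reads Mar 4 02:12 while @timestamp is 2018-03-03T18:12, i.e. the server runs on UTC+8 (CST) and Elasticsearch stores times in UTC. GNU date can confirm the conversion:

```shell
# Convert the local event time (CST, UTC+8) from the earlier syslog events
# to UTC, which is how Elasticsearch stores the @timestamp field.
TZ=UTC date -d '2018-03-04 02:12:27 +0800' '+%Y-%m-%dT%H:%M:%SZ'   # → 2018-03-03T18:12:27Z
```

So a time filter that looks correct in local time can exclude events whose @timestamp falls on the previous UTC day.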

The logs should then display normally:
(screenshot: log entries shown in Discover)

If that still does not work, try a few other time ranges. If none of them help, query the es server directly in the browser and see whether it returns any data:

http://192.168.77.128:9200/system-syslog-2018.03/_search?q=*

The following is what a normal response looks like; if something is wrong, es returns an error instead:
(screenshot: JSON search results returned by es)

If es returns data normally but the "Discover" page still finds no logs, take another approach: go into the management settings and delete the index pattern:
(screenshots: deleting the index pattern on Kibana's management page)

Re-create the index pattern, but this time do not select @timestamp as the time filter field:
(screenshot: creating the index pattern without a time field)

This way the data is visible, although without the time-based histogram:
(screenshot: Discover showing data but no histogram)

The log data shown here is essentially the contents of /var/log/messages: rsyslog forwards all syslog messages (*.*) to Logstash, and those are the same messages written to the messages file.

That covers collecting system logs with Logstash, shipping them to the es server, and viewing them on the Kibana page.


Collecting nginx logs with Logstash

As with syslog collection, first edit the configuration file. This step is done on the Logstash server:

[root@data-node1 ~]# vim /etc/logstash/conf.d/nginx.conf  # add the following
input {
  file {  # use a file as the input source
    path => "/tmp/elk_access.log"  # path of the file to read
    start_position => "beginning"  # read the file from the beginning on first run
    type => "nginx"  # log type; can be any custom value
  }
}
filter {  # configure the filters
    grok {
        match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}  # parse each access-log line into named fields
    }
    geoip {
        source => "clientip"
    }
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["192.168.77.128:9200"]
        index => "nginx-test-%{+YYYY.MM.dd}"
  }
}
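The grok pattern can be sanity-checked offline before wiring nginx up. The POSIX regex below is a rough, hand-simplified equivalent of the grok expression (grok's IPORHOST, QS, etc. are more permissive than these character classes), and the sample line is a hypothetical access-log entry in the main2 format configured later in this article:

```shell
# Rough POSIX-ERE equivalent of the grok pattern above (a sketch only).
RE='^[^ ]+ [0-9.]+ - [^ ]+ \[[^]]+\] "[^"]*" [0-9]+ ([0-9]+|-) "[^"]*" "[^"]*" "[^"]*" [0-9.]+$'
# Hypothetical access-log line in the main2 format:
LINE='elk.test.com 192.168.77.1 - - [04/Mar/2018:10:00:00 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "192.168.77.128:5601" 0.003'
echo "$LINE" | grep -Eq "$RE" && echo matched   # → matched
```

If a real access-log line fails a check like this, grok will tag the event with _grokparsefailure instead of extracting fields.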

As before, after editing the configuration file, check it for errors:

[root@data-node1 ~]# cd /usr/share/logstash/bin
[root@data-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
[root@data-node1 /usr/share/logstash/bin]# 

After the check passes, go to the directory holding your nginx virtual-host configuration files and create a new virtual-host configuration:

[root@data-node1 ~]# cd /usr/local/nginx/conf/vhost/
[root@data-node1 /usr/local/nginx/conf/vhost]# vim elk.conf
server {
      listen 80;
      server_name elk.test.com;

      location / {
          proxy_pass      http://192.168.77.128:5601;
          proxy_set_header Host   $host;
          proxy_set_header X-Real-IP      $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }

      access_log  /tmp/elk_access.log main2;
}

Then edit nginx's main configuration file, since the log format needs to be defined there. Add the following below the log_format combined_realip line:

[root@data-node1 ~]# vim /usr/local/nginx/conf/nginx.conf
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$upstream_addr" $request_time';
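A hypothetical line in this main2 format, with the status code and request time pulled out by field position as a quick check (a sketch; the field numbers shift if the format changes):

```shell
# Sample access-log line in the main2 format above (hypothetical values).
LINE='elk.test.com 192.168.77.1 - - [04/Mar/2018:10:00:00 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "192.168.77.128:5601" 0.003'
# With awk's default whitespace splitting, "$request" spans fields 7-9,
# so $status lands in field 10 and $request_time is the last field.
echo "$LINE" | awk '{print "status=" $10, "request_time=" $NF}'   # → status=200 request_time=0.003
```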

With those edits done, test the nginx configuration for errors, and if there are none, reload it:

[root@data-node1 ~]# /usr/local/nginx/sbin/nginx -t
nginx: [warn] conflicting server name "aaa.com" on 0.0.0.0:80, ignored
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@data-node1 ~]# /usr/local/nginx/sbin/nginx -s reload
[root@data-node1 ~]#

Since we will access the configured domain elk.test.com from a browser on Windows, edit the Windows hosts file and add the following entry:

192.168.77.130 elk.test.com

The site can now be reached through this domain in a browser:
(screenshot: Kibana served through the elk.test.com nginx proxy)

After a successful visit, check the generated log file:

[root@data-node1 ~]# ls /tmp/elk_access.log 
/tmp/elk_access.log
[root@data-node1 ~]# wc -l !$
wc -l /tmp/elk_access.log
45 /tmp/elk_access.log
[root@data-node1 ~]# 

As shown, the nginx access log has been generated.

Restart the Logstash service to create the index for these logs:

systemctl restart logstash

After the restart, check on the es server whether an index starting with nginx-test has been created:

[root@master-node ~]# curl '192.168.77.128:9200/_cat/indices?v' 
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana               6JfXc0gFSPOWq9gJI1ZX2g   1   1          2            0     14.4kb          7.2kb
green  open   system-syslog-2018.03 bUXmEDskTh6fjGD3JgyHcA   5   1        902            0      1.1mb        608.9kb
green  open   nginx-test-2018.03.04 GdKYa6gBRke7mNgrh2PBUA   5   1         45            0      199kb         99.5kb
[root@master-node ~]# 

The nginx-test index has been created, so it can now be configured in Kibana:
(screenshots: creating the nginx-test index pattern in Kibana)

Once that is configured, the nginx access-log data can be browsed on the "Discover" page:
(screenshot: nginx access logs in Discover)


Collecting logs with Beats

As introduced earlier, Beats is a newer addition to the ELK ecosystem: a family of lightweight log shippers. Up to now we have used Logstash to collect logs, but Logstash consumes far more resources than Beats, so the official recommendation is to use Beats as the log collector. Beats is also extensible and supports custom builds.

Official introduction:

https://www.elastic.co/cn/products/beats

Install Filebeat on 192.168.77.134; Filebeat is the member of the Beats family used to collect log files:

[root@data-node2 ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.0-x86_64.rpm
[root@data-node2 ~]# rpm -ivh  filebeat-6.0.0-x86_64.rpm

After installation, edit the configuration file:

[root@data-node2 ~]# vim /etc/filebeat/filebeat.yml  # add or change to the following
filebeat.prospectors:
- type: log
  #enabled: false  # this line must be commented out
  paths:
    - /var/log/messages  # path of the log file to collect

#output.elasticsearch:  # comment these lines out for now
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]

output.console:  # print collected events to the terminal
  enable: true

After configuring, run the following command and watch whether log entries are printed to the terminal. If they are, Filebeat is collecting log data correctly:

[root@data-node2 ~]# /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml

The configuration above only tested whether Filebeat can collect log data. Next, modify the configuration file again so Filebeat can run as a service:

[root@data-node2 ~]# vim /etc/filebeat/filebeat.yml
#output.console:  # comment these two lines out
#  enable: true

# and uncomment these two lines
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.77.128:9200"]  # setting the es server's IP address

With those changes made, start the Filebeat service:

[root@data-node2 ~]# systemctl start filebeat
[root@data-node2 ~]# ps axu |grep filebeat
root       3021  0.3  2.3 296360 11288 ?        Ssl  22:27   0:00 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
root       3030  0.0  0.1 112660   960 pts/0    S+   22:27   0:00 grep --color=auto filebeat

Once it has started, check the indices on the es server: a new index starting with filebeat-6.0.0 has appeared, which means Filebeat and es are communicating normally:

[root@master-node ~]# curl '192.168.77.128:9200/_cat/indices?v' 
health status index                     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   system-syslog-2018.03     bUXmEDskTh6fjGD3JgyHcA   5   1      73076            0     24.8mb         11.6mb
green  open   nginx-test-2018.03.04     GdKYa6gBRke7mNgrh2PBUA   5   1         91            0        1mb        544.8kb
green  open   .kibana                   6JfXc0gFSPOWq9gJI1ZX2g   1   1          3            0     26.9kb         13.4kb
green  open   filebeat-6.0.0-2018.03.04 MqQJMUNHS_OiVmO26NEWTw   3   1         66            0     64.5kb         39.1kb
[root@master-node ~]# 

Now that es can retrieve the index, configure it in Kibana:
(screenshots: creating the filebeat-6.0.0 index pattern in Kibana)

That is how to collect log data with Filebeat. As you can see, it is simpler to configure than Logstash, and it uses fewer resources.


Further reading

Centralized log analysis platform - ELK Stack - the X-Pack security solution:

http://www.jianshu.com/p/a49d93212eca
https://www.elastic.co/subscriptions

Evolution of the Elastic Stack:

http://70data.net/1505.html

How LinkedIn built a real-time log analysis system on Kafka and Elasticsearch:

http://t.cn/RYffDoE

Using Redis as a log buffer in the Elastic Stack:

http://blog.lishiming.net/?p=463

Building a massive-scale log analysis platform with ELK + Filebeat + Kafka + ZooKeeper:

https://www.cnblogs.com/delgyd/p/elk.html

On centralized operations log management with ELK + ZooKeeper + Kafka:

https://www.jianshu.com/p/d65aed756587
