8. A Log Analysis System with Nginx, Beats, Kibana and Logstash

@Author : By Runsen
@Date : 2020/6/19

About the author: Runsen is in the second semester of his junior year, majoring in chemical engineering and technology. In college he has been absorbed in Japanese, Python, Java and a range of data analysis tools, which led to heavy class-skipping and a lower-middle class ranking. He spends 60% of his time at university on CSDN.

I barely blogged from January to April because I had decided to write a book. The publisher dismissed undergraduates as beginners, and since I am indeed still a beginner, I decided to publish the material on my blog instead.

Here I am posting Chapter 8 to the blog.

8.3 Log Analysis System

8.3.1 Nginx

Nginx is a high-performance HTTP and reverse-proxy web server, ranked alongside lighttpd and Apache as one of the three major web servers. In this log analysis system, Beats collects Nginx metric and log data and ships it to Elasticsearch; Kibana then reads the data from Elasticsearch for analysis and reporting.

On CentOS 7 the default yum repositories do not provide an nginx package, so we install from the source tarball. All of the following commands require root privileges:

[root@node01 ~] yum -y install pcre-devel zlib-devel
[root@node01 ~] wget http://nginx.org/download/nginx-1.14.2.tar.gz  
[root@node01 ~] tar -xvf nginx-1.14.2.tar.gz  
[root@node01 ~] cd nginx-1.14.2/
[root@node01 nginx-1.14.2] ./configure
[root@node01 nginx-1.14.2] make && make install
[root@node01 nginx-1.14.2] cd /usr/local/nginx/sbin/
[root@node01 sbin] ./nginx
[root@node01 sbin] curl 127.0.0.1:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@node01 ~] # configure nginx to start on boot
[root@node01 ~] cd /lib/systemd/system
[root@node01 system] vim nginx.service
##########
[Unit]
Description=nginx 
After=network.target 

[Service] 
Type=forking 
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit
PrivateTmp=true 

[Install] 
WantedBy=multi-user.target

[root@node01 system] systemctl enable nginx.service
[root@node01 system] systemctl start nginx.service

8.3.2 Beats

Beats is elastic's open-source family of agents for collecting system monitoring data: the collective name for lightweight data collectors that run as clients on the monitored servers. They can send data directly to Elasticsearch, or to Elasticsearch via Logstash, for subsequent analysis.

(1) Filebeat

To match the Elasticsearch version, download Filebeat 6.5.4 from the official release page: https://www.elastic.co/cn/downloads/past-releases/filebeat-6-5-4

[root@node01 ~] mkdir /itcast/beats
[root@node01 ~] cd /itcast/beats
[root@node01 beats] wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.5.4-linux-x86_64.tar.gz
[root@node01 beats] tar -xvf filebeat-6.5.4-linux-x86_64.tar.gz
[root@node01 beats] cd  filebeat-6.5.4-linux-x86_64
[root@node01 filebeat-6.5.4-linux-x86_64] # create the config file itcast.yml
[root@node01 filebeat-6.5.4-linux-x86_64] vim itcast.yml
##########
filebeat.inputs: 
- type: stdin  
  enabled: true 
setup.template.settings:  
  index.number_of_shards: 3 
output.console: 
  pretty: true  
  enable: true 
  
# start filebeat
[root@node01 filebeat-6.5.4-linux-x86_64] ./filebeat -e -c itcast.yml
hello world
{
  "@timestamp": "2020-02-15T15:16:30.224Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.5.4"
  },
  "source": "",
  "offset": 0,
  "message": "hello world",
  "prospector": {
    "type": "stdin"
  },
  "input": {
    "type": "stdin"
  },
  "beat": {
    "name": "node01",
    "hostname": "node01",
    "version": "6.5.4"
  },
  "host": {
    "name": "node01"
  }
}

Reading the Nginx log files

[root@node01 filebeat-6.5.4-linux-x86_64] # create the config file itcast-nginx.yml
[root@node01 filebeat-6.5.4-linux-x86_64] vim itcast-nginx.yml
##########
filebeat.inputs:
- type: log  
  enabled: true 
  paths:
    - /usr/local/nginx/logs/*.log
  tags: ["nginx"]
setup.template.settings:  
  index.number_of_shards: 3 
output.elasticsearch:
  hosts: ["192.168.92.90:9200","192.168.92.91:9200","192.168.92.92:9200"] 
  
# start filebeat
[root@node01 filebeat-6.5.4-linux-x86_64] ./filebeat -e -c itcast-nginx.yml

After starting filebeat, each request to nginx on port 80 causes filebeat to write the nginx log data to Elasticsearch, as shown in Figure 8-29.

filebeat ships with a large number of modules by default; run ./filebeat modules list to see which data sources filebeat can ingest:

[root@node01 filebeat-6.5.4-linux-x86_64] ./filebeat modules list
Enabled:

Disabled:
apache2
auditd
elasticsearch
haproxy
icinga
iis
kafka
kibana
logstash
mongodb
mysql
nginx
osquery
postgresql
redis
suricata
system
traefik

Beyond nginx log data, filebeat modules can also ingest data from databases and other services into Elasticsearch. For more on modules, see the official documentation: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules.html
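As a sketch of how a module is used: after enabling a module with ./filebeat modules enable nginx, filebeat reads its settings from modules.d/nginx.yml. The fragment below is a minimal example; the var.paths values are an assumption matching the /usr/local/nginx source install from section 8.3.1, so adjust them for your layout.

```yaml
# modules.d/nginx.yml -- minimal sketch; paths assume the source install above
- module: nginx
  access:
    enabled: true
    var.paths: ["/usr/local/nginx/logs/access.log"]
  error:
    enabled: true
    var.paths: ["/usr/local/nginx/logs/error.log"]
```

With the module enabled, restarting filebeat picks up the access and error logs without a hand-written input section.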

(2) Metricbeat

Besides Filebeat, Metricbeat is also used to collect metrics. To match the Elasticsearch version, download Metricbeat 6.5.4 from the official link: https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.5.4-linux-x86_64.tar.gz

[root@node01 ~] cd /itcast/beats
[root@node01 beats] wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.5.4-linux-x86_64.tar.gz
[root@node01 beats] tar -xvf metricbeat-6.5.4-linux-x86_64.tar.gz 
[root@node01 beats] cd metricbeat-6.5.4-linux-x86_64
[root@node01 metricbeat-6.5.4-linux-x86_64] vim metricbeat.yml
########
metricbeat.config.modules:  
  path: ${path.config}/modules.d/*.yml  
  reload.enabled: false 
setup.template.settings:  
  index.number_of_shards: 1  
  index.codec: best_compression 
setup.kibana: 
output.elasticsearch:  
  hosts: ["192.168.92.90:9200","192.168.92.91:9200","192.168.92.92:9200"]
processors:  
  - add_host_metadata: ~  
  - add_cloud_metadata: ~ 
  
[root@node01 metricbeat-6.5.4-linux-x86_64] # start metricbeat
[root@node01 metricbeat-6.5.4-linux-x86_64] ./metricbeat -e

Once started, metricbeat writes the CentOS 7 system metrics to Elasticsearch, as shown in Figure 8-30.

metricbeat likewise ships with many modules; run ./metricbeat modules list to see which data sources metricbeat can collect:

[root@node01 metricbeat-6.5.4-linux-x86_64] ./metricbeat modules list
Enabled:
system

Disabled:
aerospike
apache
ceph
couchbase
docker
dropwizard
elasticsearch
envoyproxy
etcd
golang
graphite
haproxy
http
jolokia
kafka
kibana
kubernetes
kvm
logstash
memcached
mongodb
munin
mysql
nginx
php_fpm
postgresql
prometheus
rabbitmq
redis
traefik
uwsgi
vsphere
windows
zookeeper

For more metricbeat tutorials, see the official documentation: https://www.elastic.co/guide/en/beats/metricbeat/index.html
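For reference, the enabled system module is configured in modules.d/system.yml. The fragment below is a minimal sketch; the exact metricsets listed are an assumption (the stock file enables a similar default set), so treat it as illustrative rather than a copy of the shipped file.

```yaml
# modules.d/system.yml -- minimal sketch of the system module configuration
- module: system
  period: 10s
  metricsets:
    - cpu
    - memory
    - network
    - filesystem
```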

8.3.3 Kibana

Kibana is an open-source analytics and visualization platform, a member of the Elastic Stack designed to work with Elasticsearch. With Kibana you can search, view and interact with the data in Elasticsearch indices, and analyze and present it through charts, tables and maps. Official site: https://www.elastic.co/cn/products/kibana

(1) Installing Kibana

Download Kibana 6.5.4: https://artifacts.elastic.co/downloads/kibana/kibana-6.5.4-linux-x86_64.tar.gz

[root@node01 ~] mkdir /itcast/kibana
[root@node01 ~] cd /itcast/kibana
[root@node01 kibana] wget https://artifacts.elastic.co/downloads/kibana/kibana-6.5.4-linux-x86_64.tar.gz
[root@node01 kibana] tar -xvf kibana-6.5.4-linux-x86_64.tar.gz 
[root@node01 kibana] cd kibana-6.5.4-linux-x86_64/
[root@node01 kibana-6.5.4-linux-x86_64] vim config/kibana.yml
#########
# address on which the service is exposed
server.host: "192.168.92.90"
# Elasticsearch address
elasticsearch.url: "http://192.168.92.90:9200"
# x-pack security keys
xpack.reporting.encryptionKey: "a_random_string"
xpack.security.encryptionKey: "something_at_least_32_characters"

[root@node01 kibana-6.5.4-linux-x86_64] bin/kibana

Open http://192.168.92.90:5601 in a browser to reach the Kibana home page, as shown in Figure 8-31.

Next we use a Kibana dashboard to visualize the CentOS 7 CPU and memory metrics.

[root@node01 kibana] cd ..
[root@node01 itcast] cd beats/metricbeat-6.5.4-linux-x86_64/
[root@node01 metricbeat-6.5.4-linux-x86_64] vim metricbeat.yml
#########
# point metricbeat at Kibana
setup.kibana:
  host: "192.168.92.90:5601"
  
[root@node01 metricbeat-6.5.4-linux-x86_64] ./metricbeat setup --dashboards
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
[root@node01 metricbeat-6.5.4-linux-x86_64] ./metricbeat -e

Back in Kibana, open the "System Metrics Overview" dashboard, as shown in Figure 8-32.


We can see the CPU and memory usage of the CentOS 7 machine, as shown in Figure 8-33.

8.3.4 Logstash

Logstash is an open-source data collection engine with real-time pipelining. It can dynamically unify data from disparate sources and normalize it into the destinations of your choice. Logstash reference documentation: https://www.elastic.co/guide/en/logstash/current/index.html

(1) Installing Logstash

Download Logstash 6.5.4: https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.tar.gz

[root@node01 ~] mkdir /itcast/logstash
[root@node01 ~] cd /itcast/logstash
[root@node01 logstash] wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.tar.gz
[root@node01 logstash] tar -xvf logstash-6.5.4.tar.gz
[root@node01 logstash] cd logstash-6.5.4/
[root@node01 logstash-6.5.4] bin/logstash -e 'input { stdin { } } output { stdout {} }'
hello
{
    "@timestamp" => 2020-02-16T10:44:54.593Z,
       "message" => "hello",
      "@version" => "1",
          "host" => "0.0.0.0"
}

(2) Reading data with Logstash

Next we use Logstash to load the MovieLens dataset, available from: http://files.grouplens.org/datasets/movielens/ml-latest-small.zip

[root@node01 itcast] wget http://files.grouplens.org/datasets/movielens/ml-latest-small.zip
[root@node01 itcast] unzip ml-latest-small.zip
[root@node01 itcast] cd ml-latest-small/
[root@node01 ml-latest-small] ll
total 3236
-rw-r--r--. 1 root root  197979 Sep 27  2018 links.csv
-rw-r--r--. 1 root root  494431 Sep 27  2018 movies.csv
-rw-r--r--. 1 root root 2483723 Sep 27  2018 ratings.csv
-rw-r--r--. 1 root root    8342 Sep 27  2018 README.txt
-rw-r--r--. 1 root root  118660 Sep 27  2018 tags.csv
[root@node01 ml-latest-small] pwd
/home/elsearch/itcast/ml-latest-small
[root@node01 ml-latest-small] cd ..
[root@node01 itcast] cd logstash/logstash-6.5.4/
[root@node01 logstash-6.5.4] vim logstash.conf
#########
input {
  file {
    path => "/home/elsearch/itcast/ml-latest-small/movies.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
# filter stage: parse and normalize the data
filter {
  csv {
    separator => ","
    columns => ["id","content","genre"]
  }
  mutate {
    split => { "genre" => "|" }
    remove_field => ["path", "host","@timestamp","message"]
  }
  mutate {
    split => ["content", "("]
    add_field => { "title" => "%{[content][0]}"}
    add_field => { "year" => "%{[content][1]}"}
  }
  mutate {
    convert => {
      "year" => "integer"
    }
    strip => ["title"]
    remove_field => ["path", "host","@timestamp","message","content"]
  }
}
output {
   elasticsearch {
     hosts => "http://192.168.92.90:9200"
     index => "movies"
     document_id => "%{id}"
   }
  stdout {}
}
[root@node01 logstash-6.5.4] pwd
/home/elsearch/itcast/logstash/logstash-6.5.4
[root@node01 logstash-6.5.4] bin/logstash -f /home/elsearch/itcast/logstash/logstash-6.5.4/logstash.conf
# logstash runs on the JVM, so startup is slow
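To make the filter chain in logstash.conf concrete, here is a minimal Python sketch (not part of the toolchain, purely an illustration, and the sample row is hypothetical) of the same csv/mutate transformations applied to one line of movies.csv:

```python
# Sketch of the Logstash csv + mutate filters from logstash.conf above.
import csv
import io

def transform(line):
    # csv filter: split the row into the columns id, content, genre
    row = next(csv.reader(io.StringIO(line)))
    doc = dict(zip(["id", "content", "genre"], row))
    # mutate: split genre on "|"
    doc["genre"] = doc["genre"].split("|")
    # mutate: split content on "(", deriving title and year
    parts = doc["content"].split("(")
    doc["title"] = parts[0].strip()          # strip => ["title"]
    doc["year"] = int(parts[1].rstrip(")"))  # convert "year" to integer
    del doc["content"]                       # remove_field
    return doc

print(transform('1,"Toy Story (1995)",Adventure|Animation|Children'))
```

Each document indexed into the movies index ends up with id, title, an integer year, and genre as an array, which is what the `%{id}` document_id and the field references in the config rely on.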

We can view the inserted MovieLens data in Advanced REST Client, as shown in Figure 8-34.

Logstash also supports the JDBC input plugin for ingesting data from databases. See the GitHub repository: https://github.com/logstash-plugins/logstash-input-jdbc and the official guide: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html
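As a sketch of what such a pipeline's input section might look like (the driver jar path, connection string, credentials and table name below are all placeholders, not values from this chapter):

```
input {
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "root"
    jdbc_password => "password"
    schedule => "* * * * *"
    statement => "SELECT * FROM movies"
  }
}
```

Combined with an elasticsearch output block like the one used for movies.csv, this polls the database on the given schedule and indexes each row as a document.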
