Analyzing MySQL Slow Logs with ELK (work in progress)

Environment

ES node1: 192.168.237.25

ES node2: 192.168.237.26

ES node3: 192.168.237.27

Redis, Logstash, Kibana: 192.168.237.30

MySQL node: 192.168.237.9

 

1. Filebeat

Since we need to collect the database slow log, Filebeat must be installed on the server that hosts MySQL.

Log in to 192.168.237.9.

1.1 Install Filebeat

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.2-x86_64.rpm

sudo rpm -vi filebeat-7.6.2-x86_64.rpm

rpm -qc filebeat  # show the paths of the configuration files

 

1.2 Configure Filebeat

vim /etc/filebeat/filebeat.yml

Modify: enabled, paths, output.redis

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/mysql/slow.log  # log source

#------------------------------- Redis output ---------------------------------
output.redis:
  hosts: ["192.168.237.30:6379"]  # Redis address
  key: "mysql-slowlog"  # Redis key
  db: 2  # must match the db used by the Logstash redis input
  password: 123456  # Redis password
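
As background on what Filebeat will be shipping: each MySQL slow-log entry is a comment header (`# Time:`, `# User@Host:`, `# Query_time: ...`) followed by the SQL text. The field names below follow the standard slow-log format, but the parser itself is only an illustrative Python sketch, not part of this setup:

```python
import re

# Timing header of one slow-log entry (standard MySQL/Percona field names).
HEADER = re.compile(
    r"# Query_time: (?P<query_time>[\d.]+)\s+"
    r"Lock_time: (?P<lock_time>[\d.]+)\s+"
    r"Rows_sent: (?P<rows_sent>\d+)\s+"
    r"Rows_examined: (?P<rows_examined>\d+)"
)

def parse_entry(entry: str) -> dict:
    """Extract timing fields and the SQL statement from one slow-log entry."""
    fields = {}
    sql_lines = []
    for line in entry.splitlines():
        m = HEADER.search(line)
        if m:
            fields.update({k: float(v) for k, v in m.groupdict().items()})
        elif not line.startswith("#"):
            sql_lines.append(line)
    fields["sql"] = " ".join(sql_lines).strip()
    return fields

sample = """# Time: 2020-04-16T06:35:56.108Z
# User@Host: app[app] @ localhost []
# Query_time: 2.000316  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0
SELECT SLEEP(2);"""
print(parse_entry(sample)["query_time"])  # 2.000316
```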

 

2. Redis

2.1 Install Redis

Log in to 192.168.237.30.

wget http://download.redis.io/releases/redis-5.0.8.tar.gz

yum install gcc

tar zxvf redis-5.0.8.tar.gz

cd redis-5.0.8

make

make install PREFIX=/usr/local/redis

mkdir /usr/local/redis/etc

cp redis.conf /usr/local/redis/etc

Edit /usr/local/redis/etc/redis.conf:

bind 0.0.0.0  # allow remote connections
port 6379
timeout 120  # idle connection timeout (seconds)
daemonize yes  # run in the background
dir /data/redis  # directory for dump.rdb and appendonly.aof
requirepass 123456  # Redis password
appendonly yes  # enable AOF persistence

ln -s /usr/local/redis/bin/redis-server /usr/local/sbin/

ln -s /usr/local/redis/bin/redis-cli /usr/local/sbin/

Open the default port 6379 on the firewall.

Start Redis (if the configuration file has been updated, simply run the command again to reload it; note that a password change still requires killing the redis process and restarting):
redis-server /usr/local/redis/etc/redis.conf

Log in and authenticate:

# redis-cli

127.0.0.1:6379> AUTH 123456

 

2.2 Verify connectivity between Filebeat and Redis

Back on the client (the MySQL server):

Step 1: systemctl start filebeat

If it fails to start, inspect the output of cat /var/log/messages | tail -n 200 and adjust the filebeat or redis configuration accordingly.

Step 2: run redis-cli -h 192.168.237.30 and verify that you can log in (otherwise check the redis configuration); then run keys * and check whether the output contains "mysql-slowlog".
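
To make the buffering explicit: the Filebeat redis output RPUSHes one JSON document per log line onto the "mysql-slowlog" list, and the Logstash redis input pops documents off the other end, so keys * only shows the key while events are waiting to be consumed. A toy in-memory model of those list semantics (illustration only, no real Redis involved):

```python
from collections import deque

class ListKey:
    """Toy model of a Redis list key used as a FIFO queue."""
    def __init__(self):
        self.items = deque()

    def rpush(self, *values):
        # Filebeat's redis output appends events to the tail of the list.
        self.items.extend(values)
        return len(self.items)

    def lpop(self):
        # Logstash's redis input (data_type => "list") pops from the head.
        return self.items.popleft() if self.items else None

queue = ListKey()
queue.rpush('{"message": "# Query_time: 2.000316 ..."}')
queue.rpush('{"message": "SELECT SLEEP(2);"}')
print(queue.lpop())  # the oldest event comes out first
```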

 

3. Logstash

3.1 Install

Install the JDK (see Section 4.1 in the ES chapter for the procedure).

Install Logstash.

Download: https://www.elastic.co/cn/downloads/logstash

mkdir /usr/local/logstash

tar zxvf logstash-7.6.2.tar.gz -C /usr/local/logstash --strip-components 1

Test:

[root@ceshi23730 config]# /usr/local/logstash/bin/logstash  -e 'input { stdin { } } output { stdout {} }'
Sending Logstash logs to /usr/local/logstash/logs which is now configured via log4j2.properties
[2020-04-17T14:42:46,293][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-04-17T14:42:46,534][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.6.2"}
[2020-04-17T14:42:48,362][INFO ][org.reflections.Reflections] Reflections took 34 ms to scan 1 urls, producing 20 keys and 40 values 
[2020-04-17T14:42:49,549][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-04-17T14:42:49,569][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["config string"], :thread=>"#<Thread:0x7531b408 run>"}
[2020-04-17T14:42:50,290][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2020-04-17T14:42:50,437][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-04-17T14:42:50,692][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
Type: test
/usr/local/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
    "@timestamp" => 2020-04-17T06:43:08.196Z,
       "message" => "test",
      "@version" => "1",
          "host" => "ceshi23730"
}

 

3.2 Configure

Create /usr/local/logstash/config/logstash-simple.conf:

input {
  redis {
    host => "192.168.237.30"
    port => 6379
    password => "123456"
    db => 2
    key => "mysql-slowlog"
    data_type => "list"
    batch_count => 1
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.237.25:9200"]
    #index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    index => "logstash-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
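
For reference, the %{+YYYY.MM.dd} part of the index setting is expanded from each event's @timestamp (in UTC), so Logstash creates one index per day. A small sketch of the same expansion in Python (the helper name is ours; the Joda-style pattern YYYY.MM.dd corresponds to strftime %Y.%m.%d):

```python
from datetime import datetime, timezone

def index_for(event_timestamp: str, prefix: str = "logstash") -> str:
    """Expand a daily index name from an event's ISO-8601 @timestamp."""
    ts = datetime.fromisoformat(event_timestamp.replace("Z", "+00:00"))
    return f"{prefix}-{ts.astimezone(timezone.utc).strftime('%Y.%m.%d')}"

print(index_for("2020-04-16T06:35:56.108Z"))  # logstash-2020.04.16
```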


Start:

nohup /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-simple.conf 2>&1 >/dev/null &

Stop:

kill -TERM {logstash_pid}

 

4. ES

Install the JDK and Elasticsearch on all three ES nodes.

4.1 JDK

rpm -qa | grep openjdk  # find any existing OpenJDK packages and remove them

Using the Alibaba Dragonwell JDK as an example:

wget https://github.com/alibaba/dragonwell8/releases/download/dragonwell-8.3.3-GA/Alibaba_Dragonwell_8.3.3-GA_Linux_x64.tar.gz

mkdir /usr/local/Alibaba_Dragonwell_8.3.3

tar zxvf Alibaba_Dragonwell_8.3.3-GA_Linux_x64.tar.gz -C /usr/local/Alibaba_Dragonwell_8.3.3 --strip-components 1

chown -R root:root /usr/local/Alibaba_Dragonwell_8.3.3/

Set the environment variables:

export JAVA_HOME=/usr/local/Alibaba_Dragonwell_8.3.3

export PATH=${JAVA_HOME}/bin:$PATH

Run java -version to verify; once confirmed, add the two export lines to /etc/profile:

source /etc/profile

 

4.2 Install Elasticsearch (all 3 nodes)

Download: https://www.elastic.co/cn/downloads/elasticsearch

mkdir /usr/local/elasticsearch

tar zxvf elasticsearch-7.6.2-linux-x86_64.tar.gz -C /usr/local/elasticsearch --strip-components 1

groupadd elasticsearch

useradd elasticsearch -g elasticsearch

passwd elasticsearch

chown -R elasticsearch:elasticsearch /usr/local/elasticsearch/

mkdir /data/elasticsearch-data

mkdir /data/elasticsearch-logs

chown -R elasticsearch:elasticsearch /data/elasticsearch*

 

4.3 Configure

4.3.1 System settings

(1) Edit /etc/security/limits.conf:
* soft nofile 65535
* hard nofile 65535
* soft nproc 32000
* hard nproc 32000

(2) Edit /etc/sysctl.conf:
vm.max_map_count=655300

Reload: sysctl -p

After applying the settings above, reboot the server and confirm the open files and max user processes values with ulimit -a. This normally resolves the following errors when starting ES later:

[elasticsearch@ceshi23725 elasticsearch]$ ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
ERROR: Elasticsearch did not exit normally - check the logs at /data/elasticsearch-logs/es-cluster.log
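
The two failed checks above can be modeled against the thresholds ES 7.x enforces (65535 file descriptors and a vm.max_map_count of 262144). A small sketch covering just these two checks (bootstrap_checks is our own illustrative helper, not an ES API):

```python
# Thresholds enforced by the Elasticsearch 7.x bootstrap checks.
MIN_NOFILE = 65535
MIN_MAX_MAP_COUNT = 262144

def bootstrap_checks(nofile: int, max_map_count: int) -> list:
    """Return the complaints ES would raise for the given system limits."""
    problems = []
    if nofile < MIN_NOFILE:
        problems.append(
            f"max file descriptors [{nofile}] for elasticsearch process is "
            f"too low, increase to at least [{MIN_NOFILE}]")
    if max_map_count < MIN_MAX_MAP_COUNT:
        problems.append(
            f"max virtual memory areas vm.max_map_count [{max_map_count}] is "
            f"too low, increase to at least [{MIN_MAX_MAP_COUNT}]")
    return problems

print(len(bootstrap_checks(4096, 65530)))    # the defaults: both checks fail
print(len(bootstrap_checks(65535, 655300)))  # after limits.conf + sysctl.conf
```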
 

4.3.2 Elasticsearch master node configuration

Edit /usr/local/elasticsearch/config/elasticsearch.yml:

cluster.name: es-cluster
node.name: node-1

# master-eligible node
node.master: true
node.data: false

path.data: /data/elasticsearch-data
path.logs: /data/elasticsearch-logs
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300  # inter-node transport port

http.cors.enabled: true  # enable cross-origin requests
http.cors.allow-origin: "*"  # allow all origins

cluster.initial_master_nodes: ["192.168.237.25"]  # master-eligible nodes used for the first election; multiple entries may be listed, separated by commas

 

4.3.3 Elasticsearch data node configuration

Edit /usr/local/elasticsearch/config/elasticsearch.yml:

cluster.name: es-cluster
node.name: node-2

# data node
node.master: false
node.data: true

path.data: /data/elasticsearch-data
path.logs: /data/elasticsearch-logs
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300  # inter-node transport port

http.cors.enabled: true  # enable cross-origin requests
http.cors.allow-origin: "*"  # allow all origins

discovery.seed_hosts: ["192.168.237.25:9300"]  # master-eligible nodes to contact for discovery

 

4.3.4 Start

(1) Start the master node:

su elasticsearch

cd /usr/local/elasticsearch

./bin/elasticsearch -d -p /data/elasticsearch-data/elasticsearch.pid

(2) Start the two data nodes the same way.

(3) Verify

Basic ES cluster information:

[root@ceshi23709 ~]# curl -XGET http://192.168.237.25:9200
{
  "name" : "node-1",
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "cU0D3TDWQT--JS2WJ_IEeg",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Cluster health:

[root@ceshi23709 ~]# curl -XGET http://192.168.237.25:9200/_cluster/health?pretty
{
  "cluster_name" : "es-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
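
Reading the health response: green means every primary and replica shard is assigned, yellow means some replicas are unassigned, and red means at least one primary shard is unassigned. A short sketch (summarize is our own helper, for illustration only) that condenses the JSON above into one line:

```python
import json

def summarize(health_json: str) -> str:
    """One-line summary of an Elasticsearch _cluster/health response."""
    h = json.loads(health_json)
    return (f"{h['cluster_name']}: {h['status']}, "
            f"{h['number_of_data_nodes']}/{h['number_of_nodes']} data nodes, "
            f"{h['unassigned_shards']} unassigned shards")

sample = json.dumps({
    "cluster_name": "es-cluster", "status": "green",
    "number_of_nodes": 3, "number_of_data_nodes": 2,
    "unassigned_shards": 0,
})
print(summarize(sample))  # es-cluster: green, 2/3 data nodes, 0 unassigned shards
```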

(4) Stop

pkill -F /data/elasticsearch-data/elasticsearch.pid

 

4.3.5 Verify end-to-end data flow

[root@ceshi23730 redis]# curl http://192.168.237.25:9200/_search?pretty
{
  "took" : 9,
  "timed_out" : false,
  "_shards" : {
    "total" : 4,
    "successful" : 4,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 99,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "%logstash-2020.04.16",
        "_type" : "_doc",
        "_id" : "Gf2Mh3EBQtJt23-0Q7E-",
        "_score" : 1.0,
        "_source" : {
          "message" : "./bin/mysqld, Version: 5.7.28-31-31.41-log (Percona XtraDB Cluster binary (GPL) 5.7.28-31.41, Revision ef2fa88, wsrep_31.41). started with:",
          "log" : {
            "offset" : 0,
            "file" : {
              "path" : "/data/mysql/slow.log"
            }
          },
          "@version" : "1",
          "host" : {
            "containerized" : false,
            "id" : "5188e09f2c0d47b2ad736027bcd0f083",
            "name" : "ceshi23709",
            "architecture" : "x86_64",
            "os" : {
              "platform" : "centos",
              "version" : "7 (Core)",
              "kernel" : "3.10.0-862.el7.x86_64",
              "family" : "redhat",
              "codename" : "Core",
              "name" : "CentOS Linux"
            },
            "hostname" : "ceshi23709"
          },
          "input" : {
            "type" : "log"
          },
          "ecs" : {
            "version" : "1.4.0"
          },
          "@timestamp" : "2020-04-16T06:35:56.108Z",
          "agent" : {
            "version" : "7.6.2",
            "ephemeral_id" : "70273e71-6179-409c-b912-9fd46a427367",
            "type" : "filebeat",
            "id" : "4ba9aad7-7b72-49d0-86d3-d8f0106c0a71",
            "hostname" : "ceshi23709"
          }
        }
      },
      {
        "_index" : "%logstash-2020.04.16",
        "_type" : "_doc",
        "_id" : "L_2Mh3EBQtJt23-0Q7E-",
        "_score" : 1.0,
        "_source" : {
          "ecs" : {
            "version" : "1.4.0"
          },
          "log" : {
            "offset" : 192,
            "file" : {
              "path" : "/data/mysql/slow.log"
            }
          },
          "@version" : "1",
          "host" : {
            "containerized" : false,
            "id" : "5188e09f2c0d47b2ad736027bcd0f083",
            "os" : {
              "version" : "7 (Core)",
              "kernel" : "3.10.0-862.el7.x86_64",
              "platform" : "centos",
              "family" : "redhat",
              "name" : "CentOS Linux",
              "codename" : "Core"
            },
            "name" : "ceshi23709",
            "architecture" : "x86_64",
            "hostname" : "ceshi23709"
          },
          "input" : {
            "type" : "log"
          },
          "message" : "Time                 Id Command    Argument",
          "@timestamp" : "2020-04-16T06:35:56.108Z",
          "agent" : {
            "ephemeral_id" : "70273e71-6179-409c-b912-9fd46a427367",
            "version" : "7.6.2",
            "type" : "filebeat",
            "id" : "4ba9aad7-7b72-49d0-86d3-d8f0106c0a71",
            "hostname" : "ceshi23709"
          }
        }
      },
      ...
 

5. Kibana

5.1 Install

Download: https://www.elastic.co/cn/downloads/kibana

mkdir /usr/local/kibana

tar zxvf kibana-7.6.2-linux-x86_64.tar.gz -C /usr/local/kibana --strip-components 1

5.2 Configure

/usr/local/kibana/config/kibana.yml

server.port: 5601
server.host: "192.168.237.30"
server.name: "kibana"
elasticsearch.hosts: ["http://192.168.237.25:9200"]  # ES master node
kibana.index: ".kibana"
pid.file: /usr/local/kibana/data/kibana.pid

 

5.3 Start

nohup /usr/local/kibana/bin/kibana --allow-root >> /usr/local/kibana/data/kibana.log &

Visit: http://192.168.237.30:5601/status

Stop: pkill -F /usr/local/kibana/data/kibana.pid

 

References

If you would rather use the Oracle JDK but dislike that the official download requires registration, Huawei's mirror can be used instead (its JDK versions are older).

Alibaba Dragonwell8 JDK on GitHub

Elasticsearch APIs

Building a centralized log-analysis platform with ELK (Elasticsearch + Logstash + Kibana)

A deep dive into multi-node role configuration in Elasticsearch 5.X clusters
