一、Introduction to ELK
1、Background: as the business grows, the number of servers grows, and with it the volume of logs (access logs, application logs, error logs, and so on). Developers troubleshooting a problem have to log in to each server to read logs, which is inconvenient, and operations staff who need data likewise have to log in to every server to analyze logs, which is tedious.
2、The ELK Stack
Starting with version 5.0, the ELK Stack was renamed the Elastic Stack (ELK Stack + Beats).
The ELK Stack consists of Elasticsearch, Logstash, and Kibana:
Elasticsearch: a search engine used to store, search, and analyze logs. It is distributed, meaning it scales horizontally, supports automatic node discovery, and shards indices automatically.
Logstash: collects logs and parses them into JSON before handing them to Elasticsearch.
Kibana: a data-visualization component that presents the processed results through a web interface.
Beats: a family of lightweight log shippers; the Beats family has five members.
Early ELK architectures used Logstash for both collection and parsing, but Logstash is heavy on memory, CPU, and I/O. Compared with Logstash, the CPU and memory footprint of Beats is almost negligible.
X-Pack extends the Elastic Stack with security, alerting, monitoring, reporting, and graph features in a single package, but it is a paid product.
二、Preparation for installing ELK
1、Machine planning
Prepare three machines.
Role assignment:
(1) Install elasticsearch (es for short) on all three machines.
(2) Install a JDK on all three machines (either openjdk via yum, or the Oracle JDK downloaded from Oracle's site).
(3) One master node: 129; two data nodes: 128 and 131.
(4) Install kibana on the master node (101).
(5) Install logstash + beats on one data node, for example on 102.
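The later sections drive all three machines with ansible; a minimal inventory for that could look like the sketch below (the file path is a scratch location for illustration, and the ELK group name is taken from the "ansible ELK" commands used later in this article — the original inventory file is not shown):

```shell
# Hypothetical ansible inventory for the three nodes; the [ELK] group name
# matches the "ansible ELK" commands that appear later in this article.
cat > /tmp/elk_hosts <<'EOF'
[ELK]
192.168.10.101
192.168.10.102
192.168.10.103
EOF
# Count the host entries to sanity-check the file:
grep -c '^192\.168' /tmp/elk_hosts
```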
2、Installing the JDK
Install JDK 1.8 on machine 129.
The download itself is omitted; download page: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.
Installation:
[root@hongwei-02 ~]# tar xf jdk-8u181-linux-x64.tar.gz -C /usr/local/
[root@hongwei-02~]# mv /usr/local/jdk1.8.0_181/ /usr/local/jdk1.8
[root@hongwei-02 ~]# echo -e "export JAVA_HOME=/usr/local/jdk1.8\nexport PATH=\$PATH:\$JAVA_HOME/bin\nexport CLASSPATH=\$JAVA_HOME/lib\n"> /etc/profile.d/jdk.sh
[root@hongwei-02 ~]# chmod +x /etc/profile.d/jdk.sh
[root@hongwei-02 ~]# source /etc/profile.d/jdk.sh
[root@hongwei-02 ~]#
Run the java command to verify:
[root@hongwei-02 ~]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
OK, the JDK is installed.
The JDK is now installed and configured on 192.168.93.129. Next, on 129, use expect and rsync to push the JDK files to 128 and 131:
[root@hongwei-02 ~]# vim ip.list
192.168.93.128
192.168.93.131
[root@hongwei-02 ~]# vim rsync.sh
#!/bin/bash
cat > rsync.expect <<EOF
#!/usr/bin/expect
set host [lindex \$argv 0]
set file [lindex \$argv 1]
spawn rsync -avr \$file root@\$host:/usr/local/
expect eof
EOF
file=$2
for host in $(cat $1)
do
    expect rsync.expect $host $file
    scp /etc/profile.d/jdk.sh root@$host:/etc/profile.d/
done
rm -f rsync.expect
[root@hongwei-02 ~]# ./rsync.sh ip.list /usr/local/jdk1.8
Source the profile on all nodes:
[root@lb01 ~]# ansible all -m shell -a "source /etc/profile.d/jdk.sh"
Then run java -version everywhere:
[root@lb01 ~]# ansible all -m shell -a "java -version"
192.168.10.103 | SUCCESS | rc=0 >>
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
192.168.10.102 | SUCCESS | rc=0 >>
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
192.168.10.101 | SUCCESS | rc=0 >>
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
[root@lb01 ~]#
OK, the JDK is installed and configured on all three machines.
三、Installing es
Install es on all three machines.
RPM download: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.rpm
[root@lb01 ~]# curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.rpm
[root@lb01 ~]# rpm -ivh elasticsearch-6.4.0.rpm
warning: elasticsearch-6.4.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
1:elasticsearch-0:6.4.0-1 ################################# [100%]
NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch
[root@lb01 ~]#
Alternatively, configure a yum repository and install with yum:
[root@lb01 ~]# vim /etc/yum.repos.d/elk.repo
[elasticsearch]
name=Elasticsearch Repository for 6.x Package
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
enabled=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
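With this repo file in place, the install on each node reduces to a yum command. This is a sketch of the repo-based path, pinned to the 6.4.0 version used in this article; it is not a step from the original transcript:

```shell
# Import the signing key referenced by gpgkey, then install the pinned version.
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
yum install -y elasticsearch-6.4.0
```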
Then push this repo file to the other two nodes:
[root@lb01 ~]# ansible 192.168.10.102,192.168.10.103 -m copy -a "src=/etc/yum.repos.d/elk.repo dest=/etc/yum.repos.d/"
192.168.10.102 | SUCCESS => {
"changed": true,
"checksum": "3a03d411370d2c7e15911fe4d62175158faad4a5",
"dest": "/etc/yum.repos.d/elk.repo",
"gid": 0,
"group": "root",
"md5sum": "53c8bd404275373a7529bd48b57823c4",
"mode": "0644",
"owner": "root",
"size": 195,
"src": "/root/.ansible/tmp/ansible-tmp-1536495999.48-252668322650802/source",
"state": "file",
"uid": 0
}
192.168.10.103 | SUCCESS => {
"changed": true,
"checksum": "3a03d411370d2c7e15911fe4d62175158faad4a5",
"dest": "/etc/yum.repos.d/elk.repo",
"gid": 0,
"group": "root",
"md5sum": "53c8bd404275373a7529bd48b57823c4",
"mode": "0644",
"owner": "root",
"size": 195,
"src": "/root/.ansible/tmp/ansible-tmp-1536495999.48-36568269955377/source",
"state": "file",
"uid": 0
}
[root@lb01 ~]#
四、Configuring es
1、Configure es
elasticsearch has two configuration files: /etc/elasticsearch/elasticsearch.yml and /etc/sysconfig/elasticsearch.
/etc/elasticsearch/elasticsearch.yml: cluster-related configuration.
/etc/sysconfig/elasticsearch: settings for the es service itself.
Configuration on 101:
Modify or add the following in elasticsearch.yml:
[root@lb01 ~]# cp /etc/elasticsearch/elasticsearch.yml{,.bak}
[root@lb01 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk
node.name: my-test01
node.master: true
node.data: false
network.host: 192.168.10.101
discovery.zen.ping.unicast.hosts: ["192.168.10.101","192.168.10.102","192.168.10.103"]
Explanation:
cluster.name: my-elk #name of the cluster
node.name: my-test01 #name of this node
node.master: true #whether this node can act as master (true: yes, false: no)
node.data: false #whether this node stores data (false: no, true: yes)
network.host: 192.168.10.101 #IP address to listen on; 0.0.0.0 means all interfaces
discovery.zen.ping.unicast.hosts: ["192.168.10.101","192.168.10.102","192.168.10.103"] #hosts used for unicast discovery
elasticsearch.yml on 102:
cluster.name: my-elk
node.name: my-test02
node.master: false
node.data: true
network.host: 192.168.10.102
discovery.zen.ping.unicast.hosts: ["192.168.10.101","192.168.10.102","192.168.10.103"]
elasticsearch.yml on 103:
cluster.name: my-elk
node.name: my-test03
node.master: false
node.data: true
network.host: 192.168.10.103
discovery.zen.ping.unicast.hosts: ["192.168.10.101","192.168.10.102","192.168.10.103"]
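Since the three files differ only in the node name, roles, and listen address, they can be generated from a single template. A small sketch (the gen_es_config helper name and the scratch output path are made up for illustration):

```shell
# Emit an elasticsearch.yml for one node, given its name, roles, and IP.
gen_es_config() {
  local name=$1 master=$2 data=$3 ip=$4
  cat <<EOF
cluster.name: my-elk
node.name: ${name}
node.master: ${master}
node.data: ${data}
network.host: ${ip}
discovery.zen.ping.unicast.hosts: ["192.168.10.101","192.168.10.102","192.168.10.103"]
EOF
}

# Example: generate the config for data node 102 into a scratch file.
gen_es_config my-test02 false true 192.168.10.102 > /tmp/es-102.yml
grep 'node.name' /tmp/es-102.yml
```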
On 102 and 103, note the lines that differ from 101: node.name, node.master, node.data, and network.host.
X-Pack is a paid product, so it is not installed here.
2、Add the Java environment to /etc/sysconfig/elasticsearch on all machines
Because the JDK is installed under /usr/local/jdk1.8, that path has to be added to /etc/sysconfig/elasticsearch. Do this on all three machines.
[root@lb01 ~]# vim /etc/sysconfig/elasticsearch
JAVA_HOME=/usr/local/jdk1.8
3、Stop the firewall
This is RHEL 7.5, so the default firewall is firewalld.
[root@lb01 ~]# ansible all -m shell -a "systemctl stop firewalld"
4、Start es
Start es on the master node first, then on the other nodes: systemctl start elasticsearch.service
After startup, check the processes:
[root@lb01 ~]# ansible ELK -m shell -a "ps aux | grep elas"
192.168.10.101 | SUCCESS | rc=0 >>
elastic+ 4167 48.4 63.5 3218372 1279184 ? Ssl 19:53 0:55 /usr/local/jdk1.8/bin/java -Xms1g
-Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.iKVNygVa -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
elastic+ 4217 0.0 0.2 72136 5116 ? Sl 19:53 0:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root 4249 66.7 1.8 403704 37832 pts/6 Sl+ 19:55 0:29 /usr/bin/python2 /usr/bin/ansible ELK -m shell -a ps aux | grep elas
root 4258 2.3 1.7 406852 35444 pts/6 S+ 19:55 0:00 /usr/bin/python2 /usr/bin/ansible ELK -m shell -a ps aux | grep elas
root 4260 6.0 1.8 408832 37688 pts/6 S+ 19:55 0:00 /usr/bin/python2 /usr/bin/ansible ELK -m shell -a ps aux | grep elas
root 4261 2.7 1.7 406852 35412 pts/6 S+ 19:55 0:00 /usr/bin/python2 /usr/bin/ansible ELK -m shell -a ps aux | grep elas
root 4324 0.0 0.0 113172 1208 pts/4 S+ 19:55 0:00 /bin/sh -c ps aux | grep elas
root 4326 0.0 0.0 112704 940 pts/4 R+ 19:55 0:00 grep elas
192.168.10.103 | SUCCESS | rc=0 >>
elastic+ 18631 60.6 73.4 3178368 732804 ? Ssl 19:53 1:17 /usr/local/jdk1.8/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.NtMzlEoG -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
elastic+ 18683 4.4 0.1 63940 1072 ? Sl 19:55 0:02 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root 18731 0.0 0.1 113172 1208 pts/0 S+ 19:56 0:00 /bin/sh -c ps aux | grep elas
root 18733 0.0 0.0 112704 940 pts/0 S+ 19:56 0:00 grep elas
192.168.10.102 | SUCCESS | rc=0 >>
elastic+ 51207 45.9 60.2 3131536 290424 ? Ssl 19:53 1:00 /usr/local/jdk1.8/bin/java -Xms1g
-Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.tF106vFb -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
elastic+ 51258 4.6 0.2 63940 1148 ? Sl 19:55 0:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root 51309 1.0 0.2 113172 1344 pts/0 S+ 19:56 0:00 /bin/sh -c ps aux | grep elas
root 51311 0.0 0.1 112704 936 pts/0 S+ 19:56 0:00 grep elas
[root@lb01 ~]#
OK, es is running on all three machines.
五、Inspecting es with curl
1、Check the cluster health
[root@lb01 ~]# curl '192.168.10.101:9200/_cluster/health?pretty'
{
"cluster_name" : "my-elk",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 2,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
[root@lb01 ~]#
Explanation:
status: the cluster status; green means healthy.
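The _cat APIs give the same information in a more compact, human-readable form; for example, run against the master node as above:

```shell
# One-line health summary, plus a per-node overview of the cluster.
curl '192.168.10.101:9200/_cat/health?v'
curl '192.168.10.101:9200/_cat/nodes?v'
```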
2、View detailed cluster state
[root@lb01 ~]# curl '192.168.10.101:9200/_cluster/state?pretty' | more
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{
"cluster_name" : "my-elk",
"compressed_size_in_bytes" : 9585,
"cluster_uuid" : "SPnC040UT6eo_o_a6hw9Hw",
"version" : 5,
"state_uuid" : "hH1LRd3RSuCQzyykx7ZIuQ",
"master_node" : "MT6Hvwv9Sziu1xBmNcl89g",
"blocks" : { },
"nodes" : {
"S1ArtroOTZuswzayKr9wmA" : {
"name" : "my-test02",
"ephemeral_id" : "S1EqQurVQLe9fDhaB4lf2Q",
"transport_address" : "192.168.10.102:9300",
"attributes" : {
"ml.machine_memory" : "493441024",
"ml.max_open_jobs" : "20",
"xpack.installed" : "true",
"ml.enabled" : "true"
}
},
"MT6Hvwv9Sziu1xBmNcl89g" : {
"name" : "my-test01",
"ephemeral_id" : "JdMZ9Up4QwGribVXcav9cA",
"transport_address" : "192.168.10.101:9300",
--More--
六、Installing kibana
1、Install kibana on the master node
Kibana is installed on the master node, i.e. on 192.168.10.101.
Download: https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-x86_64.rpm
[root@lb01 ~]# curl -O https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-x86_64.rpm
[root@lb01 ~]# rpm -ivh kibana-6.4.0-x86_64.rpm
Since the yum repository was configured earlier, kibana can also be installed with yum, though that tends to be slow.
2、Configuration file
kibana's configuration file is /etc/kibana/kibana.yml
[root@lb01 ~]# vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.10.101"
elasticsearch.url: "http://192.168.10.101:9200"
logging.dest: /var/log/kibana.log
Create the log file:
[root@lb01 ~]# touch /var/log/kibana.log && chmod 777 /var/log/kibana.log
3、Start kibana
[root@lb01 ~]# systemctl start kibana
[root@lb01 ~]# netstat -tnlp | grep node
tcp        0      0 192.168.10.101:5601     0.0.0.0:*               LISTEN      5383/node
[root@lb01 ~]#
Open 192.168.10.101:5601 in a browser.
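Before reaching for the browser, kibana's status endpoint can confirm the service is up (kibana 6.x exposes /api/status):

```shell
# Query kibana's status API; a running instance answers with JSON that
# includes an overall state such as "green".
curl -s '192.168.10.101:5601/api/status'
```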
七、Installing logstash
1、Download and install logstash
Install logstash on any data node; here it goes on 102.
Note: logstash does not support JDK 9.
logstash download: https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.rpm
[root@lb01 ~]# curl -O https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.rpm
[root@lb01 ~]# rpm -ivh logstash-6.4.0.rpm
2、Configure logstash to collect syslog
[root@lb02 ~]# vim /etc/logstash/conf.d/syslog.conf
input {
syslog {
type => "system-syslog"
port => 10514
}
}
output {
stdout {
codec => rubydebug
}
}
Add the JAVA_HOME variable to /usr/share/logstash/bin/logstash.lib.sh:
[root@lb02 ~]# vim /usr/share/logstash/bin/logstash.lib.sh
JAVA_HOME=/usr/local/jdk1.8
Check the configuration file for errors:
[root@lb02 ~]# cd /usr/share/logstash/bin
[root@lb02 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-09-09T21:07:49,379][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
[2018-09-09T21:07:49,506][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
[2018-09-09T21:07:52,013][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[2018-09-09T21:08:01,228][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@lb02 bin]#
3、Start logstash in the foreground
[root@lb02 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
...
[2018-09-09T21:13:59,328][INFO ][logstash.agent ] Pipelines running {:count=> 1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-09-09T21:14:02,092][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
This runs in the foreground and does not exit; press Ctrl-C to stop it.
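Once rsyslog forwarding to port 10514 is in place (configured in the next section), the pipeline can be exercised from another terminal while logstash runs in the foreground:

```shell
# Write one test line into syslog; rsyslog forwards it to logstash, which
# should print it to stdout in rubydebug form.
logger "elk pipeline test message"
```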
八、Configuring logstash
Edit /etc/logstash/conf.d/syslog.conf to send events to elasticsearch:
[root@lb02 ~]# vim /etc/logstash/conf.d/syslog.conf
input {
syslog {
type => "system-syslog"
port => 10514
}
}
output {
elasticsearch {
hosts => ["192.168.10.101:9200"]
index => "system-syslog-%{+YYYY.MM}"
}
}
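The %{+YYYY.MM} part of the index name is a date pattern resolved from each event's @timestamp, so events are bucketed into one index per month. For a September 2018 event the name works out as:

```shell
# Reproduce the index name logstash would build for a 2018-09 event
# (GNU date is assumed for the -d flag).
echo "system-syslog-$(date -d 2018-09-09 +%Y.%m)"
# -> system-syslog-2018.09
```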
Change the host IP that logstash's API listens on:
[root@lb02 ~]# vim /etc/logstash/logstash.yml
http.host: "192.168.10.102"
Edit rsyslog and add one line:
[root@lb02 ~]# vim /etc/rsyslog.conf
Under the RULES section add:
*.* @@192.168.10.102:10514
Restart rsyslog: systemctl restart rsyslog
Start logstash:
[root@lb02 ~]# systemctl start logstash
Check the ports (logstash starts slowly; it takes a while before the ports come up):
[root@lb02 ~]# netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN
tcp6       0      0 192.168.10.102:9200     :::*                    LISTEN
tcp6       0      0 :::10514                :::*                    LISTEN
tcp6       0      0 192.168.10.102:9300     :::*                    LISTEN
tcp6       0      0 :::22                   :::*                    LISTEN
tcp6       0      0 ::1:25                  :::*                    LISTEN
[root@lb02 ~]#
Ports 10514 and 9600 are now listening.
九、Viewing logs in kibana
As set up earlier, kibana runs on 192.168.10.101, listening on port 5601.
List the indices: curl '192.168.10.101:9200/_cat/indices?v'
[root@lb01 ~]# curl '192.168.10.101:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open system-syslog-2018.09 7kIi3UOtQR6DJaPmBZF1JQ 5 1 15 0 130.3kb 74.3kb
green open .kibana SAqQ2nPHQfWkHrx3uBb0zA 1 1 1 0 8kb 4kb
[root@lb01 ~]#
View an index's details: curl -XGET '192.168.10.101:9200/<index name>?pretty'
[root@lb01 ~]# curl -XGET '192.168.10.101:9200/system-syslog-2018.09?pretty'
{
"system-syslog-2018.09" : {
"aliases" : { },
"mappings" : {
"doc" : {
"properties" : {
"@timestamp" : {
"type" : "date"
},
"@version" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
...
Deleting an index:
Reference: https://zhaoyanblog.com/archives/732.html
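For reference, an index can be removed with the DELETE method; this is destructive, so double-check the name. Using the index created in this article:

```shell
# Delete the monthly syslog index; on success elasticsearch answers
# with an "acknowledged" JSON response.
curl -XDELETE '192.168.10.101:9200/system-syslog-2018.09'
```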
Viewing the logs in kibana:
Open 192.168.10.101:5601 in a browser.
In the left-hand menu go to Management - Index Patterns - Create Index Pattern, and enter a pattern matching the system-syslog-2018.09 index created earlier.
Click Next, finish the remaining settings, and create the pattern.
Once created, click Discover in the left-hand menu to see the log entries.
十、Collecting nginx logs
1、On the logstash host (192.168.10.102), create a logstash config for collecting the nginx logs.
nginx is installed on 192.168.10.102; its installation is omitted here.
[root@lb02 ~]# vim /etc/logstash/conf.d/nginx.conf
input {
file {
path => "/tmp/elk_access.log"
start_position => "beginning"
type => "nginx"
}
}
filter {
grok {
match => {
"message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"
}
}
geoip {
source => "clientip"
}
}
Check the configuration for errors:
[root@lb02 ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-09-13T21:34:10,596][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[2018-09-13T21:34:21,798][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@lb02 ~]#
Restart logstash:
[root@lb02 ~]# systemctl restart logstash
2、Set up the nginx virtual host
nginx install directory: /usr/local/nginx
[root@lb02 ~]# vim /usr/local/nginx/conf.d/elk.conf
server {
listen 80;
server_name elk.localhost;
location / {
proxy_pass http://192.168.10.101:5601;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
access_log /tmp/elk_access.log main2;
}
3、Define the nginx log format
Add the following custom main2 log format to nginx.conf (inside the http block):
[root@lb02 ~]# vim /usr/local/nginx/conf/nginx.conf
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$upstream_addr" $request_time';
Start nginx:
[root@lb02 ~]# /usr/local/nginx/sbin/nginx
[root@lb02 ~]#
[root@lb02 ~]# netstat -tnlp | grep 80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3378/nginx: master
[root@lb02 ~]#
Check the indices again:
[root@lb01 ~]# curl '192.168.10.101:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open system-syslog-2018.09 7kIi3UOtQR6DJaPmBZF1JQ 5 1 188 0 894.8kb 465.4kb
green open .kibana SAqQ2nPHQfWkHrx3uBb0zA 1 1 2 0 22kb 11kb
green open nginx-test-2018.09.13 wGtUOYe0QJqpBE5iJbQVTQ 5 1 99 0 298.3kb 149.1kb
[root@lb01 ~]#
OK, the nginx-test index has been picked up; the configuration works.
On the physical host, add a hosts entry:
192.168.10.102 elk.localhost
Open elk.localhost in a browser, create an index pattern for the nginx index the same way as before, and view the nginx logs.
OK.
十一、Collecting logs with beats
1、Download and install filebeat
Beats are lightweight log shippers. Site: https://www.elastic.co/cn/products/beats
Install filebeat on the 192.168.10.103 machine. Download:
https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-x86_64.rpm
[root@rs01 ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-x86_64.rpm
2、Edit the configuration file
[root@rs01 ~]# vim /etc/filebeat/filebeat.yml
Comment out the following lines:
#enabled: false
#output.elasticsearch:
#  hosts: ["localhost:9200"]
Add a console output:
output.console:
  enable: true
Change paths to the file to collect:
paths:
  - /var/log/messages
Run filebeat in the foreground to check:
[root@rs01 ~]# /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml
{"@timestamp":"2018-09-13T14:46:50.390Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.4.0"},"source":"/var/log/messages","offset":709995,"message":"Sep 13 22:46:48 rs01 systemd-logind: Removed session 10.","prospector":{"type":"log"},"input":{"type":"log"},"host":{"name":"rs01"},"beat":{"name":"rs01","hostname":"rs01","version":"6.4.0"}}
{"@timestamp":"2018-09-13T14:46:57.446Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.4.0"},"host":{"name":"rs01"},"message":"Sep 13 22:46:56 rs01 systemd: Started Session 11 of user root.","source":"/var/log/messages","offset":710052,"prospector":{"type":"log"},"input":{"type":"log"},"beat":{"name":"rs01","hostname":"rs01","version":"6.4.0"}}
{"@timestamp":"2018-09-13T14:46:57.523Z","@metadata":
The messages log is printed to the screen.
The setup above ships /var/log/messages; next, change the path to /var/log/elasticsearch/my-elk.log (any log file under /var/log/ would do). Comment out the earlier
output.console:
  enable: true
and set:
output.elasticsearch:
  hosts: ["192.168.93.129:9200"]
[root@rs01 ~]# vim /etc/filebeat/filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  #enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/elasticsearch/my-elk.log
......
#output.console:
#  enable: true
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.10.101:9200"]
After editing, start filebeat:
[root@hongwei-02 ~]# systemctl start filebeat
[root@rs01 ~]#
Check the es indices:
[root@lb01 ~]# curl '192.168.10.101:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open filebeat-6.4.0-2018.09.13 hJ92dpWGShq2J8uI8hN7pQ 3 1 980 0 484.8kb 223.8kb
green open nginx-test-2018.09.13 wGtUOYe0QJqpBE5iJbQVTQ 5 1 25779 0 12.1mb 6.1mb
green open system-syslog-2018.09 7kIi3UOtQR6DJaPmBZF1JQ 5 1 25868 0 12.7mb 6.3mb
green open .kibana SAqQ2nPHQfWkHrx3uBb0zA 1 1 3 0 36.1kb 18kb
[root@lb01 ~]#
OK, es is receiving the filebeat data, so the index pattern can now be created in kibana.
Compared with logstash, filebeat is much simpler to set up.