Installing and Using ELK 6.0
1. Preparation for Installing Elasticsearch
Prepare three machines so we can set up a distributed cluster; more machines are of course even better:
- 192.168.1.17
- 192.168.1.10
- 192.168.1.11
Role assignment:
- Install JDK 1.8 on all three machines, since Elasticsearch is written in Java
- Install Elasticsearch on all three machines (abbreviated as "es" below)
- 192.168.1.17 acts as the master node
- 192.168.1.10 and 192.168.1.11 act as data nodes
- Kibana is installed on the master node
- Logstash is installed on 192.168.1.10
ELK version information:
- Elasticsearch-6.0.0
- logstash-6.0.0
- kibana-6.0.0
- filebeat-6.0.0
Then disable the firewall on all three machines, or flush the firewall rules.
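On CentOS 7 this is typically done as follows (assuming firewalld is the active firewall; adjust if you manage iptables directly):
systemctl stop firewalld
systemctl disable firewalld
## or, if you only want to flush existing iptables rules:
iptables -F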
Configure the hosts file on all three machines as follows:
$ vim /etc/hosts
192.168.1.17 master-node
192.168.1.10 lb-node1
192.168.1.11 lb-node2
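A quick sanity check that the names resolve on each machine (simple illustrative commands):
ping -c 1 master-node
ping -c 1 lb-node1
ping -c 1 lb-node2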
Install es on all three hosts
## Tsinghua University mirror, faster to download from
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.0.0/elasticsearch-6.0.0.rpm
rpm -ivh elasticsearch-6.0.0.rpm
Configure es
[root@master-node ~]# ll /etc/elasticsearch/
total 16
-rw-rw---- 1 root elasticsearch 2870 Nov 11 2017 elasticsearch.yml
-rw-rw---- 1 root elasticsearch 2678 Nov 11 2017 jvm.options
-rw-rw---- 1 root elasticsearch 5091 Nov 11 2017 log4j2.properties
[root@master-node ~]# ll /etc/sysconfig/elasticsearch
-rw-rw---- 1 root elasticsearch 1593 Nov 11 2017 /etc/sysconfig/elasticsearch
[root@master-node ~]#
The elasticsearch.yml file is used to configure the cluster, nodes and related settings, while the elasticsearch file under /etc/sysconfig configures the service itself, for example the path of the configuration directory and various Java-related paths.
[root@master-node ~]# grep '^[a-Z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: master-node # name of the cluster
node.name: master # name of this node
node.master: true # this node is master-eligible
node.data: false # this node is not a data node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0 # listen on all IPs; in a real environment this should be set to a safe IP
http.port: 9200 # port of the es HTTP service
bootstrap.memory_lock: true # lock memory and avoid swap (in 6.x the setting is memory_lock rather than the old mlockall)
discovery.zen.ping.unicast.hosts: ["192.168.1.17", "192.168.1.10", "192.168.1.11"] # unicast hosts for cluster discovery
[root@master-node ~]#
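Note that for the memory-lock setting to actually take effect with an RPM/systemd install, the service usually also needs an unlimited memlock limit. A minimal sketch using a systemd drop-in (standard systemd convention; adjust the path if your distribution differs):
mkdir -p /etc/systemd/system/elasticsearch.service.d
cat > /etc/systemd/system/elasticsearch.service.d/override.conf <<EOF
[Service]
LimitMEMLOCK=infinity
EOF
systemctl daemon-reload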
Then copy the configuration file to the other two machines and modify the following settings:
[root@master-node ~]# scp -pr /etc/elasticsearch/elasticsearch.yml lb-node1:/etc/elasticsearch/elasticsearch.yml
[root@master-node ~]# scp -pr /etc/elasticsearch/elasticsearch.yml lb-node2:/etc/elasticsearch/elasticsearch.yml
[root@lb-node2 ~]# vim /etc/elasticsearch/elasticsearch.yml
node.name: lb-node2
node.master: false
node.data: true
path.data: /data/es-data # data storage path
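Since path.data points to a non-default directory, it must exist and be owned by the elasticsearch user before the service starts (a minimal sketch):
mkdir -p /data/es-data
chown -R elasticsearch:elasticsearch /data/es-data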
Once configuration is done, go back to the master node and start the es service. Port 9300 is used for cluster communication, while 9200 is used for data/HTTP traffic:
[root@master-node ~]# systemctl start elasticsearch.service
[root@master-node ~]# netstat -lntp|grep java
tcp6 0 0 :::9200 :::* LISTEN 50034/java
tcp6 0 0 :::9300 :::* LISTEN 50034/java
[root@master-node ~]#
## If startup fails, check the logs
[root@master-node ~]# ls /var/log/elasticsearch/
[root@master-node ~]# tail -n50 /var/log/messages
After the master node is up, start the es service on the other nodes.
Check the es cluster status with curl
[root@master-node ~]# curl '192.168.1.17:9200/_cluster/health?pretty'
{
"cluster_name" : "master-node",
"status" : "green", # 爲green則代表健康沒問題,如果是yellow或者red則是集羣有問題
"timed_out" : false, # 是否有超時
"number_of_nodes" : 3, # 集羣中的節點數量
"number_of_data_nodes" : 2, # 集羣中data節點的數量
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
[root@master-node ~]#
View the detailed cluster state:
[root@master-node ~]# curl '192.168.1.17:9200/_cluster/state?pretty'
{
"cluster_name" : "master-node",
"compressed_size_in_bytes" : 346,
"version" : 6,
"state_uuid" : "xgMwKKfxTWmpXWC-0RCYLw",
"master_node" : "ojBw2Bu7SQqfZ4GjSQ8z1A",
"blocks" : { },
"nodes" : {
"yuNZNzj5SPu9UOAf3xcapg" : {
"name" : "lb-node2",
"ephemeral_id" : "W7tc0A-BRfONEuY5VGrlFQ",
"transport_address" : "192.168.1.11:9300",
"attributes" : { }
},
....
....
[root@master-node ~]#
Once everything checks out, the es cluster is up and running.
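You can also list the individual nodes; the asterisk in the master column marks the currently elected master (a simple check):
curl '192.168.1.17:9200/_cat/nodes?v'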
Startup error:
tail -n100 /var/log/messages
- main ERROR Could not register mbeans java.security.AccessControlException: access denied ("javax.management.MBeanTrustPermission" "register")
Fix: change the owner of the elasticsearch configuration directory to the elasticsearch user
chown -R elasticsearch:elasticsearch /etc/elasticsearch
2. Setting up the Kibana and Logstash servers
What elasticsearch returns is just a pile of strings; we want this information displayed graphically, which is why we install Kibana to visualize the data.
Install Kibana on the master
[root@master-node ~]# wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.0.0/kibana-6.0.0-x86_64.rpm
[root@master-node ~]# rpm -ivh kibana-6.0.0-x86_64.rpm
After installation, configure Kibana:
[root@master-node ~]# ll /etc/kibana/
total 8
-rw-r--r-- 1 root root 4649 Nov 11 2017 kibana.yml
[root@master-node ~]#
[root@master-node ~]# grep '^[a-Z]' /etc/kibana/kibana.yml
server.port: 5601 # Kibana port
server.host: 192.168.1.17 # listening IP
elasticsearch.url: "http://192.168.1.17:9200" # IP of the es server; for a cluster, use the master node's IP
logging.dest: /var/log/kibana/kibana.log # Kibana log file path; otherwise logs are written to /var/log/messages by default
[root@master-node ~]#
# Create the log file
[root@master-node ~]# mkdir -p /var/log/kibana
[root@master-node ~]# touch /var/log/kibana/kibana.log
[root@master-node ~]# chmod 777 /var/log/kibana/kibana.log
# Start Kibana
[root@master-node ~]# systemctl restart kibana
[root@master-node ~]# netstat -lntp|grep 5601
tcp 0 0 192.168.1.17:5601 0.0.0.0:* LISTEN 51762/node
[root@master-node ~]#
Note: Kibana is developed with Node.js, so the process name is node.
Then test access from a browser, e.g. http://192.168.1.17:5601/. Since we have not installed X-Pack, there is no username or password and the page can be accessed directly.
With that, Kibana is installed.
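If you prefer to verify from the command line first, Kibana exposes a status endpoint that can be queried with curl (same host and port as configured above):
curl http://192.168.1.17:5601/api/status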
Install Logstash on a data node and test collecting system logs (hands-on with rsyslog)
Note: Logstash does not currently support JDK 9.
- Install Logstash
[root@lb-node1 ~]# wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.0.0/logstash-6.0.0.rpm
[root@lb-node1 ~]# rpm -ivh logstash-6.0.0.rpm
Create a user and group and set ownership:
[root@lb-node1 ~]# groupadd elsearch
[root@lb-node1 ~]# useradd elsearch -g elsearch -p elsearch
[root@lb-node1 ~]# chown -R elsearch:elsearch /etc/elasticsearch
- Do not start the service yet; first configure Logstash to collect syslog messages:
[root@lb-node1 ~]# vim /etc/logstash/conf.d/syslog.conf
input {
syslog {
type => "system-syslog"
port => 10514
}
}
output {
stdout {
codec => rubydebug
}
}
[root@lb-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
[root@lb-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK # OK means the configuration file has no problems
[root@lb-node1 /usr/share/logstash/bin]#
Explanation of the options:
- --path.settings specifies the directory containing Logstash's settings files
- -f specifies the path of the configuration file to check
- --config.test_and_exit means exit after checking the configuration; without it, Logstash starts directly
Fixing the warning above: in the VM settings, change the number of processor cores to 2 and run the command again; it should then run normally. If it still fails, increase the number of processors to 2 as well.
- Configure rsyslog to forward to the Logstash server's IP and the listening port defined above:
[root@lb-node1 ~]# vim /etc/rsyslog.conf
#### RULES ####
*.* @@192.168.1.10:10514
- Restart rsyslog
[root@lb-node1 ~]# systemctl restart rsyslog.service
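A quick way to generate a test message without opening another session is the logger command, which writes to syslog and should therefore be forwarded to port 10514 by rsyslog (hypothetical tag and message text):
logger -t elk-test "hello from the rsyslog forwarding test"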
Start Logstash with the configuration file specified:
[root@lb-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
# The terminal will sit here, because the configuration outputs events to the current terminal
- Open a new terminal and check whether port 10514 is being listened on:
[root@lb-node1 ~]# netstat -lntp |grep 10514
tcp6 0 0 :::10514 :::* LISTEN 10234/java
[root@lb-node1 ~]#
Then log in to this machine via SSH from another host and check whether any log output appears:
[root@lb-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
"severity" => 6,
"pid" => "10460",
"program" => "sshd",
"message" => "Accepted password for root from 192.168.1.11 port 42848 ssh2\n",
"type" => "system-syslog",
"priority" => 86,
"logsource" => "lb-node1",
"@timestamp" => 2019-09-06T15:42:50.000Z,
"@version" => "1",
"host" => "192.168.1.10",
"facility" => 10,
"severity_label" => "Informational",
"timestamp" => "Sep 6 11:42:50",
"facility_label" => "security/authorization"
}
.......
As you can see, the collected log entries are printed to the terminal in JSON format; the test is successful.
- Configure Logstash to output to es
We now need to modify the test configuration so that the collected logs are sent to the es server instead of the current terminal:
[root@lb-node1 ~]# vim /etc/logstash/conf.d/syslog.conf
input {
syslog {
type => "system-syslog"
port => 10514
}
}
output {
elasticsearch {
hosts => ["192.168.1.17:9200"]
index => "system-syslog-%{+YYYY.MM}"
}
}
## As before, check whether the configuration file has errors:
[root@lb-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
- If there are no problems, start the logstash service and check the process and listening ports:
[root@lb-node1 ~]# systemctl start logstash.service
[root@lb-node1 ~]# systemctl status logstash.service
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2019-09-06 11:55:01 EDT; 4s ago
Main PID: 11104 (java)
CGroup: /system.slice/logstash.service
└─11104 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancy...
Sep 06 11:55:01 lb-node1 systemd[1]: Started logstash.
Sep 06 11:55:01 lb-node1 systemd[1]: Starting logstash...
[root@lb-node1 ~]# ps aux|grep logstash
# The process is running, but ports 9600 and 10514 are not being listened on
Troubleshooting: check the Logstash log for error messages, but nothing is being written there, so fall back to tail -n50 /var/log/messages, where the following error is found:
It is a permissions problem, so set the permissions accordingly:
[root@lb-node1 ~]# chmod logstash /var/log/logstash/logstash-plain.log
chmod: invalid mode: ‘logstash’
Try 'chmod --help' for more information.
[root@lb-node1 ~]# chown logstash /var/log/logstash/logstash-plain.log
[root@lb-node1 ~]# ll /var/log/logstash/logstash-plain.log
-rw-r--r-- 1 logstash root 10505 Sep 6 11:47 /var/log/logstash/logstash-plain.log
[root@lb-node1 ~]# ll !$
ll /var/log/logstash/logstash-plain.log
-rw-r--r-- 1 logstash root 11865 Sep 6 12:03 /var/log/logstash/logstash-plain.log
[root@lb-node1 ~]#
After fixing the permissions and restarting the service, the ports are still not listened on. The error recorded in logstash-plain.log says the path must be a writable directory and it is not writable.
This is again a permissions issue: we previously started Logstash from the terminal as root, so the files it created are owned by root.
[root@lb-node1 ~]# chown -R logstash /var/lib/logstash
[root@lb-node1 ~]# ll !$
ll /var/lib/logstash
total 4
drwxr-xr-x 2 logstash root 6 Sep 6 11:02 dead_letter_queue
drwxr-xr-x 2 logstash root 6 Sep 6 11:02 queue
-rw-r--r-- 1 logstash root 36 Sep 6 11:35 uuid
## The ports are now being listened on, so the logstash service has started successfully
[root@lb-node1 ~]# netstat -lntp|grep 9600
tcp6 0 0 127.0.0.1:9600 :::* LISTEN 15414/java
[root@lb-node1 ~]# netstat -lntp|grep 10514
tcp6 0 0 :::10514 :::* LISTEN 15414/java
[root@lb-node1 ~]#
## However, Logstash is listening on the local IP 127.0.0.1, which cannot be reached remotely, so edit the configuration file and set the listening IP:
[root@lb-node1 ~]# vim /etc/logstash/logstash.yml
...
http.host: "192.168.1.10"
[root@lb-node1 ~]# systemctl restart logstash
[root@lb-node1 ~]# netstat -lntp|grep 9600
tcp6 0 0 192.168.1.10:9600 :::* LISTEN 15414/java
[root@lb-node1 ~]#
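Once the port is bound to the LAN address, the Logstash monitoring API on 9600 can be queried remotely, for example from the master node (a simple sanity check):
curl 'http://192.168.1.10:9600/?pretty'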
Viewing the logs in Kibana
Back on the Kibana server, run the following command to list the indices:
[root@master-node ~]# curl '192.168.1.17:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open system-syslog-2019.09 U4F6hzzYRLuzGQkq15jmIA 5 1 28 0 429.5kb 214.7kb
green open .kibana ol9lN_JkQaiNSI11KWUkAA 1 1 1 0 7.3kb 3.6kb
[root@master-node ~]#
## As shown above, the system-syslog index defined in the Logstash configuration was created successfully, which proves the configuration is correct and Logstash communicates with es normally.
Get detailed information about a specific index:
[root@master-node ~]# curl -XGET '192.168.1.17:9200/system-syslog-2019.09?pretty'
If you need to delete an index later, use the following command to delete the specified index:
curl -XDELETE 'localhost:9200/system-syslog-2019.09'
Once es and Logstash communicate normally, Kibana can be configured: open 192.168.1.17:5601 in a browser and add the index pattern on the Kibana page:
- Wildcards can also be used to match multiple indices at once:
- If the es server returns data but the Discover page still shows no log entries, use another approach: go into the settings and delete the index pattern:
- Re-add the index pattern, but this time do not select @timestamp; this way the data is visible, but there is no time-based histogram:
That is how to use Logstash to collect system logs, ship them to the es server, and view them on the Kibana page.
Collecting nginx logs with Logstash
As with syslog collection, first edit a configuration file; this step is done on the Logstash server:
[root@lb-node1 ~]# vim /etc/logstash/conf.d/nginx.conf
input {
file {
path => "/tmp/elk_access.log"
start_position => "beginning" #設定改成 "beginning",logstash 進程就從頭開始讀取,有點類似 cat,但是讀到最後一行不會終止,而是繼續變成 tail -F
type => "nginx"
}
}
filter {
grok {
match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
}
geoip {
source => "clientip"
}
}
output {
stdout { codec => rubydebug }
elasticsearch {
hosts => ["192.168.1.17:9200"]
index => "nginx-test-%{+YYYY.MM.dd}"
}
}
Check whether the configuration file has errors
[root@lb-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
[root@lb-node1 /usr/share/logstash/bin]#
Go to the directory containing the nginx virtual host configuration files and create a new virtual host config:
[root@master-node conf.d]# vim elk.conf
server {
listen 80;
server_name 192.168.1.17;
location / {
proxy_pass http://192.168.1.17:5601;
#proxy_set_header Host $host;
#proxy_set_header X-Real-IP $remote_addr;
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
access_log /tmp/elk_access.log main2;
}
## Edit the main nginx configuration file to define the log format; add the following below the existing log_format line:
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$upstream_addr" $request_time';
## Restart nginx; after a few requests the access log is generated
[root@master-node conf.d]# ll /tmp/elk_access.log
-rw-r--r-- 1 root root 51095 Sep 7 01:46 /tmp/elk_access.log
[root@master-node conf.d]#
Restart the logstash service so that the nginx log index is created:
systemctl restart logstash
[root@master-node ~]# curl '192.168.1.17:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open system-syslog-2019.09 U4F6hzzYRLuzGQkq15jmIA 5 1 5262 0 2.6mb 1.1mb
green open nginx-test-2019.09.06 -jVW9nG3RQC3RmMS0nYc6g 5 1 47 0 611.3kb 313.5kb
green open .kibana ol9lN_JkQaiNSI11KWUkAA 1 1 4 0 43.4kb 23.3kb
[root@master-node ~]#
At this point the index can be configured in Kibana.
Collecting logs with Beats
Beats is a newer addition to the ELK stack: a family of lightweight log shippers. So far we have been collecting logs with Logstash, but Logstash consumes considerably more resources than Beats, so the official recommendation is to use Beats as the log collector. Beats is also extensible and supports custom builds.
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.0.0/filebeat-6.0.0-x86_64.rpm
rpm -ivh filebeat-6.0.0-x86_64.rpm
After installation, edit the configuration file:
[root@lb-node2 /]# vim /etc/filebeat/filebeat.yml
- type: log
  #enabled: false  # comment this line out
  paths:
    - /var/log/messages  # path of the log file to collect
#output.elasticsearch:  # comment these lines out for now
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]
output.console:  # print events to the terminal (to test whether filebeat can collect log data)
  enabled: true
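To run this console-output test, Filebeat can be started in the foreground with its binary, roughly like this (press Ctrl+C to stop once events appear):
/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml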
Once the test shows that log data is collected correctly, edit the configuration file again so that filebeat can be started as a service:
[root@lb-node2 /]# vim /etc/filebeat/filebeat.yml
#output.console:  # turn console output off again by commenting these two lines
#  enabled: true
# and remove the comments from the output.elasticsearch lines below
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["192.168.77.128:9200"] # 並配置es服務器的ip地址
- Start the filebeat service
[root@lb-node2 /]# systemctl start filebeat.service
[root@lb-node2 /]# ps -ef |grep filebeat|grep -v grep
root 10654 1 0 21:04 ? 00:00:00 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
[root@lb-node2 /]#
- After it starts successfully, a new index whose name begins with filebeat-6.0.0 appears in elasticsearch, which means filebeat and es are communicating normally (see the check below)
- Configure this index pattern in Kibana
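To confirm the new index from the command line, the same _cat/indices query used earlier can be run against the master node (the exact index name will also contain the date):
curl '192.168.1.17:9200/_cat/indices?v' | grep filebeat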