References:
https://www.cnblogs.com/configure/p/7607302.html (Kibana login authentication via an nginx reverse proxy)
https://www.elastic.co/cn/products (official site)
https://zhuanlan.zhihu.com/p/23049700 (filebeat)
https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-match (logstash timestamp conversion)
https://blog.csdn.net/wuyinggui10000/article/details/77879016 (ELK timezone issues)
Architecture overview:
filebeat collects logs and ships them to logstash; logstash normalizes the format and forwards events to elasticsearch; kibana then visualizes the data in a web UI.
OS: every host runs a CentOS 6 release.
1. Installing and configuring elasticsearch:
yum install https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.0.rpm -y
vi /etc/elasticsearch/elasticsearch.yml
/etc/init.d/elasticsearch start
/etc/elasticsearch/elasticsearch.yml is configured as follows:
cluster.name: lesu-elk
node.name: node-1
path.data: /home/elasticsearch #directory where data is stored
path.logs: /var/log/elasticsearch
Debugging:
1. elasticsearch depends on Java, so Java-related errors may appear; in that case, yum install java -y fixes it.
2. Watch file permissions when moving the data directory; I hit a permission error here and had to run chown -R elasticsearch /home/elasticsearch.
Test:
curl 'http://localhost:9200/?pretty' returns:
{
  "name" : "node-1",
  "cluster_name" : "lesu-elk",
  "cluster_uuid" : "goAOXrJpQLuqfoHzl7LJMg",
  "version" : {
    "number" : "6.3.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "424e937",
    "build_date" : "2018-06-11T23:38:03.357887Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
If you see this, elasticsearch is working.
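The same check can be scripted instead of eyeballed. A minimal Python sketch that validates the response body (abridged here from the curl output above; in practice you would fetch http://localhost:9200/ and parse the body the same way):

```python
import json

# Abridged copy of the curl response above.
body = '''
{
  "name": "node-1",
  "cluster_name": "lesu-elk",
  "version": {"number": "6.3.0", "lucene_version": "7.3.1"},
  "tagline": "You Know, for Search"
}
'''
info = json.loads(body)
print(info["cluster_name"], info["version"]["number"])  # → lesu-elk 6.3.0
```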
2. Installing and configuring logstash
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vi /etc/yum.repos.d/logstash.repo
yum install logstash -y
ln -s /usr/share/logstash/bin/logstash /usr/local/bin/logstash
ln -s /etc/logstash /usr/share/logstash/config
#By default the logstash executable is not on the Linux PATH, and logstash cannot locate its own settings directory; these two symlinks fix both problems.
vi /etc/logstash/filebeat.conf
logstash -f /etc/logstash/filebeat.conf -t #test the configuration file
logstash -f /etc/logstash/filebeat.conf #start logstash
/etc/yum.repos.d/logstash.repo contains:
[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
The logstash configuration used for debugging:
input {
  stdin {
  }
}
filter {
  grok {
    match => { "message" => "%{ATS}" }
  }
  date {
    match => ["time", "yyyy-MM-dd HH:mm:ss,SSS", "UNIX"]
    target => "time"
  }
  ruby {
    code => "event.set('time', event.get('time').time.localtime + 8*60*60)"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
#The point here is the stdin and stdout plugins, which read input and print results interactively, so you can test filters in real time.
The final production configuration:
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{ATS}" }
  }
  date {
    match => ["time", "yyyy-MM-dd HH:mm:ss,SSS", "UNIX"]
    target => "time"
  }
  ruby {
    code => "event.set('time', event.get('time').time.localtime + 8*60*60)"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
Notes on the production configuration:
logstash can be viewed as a three-stage pipeline: input --> filter --> output.
The input stage can read from stdin, redis, and so on; since we pair logstash with filebeat here, the beats plugin listens on port 5044 for data from filebeat.
The filter stage does the processing. Here the grok plugin splits each record into fields; my data is a custom squid log that looks like this:
1500047983.032 494 192.168.124.4 TCP_MISS/200 656 359 http://linzb.com/wx_auth/WechatQrcode/694a37e9c2b7616fd53119fcd7120927/2 - DIRECT/6.6.6.6 image/png
The stock grok-patterns file has no ready-made rule for this format, and I only need to split out the fields before the image/png part, so I added a custom entry to /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns:
ATS %{NUMBER:time}\s+%{NUMBER:duration}\s%{IP:client_address}\s%{WORD:cache_result}/%{POSINT:status_code}\s%{NUMBER:bytes}\s%{NUMBER:bytes_source}\s%{NOTSPACE:url}\s-\s%{WORD:hierarchy_code}/%{IP:source_address}
%{NUMBER:time}: NUMBER is a matching rule predefined in the grok-patterns file; this expression roughly means "match data that fits the NUMBER rule and save it as the time field". The other captures work the same way.
\s: matches a whitespace character (a single space here).
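Since grok patterns compile down to regular expressions, the ATS entry can be sketched as an ordinary regex. A rough Python equivalent applied to the sample log line above (the sub-patterns are simplified stand-ins for grok's NUMBER, IP, WORD, POSINT, and NOTSPACE, not their exact definitions):

```python
import re

# Simplified equivalent of the custom ATS grok pattern.
ATS = re.compile(
    r"(?P<time>\d+\.?\d*)\s+"                           # %{NUMBER:time}
    r"(?P<duration>\d+)\s"                              # %{NUMBER:duration}
    r"(?P<client_address>\d+\.\d+\.\d+\.\d+)\s"         # %{IP:client_address}
    r"(?P<cache_result>\w+)/(?P<status_code>\d+)\s"     # %{WORD}/%{POSINT}
    r"(?P<bytes>\d+)\s(?P<bytes_source>\d+)\s"          # two %{NUMBER} fields
    r"(?P<url>\S+)\s-\s"                                # %{NOTSPACE:url}
    r"(?P<hierarchy_code>\w+)/(?P<source_address>\d+\.\d+\.\d+\.\d+)"
)

line = ("1500047983.032 494 192.168.124.4 TCP_MISS/200 656 359 "
        "http://linzb.com/wx_auth/WechatQrcode/694a37e9c2b7616fd53119fcd7120927/2 "
        "- DIRECT/6.6.6.6 image/png")
fields = ATS.match(line).groupdict()
print(fields["time"], fields["cache_result"], fields["status_code"])
```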
Because my data carries UNIX epoch timestamps (and NUMBER matches plain digits, not a time, so elasticsearch cannot recognize the field as a date), the raw value is hard to read in elasticsearch; I therefore use the date plugin to convert the time format.
match => ["time", "yyyy-MM-dd HH:mm:ss,SSS","UNIX"] means: parse the time field, where UNIX indicates the original value is a UNIX timestamp.
target => "time" stores the converted value back into the time field, overwriting it in place; you could also write it to a new field and drop the original with remove_field.
The ruby block adjusts the timezone: event.set writes the time field and event.get reads it. For details see: https://blog.csdn.net/wuyinggui10000/article/details/77879016
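What the date and ruby filters do to the timestamp can be sketched in Python: parse the UNIX epoch as UTC, then shift it forward 8 hours, mirroring the 8*60*60 in the ruby code:

```python
from datetime import datetime, timedelta, timezone

epoch = 1500047983.032                                 # "time" field from the squid log
utc = datetime.fromtimestamp(epoch, tz=timezone.utc)   # what the date filter produces
local = utc + timedelta(hours=8)                       # what the ruby filter adds (UTC+8)
print(utc.isoformat(), "->", local.isoformat())
```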
The output stage is simple: send the data to elasticsearch. The key part is %{[@metadata][beat]}, which picks up a variable passed along by filebeat, so each node can be given its own index to tell them apart.
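As a rough illustration of how %{[@metadata][beat]}-%{+YYYY.MM.dd} expands (index_name is a hypothetical helper, and the beat value and date are just examples):

```python
from datetime import date

# Mimics logstash's sprintf-style index naming: %{[@metadata][beat]} is
# the value filebeat sent, %{+YYYY.MM.dd} is the event date with dots.
def index_name(beat: str, day: date) -> str:
    return f"{beat}-{day.strftime('%Y.%m.%d')}"

print(index_name("192.168.124.127", date(2018, 6, 30)))  # → 192.168.124.127-2018.06.30
```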
3. Installing and configuring filebeat:
yum install https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.3.0-x86_64.rpm -y
vi /etc/filebeat/filebeat.yml
/etc/init.d/filebeat start
/etc/filebeat/filebeat.yml contains:
filebeat.prospectors:
- input_type: log
  paths:
    - /home/ats_log/squid*
output.logstash:
  hosts: ["8.8.8.8:5044"]
  index: 192.168.124.127
paths: which log files to collect
index: this field corresponds to %{[@metadata][beat]} in the logstash config above; it is passed along to logstash and used as the elasticsearch index.
Note: filebeat records how far it has read each log in /var/lib/filebeat/registry. To re-read a log, edit the offset field of the corresponding entry in that file, or simply delete the whole file; either way, stop filebeat first.
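A hypothetical sketch of resetting an offset in the registry. It assumes the registry is a JSON array of objects with source and offset keys, which is the shape filebeat 6.x writes; the sample contents here are illustrative:

```python
import json

# Example registry contents; the real file lives at /var/lib/filebeat/registry.
registry = json.loads('[{"source": "/home/ats_log/squid.log", "offset": 10240}]')
for entry in registry:
    if entry["source"].startswith("/home/ats_log/squid"):
        entry["offset"] = 0   # re-read this log from the beginning
print(json.dumps(registry))   # write this back with filebeat stopped
```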
4. Installing and configuring kibana
yum install https://artifacts.elastic.co/downloads/kibana/kibana-6.3.0-x86_64.rpm -y
vi /etc/kibana/kibana.yml
/etc/init.d/kibana start
The kibana.yml configuration is simple and the defaults are fine; the main fields are:
server.port: 5601
server.host: "127.0.0.1"
elasticsearch.url: "http://localhost:9200"
5. Adding authentication to kibana (copied verbatim from https://www.cnblogs.com/configure/p/7607302.html in case the link goes dead)
vi /etc/yum.repos.d/nginx.repo
yum -y install nginx httpd-tools
mkdir -p /etc/nginx/passwd
htpasswd -c -b /etc/nginx/passwd/kibana.passwd user ******
cp /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.backup
vim /etc/nginx/conf.d/default.conf
service nginx restart
/etc/yum.repos.d/nginx.repo contains:
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/6/$basearch/
gpgcheck=0
enabled=1
/etc/nginx/conf.d/default.conf contains:
server {
    listen 80;
    auth_basic "Kibana Auth";
    auth_basic_user_file /etc/nginx/passwd/kibana.passwd;
    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_redirect off;
    }
}
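With basic auth on, every request to kibana must carry an Authorization header containing base64("user:password"). A quick Python sketch of what the browser sends ("secret" is a placeholder for whatever password was set with htpasswd):

```python
import base64

credentials = base64.b64encode(b"user:secret").decode()  # user:password pair
headers = {"Authorization": f"Basic {credentials}"}      # header the browser sends
print(headers["Authorization"])  # → Basic dXNlcjpzZWNyZXQ=
```

This is also handy for scripting against kibana through the proxy, e.g. with curl -u user:secret.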