ELK 7.4.0 Single-Node Deployment
Environment Preparation
- Install the operating system, with the data disk mounted at /srv
- Apply kernel tuning as appropriate (see your usual kernel tuning reference)
Create a dedicated elk account, then create the required directories and grant ownership:
useradd elk;
mkdir /srv/{app,data,logs}/elk
chown -Rf elk:elk /srv/{app,data,logs}/elk
- Edit /etc/security/limits.conf and append:
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
elk soft nofile 65536
elk hard nofile 65536
elk soft nproc 65536
elk hard nproc 65536
All steps of the ELK installation must be performed as the elk user!
su - elk
Elasticsearch
This deployment uses a single-node ES rather than a cluster; cluster deployment will be covered in a later update.
First, download the ES package; installing from the tar archive is recommended:
cd /srv/app/elk;
wget http://172.19.30.116/mirror/elk/elasticsearch/7.4.0/elasticsearch-7.4.0-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.4.0-linux-x86_64.tar.gz
mv elasticsearch-7.4.0 elasticsearch
- Edit the ES configuration file
/srv/app/elk/elasticsearch/config/elasticsearch.yml
cluster.name: es-cluster
node.name: es-1
node.master: true #eligible to be elected master
node.data: true #eligible to hold data
path.data: /srv/data/elk/elasticsearch #data directory
path.logs: /srv/logs/elk/elasticsearch #log directory
network.host: 127.0.0.1 #local access only; to allow other networks, set a network address or simply 0.0.0.0
http.port: 9200 #HTTP port, defaults to 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: false
- I gave the JVM 4 GB of heap (config/jvm.options); our query volume is small, so a smaller heap is fine, but in practice ES 7.4.0 is fairly memory-hungry
-Xms4g
-Xmx4g
- Start ES. Starting it this way is not recommended in production; use supervisord instead, see: https://www.cnblogs.com/lizhaojun-ops/p/11962485.html
/srv/app/elk/elasticsearch/bin/elasticsearch -d
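Once ES is started, it is worth verifying it answers on port 9200 before moving on. A minimal sketch of such a health check (the URL and timeout are the defaults from this guide, not part of the original text):

```python
import json
import urllib.request
import urllib.error

def es_is_up(url="http://127.0.0.1:9200", timeout=3):
    """Return the cluster info dict if Elasticsearch answers, else None."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except (urllib.error.URLError, OSError):
        return None

info = es_is_up()
if info:
    print("ES up, version:", info["version"]["number"])
else:
    print("ES not reachable")
```

The same check works with `curl http://127.0.0.1:9200` from the shell.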
Kibana
cd /srv/app/elk;
wget http://172.19.30.116/mirror/elk/kibana/7.4.0/kibana-7.4.0-linux-x86_64.tar.gz
tar -zxvf kibana-7.4.0-linux-x86_64.tar.gz
mv kibana-7.4.0-linux-x86_64 kibana
- Edit the Kibana configuration file /srv/app/elk/kibana/config/kibana.yml
server.port: 5601
server.host: "localhost" #can also be set to 0.0.0.0
server.name: "kibana"
elasticsearch.hosts: ["http://127.0.0.1:9200"]
i18n.locale: "en" #set to zh-CN for the Chinese UI
- Start Kibana. Starting it this way is not recommended in production; use supervisord instead, see: https://www.cnblogs.com/lizhaojun-ops/p/11962485.html
/srv/app/elk/kibana/bin/kibana
Logstash
cd /srv/app/elk;
wget http://172.19.30.116/mirror/elk/logstash/7.4.0/logstash-7.4.0.tar.gz
tar -zxvf logstash-7.4.0.tar.gz
mv logstash-7.4.0 logstash
- Adjust the JVM heap to your workload
/srv/app/elk/logstash/config/jvm.options
# Defaults to 1 GB; if log volume is high, raise it to 2 GB or more
-Xms1g
-Xmx1g
At this point the ELK stack itself is deployed. Now we need Redis and Filebeat: Redis serves as a temporary log queue, while Filebeat collects logs from nginx or other applications.
Redis
yum install epel-release -y
yum install redis* -y
chkconfig redis on
service redis start
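Redis is used here purely as a FIFO buffer: Filebeat RPUSHes each event onto a list key and Logstash BLPOPs from the head, so events are consumed in arrival order. A sketch of those semantics, using a plain `deque` as a stand-in for the Redis list (the sample events are hypothetical):

```python
from collections import deque

# Stand-in for the Redis list key "nginx"
queue = deque()

# Filebeat side: RPUSH appends to the tail
for line in ('{"status":"200"}', '{"status":"404"}'):
    queue.append(line)

# Logstash side: BLPOP takes from the head, oldest event first
first = queue.popleft()
print(first)
```

Because the queue absorbs bursts, Logstash or ES can be restarted without losing log lines (up to Redis memory limits).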
Filebeat
Install Filebeat on the nginx node. Modify nginx's log_format: add a new format named nginxjson and have the access log use it. For reference, see this blog:
log_format nginxjson '{"@timestamp":"$time_iso8601",'
'"host":"$server_addr",'
'"service":"nginx",'
'"trace":"$upstream_http_ctx_transaction_id",'
'"clientip":"$remote_addr",'
'"remote_user":"$remote_user",'
'"request":"$request",'
'"url":"$scheme://$http_host$request_uri",'
'"http_user_agent":"$http_user_agent",'
'"server_protocol":"$server_protocol",'
'"size":$body_bytes_sent,'
'"responsetime":$request_time,'
'"upstreamtime":"$upstream_response_time",'
'"upstreamhost":"$upstream_addr",'
'"http_host":"$host",'
'"domain":"$host",'
'"xff":"$http_x_forwarded_for",'
'"x_clientOs":"$http_x_clientos",'
'"x_access_token":"$http_x_access_token",'
'"referer":"$http_referer",'
'"status":"$status"}';
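The point of this log_format is that every access-log line is itself a JSON document, so Logstash can parse it with a plain json filter. A sketch checking that a rendered line (hypothetical sample values) parses cleanly; note that `size` and `responsetime` are deliberately unquoted so they come through as numbers, and that a `$request` containing double quotes would break the JSON:

```python
import json

# A line as nginx would render it with the nginxjson format above
sample = ('{"@timestamp":"2019-10-15T10:00:00+08:00",'
          '"host":"192.168.1.10","service":"nginx",'
          '"clientip":"203.0.113.7","request":"GET / HTTP/1.1",'
          '"size":612,"responsetime":0.004,"status":"200"}')

event = json.loads(sample)
print(event["status"], event["size"])
```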
rpm -ivh http://172.19.30.116/mirror/elk/filebeat/7.4.0/filebeat-7.4.0-x86_64.rpm
chkconfig filebeat on
Edit the Filebeat configuration /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/nginx_access.log
  tags: ["nginx-access"]
  tail_files: true

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.redis:
  enabled: true
  hosts: ["192.168.1.1:7000"] #your Redis server IP; the default Redis port is 6379, adjust to your setup
  port: 7000
  key: nginx
  db: 0
  datatype: list
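Note that Filebeat does not ship the raw nginx line by itself: it wraps it in an event whose `message` field holds the original line, which is why the Logstash json filter in this guide uses `source => "message"`. A sketch of that envelope (field values are hypothetical, and real Filebeat events carry more metadata):

```python
import json

# The raw nginx access-log line, already JSON thanks to the nginxjson format
raw = '{"status":"200","clientip":"203.0.113.7"}'

# Simplified shape of the event Filebeat pushes onto the Redis list
event = {
    "@timestamp": "2019-10-15T02:00:00.000Z",
    "message": raw,
    "tags": ["nginx-access"],
}

# What the Logstash json filter effectively does with source => "message"
decoded = json.loads(event["message"])
print(decoded["status"])
```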
Now we turn back to configuring Logstash:
mkdir /srv/app/elk/logstash/config/conf.d
vim /srv/app/elk/logstash/config/conf.d/nginx-logs.conf
Write the following content:
input {
  redis {
    host => "192.168.1.1"
    port => "7000"
    key => "nginx"
    data_type => "list"
    threads => "5"
    db => "0"
  }
}
filter {
  json {
    source => "message"
    remove_field => ["beat"]
  }
  geoip {
    source => "clientip"
    target => "geoip"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
  grok {
    match => ["message","%{TIMESTAMP_ISO8601:isotime}"]
  }
  date {
    locale => "en"
    match => ["isotime","ISO8601"]
    target => "@timestamp"
  }
  mutate {
    convert => [ "[geoip][coordinates]", "float"]
    # remove_field => ["message"]
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-nginx-logs-%{+YYYY.MM.dd}"
  }
}
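The date filter plus the index pattern give one ES index per day: the ISO8601 time from the log line becomes `@timestamp`, and `%{+YYYY.MM.dd}` is expanded against it (Logstash evaluates the pattern in UTC). A sketch of that expansion with a hypothetical event time:

```python
from datetime import datetime, timezone

# Hypothetical event time as emitted by the nginxjson log format
isotime = "2019-10-15T10:00:00+08:00"

# date filter: parse ISO8601, normalize to UTC for @timestamp
ts = datetime.strptime(isotime, "%Y-%m-%dT%H:%M:%S%z").astimezone(timezone.utc)

# output: expand logstash-nginx-logs-%{+YYYY.MM.dd}
index = "logstash-nginx-logs-" + ts.strftime("%Y.%m.%d")
print(index)
```

Daily indices make retention simple: old days can be dropped with a single DELETE per index.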
- Start Logstash. Use supervisord in production; see: https://www.cnblogs.com/lizhaojun-ops/p/11962485.html
/srv/app/elk/logstash/bin/logstash -f /srv/app/elk/logstash/config/conf.d/nginx-logs.conf
Postscript
- Proxy Kibana through Nginx, which makes it easy to add HTTP basic authentication
The main configuration:
server {
    listen 80;
    server_name kibana;
    access_log off;
    error_log off;
    location / {
        auth_basic "Kibana";
        auth_basic_user_file /srv/app/tengine/conf/conf.d/passwd;
        proxy_pass http://127.0.0.1:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
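With `auth_basic` in place, the browser resends each request with an `Authorization` header carrying base64("user:password"); the passwd file only stores the hash nginx compares against. A sketch of what the client sends (the credentials are hypothetical):

```python
import base64

user, password = "alice", "secret1"  # hypothetical credentials
token = base64.b64encode(f"{user}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"
print(header)
```

Note that base64 is encoding, not encryption, so basic auth should only be exposed over HTTPS or a trusted network.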
- Script to add accounts
/srv/app/tengine/conf/conf.d/adduser.sh
#!/bin/bash
read -p "Enter username: " USERNAME
read -s -p "Enter password: " PASSWD; echo
# Note: -crypt truncates passwords to 8 characters; consider -apr1 instead
printf "$USERNAME:$(openssl passwd -crypt "$PASSWD")\n" >> passwd