Preface:
Background:
Logstash: the Logstash server collects the logs;
Elasticsearch: stores the collected logs;
Kibana: a web UI for searching and visualizing the logs;
1. Install the JDK
1.1 Download address
After downloading, extract it on the CentOS server: tar xvzf jdk-8u131-linux-x64.tar.gz -C /usr/lib/jvm/
1.2 Configure environment variables
Append the following to /etc/profile:
JAVA_HOME=/usr/lib/jvm/jdk1.8.0_131
JRE_HOME=$JAVA_HOME/jre
PATH=$PATH:$JAVA_HOME/bin:/sbin:/usr/bin:/usr/sbin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export JRE_HOME
export PATH
export CLASSPATH
Run source /etc/profile to apply the changes. If java -version and java both produce the expected output, the installation succeeded.
2. Install ELK
2.1 Download ELK
Since not every server has network access, I downloaded the packages manually from https://www.elastic.co/downloads/
After downloading, upload them to the server via FTP. I placed them in the /elk directory and extracted each one, leaving the extracted elasticsearch, kibana, and logstash directories alongside the three archives.
2.2 Start Elasticsearch
2.2.1 Edit elasticsearch.yml
Modify the following settings:
path.data: /elk/elasticsearch/data
path.logs: /elk/elasticsearch/logs
network.host: 10.8.120.37
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
2.2.2 Create a user
Elasticsearch refuses to run as root, so create a normal user and give it ownership of /elk:
useradd elk
passwd elk    # set the password interactively (the original used 12345679)
chown -R elk:elk /elk
2.2.3 Start Elasticsearch as the normal user
su elk
cd /elk/elasticsearch
./bin/elasticsearch -d
If startup fails due to kernel or user limits, there are two common fixes:
Method 1: as root, run sysctl -w vm.max_map_count=262144 and verify with sysctl -a | grep vm.max_map_count; this change is temporary and is lost on reboot.
Method 2: as root, edit /etc/security/limits.conf and add:
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
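To make the vm.max_map_count change from Method 1 survive a reboot, a common companion step (not shown in the original) is to persist it in /etc/sysctl.conf:

```
# /etc/sysctl.conf -- persisted kernel setting; apply without reboot via: sysctl -p
vm.max_map_count = 262144
```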
2.3 Start Logstash
Create config/screen_to_es.conf with the following content:
input {
  stdin {
  }
}
output {
  elasticsearch {
    hosts => "10.8.120.37:9200"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
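The index setting logstash-%{+YYYY.MM.dd} makes Logstash roll over to a new index every day. As a rough illustration, here is how such a daily index name could be derived in Python (the function name and strftime pattern are my own; Logstash itself formats the event's @timestamp with a Joda-style pattern):

```python
from datetime import date

def daily_index(day: date, prefix: str = "logstash") -> str:
    """Mimic Logstash's logstash-%{+YYYY.MM.dd} daily index naming."""
    return f"{prefix}-{day.strftime('%Y.%m.%d')}"

print(daily_index(date(2017, 6, 1)))  # logstash-2017.06.01
```

Daily indices keep each index small and make it easy to drop old logs by deleting whole indices.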
Start Logstash: -e executes a configuration string directly, while --config or -f executes a configuration file.
cd /elk/logstash
./bin/logstash -f config/screen_to_es.conf
Whatever you type on the screen afterwards will appear in the Kibana UI; see the Kibana section below for details.
2.3.1 Logstash with Kafka
First, send Logstash output to Kafka. Create config/logstash_to_kafka.conf with the following content:
input {
  stdin {
  }
}
output {
  kafka {
    topic_id => "test"
    bootstrap_servers => "10.8.120.25:9092" # Kafka broker address
    batch_size => 5
  }
  stdout {
    codec => rubydebug
  }
}
cd /elk/logstash
./bin/logstash -f config/logstash_to_kafka.conf
To read data from Kafka with Logstash, create config/kafka_to_logstash.conf with the following content:
input {
  kafka {
    codec => "plain"
    group_id => "logstash"
    auto_offset_reset => "smallest"
    reset_beginning => true
    topic_id => "test"
    zk_connect => "10.8.120.25:2181" # ZooKeeper address
  }
}
output {
  elasticsearch {
    hosts => "10.8.120.37:9200"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
cd /elk/logstash
./bin/logstash -f config/kafka_to_logstash.conf
2.3.2 Logstash with Kafka: a version pitfall
There is a pitfall here, and I fell into it deep! The kafka input plugin's settings changed between Logstash versions: newer releases use bootstrap_servers and topics, while logstash-2.3.4 uses zk_connect and topic_id. Configuration for a newer Logstash:
input {
  kafka {
    bootstrap_servers => ["10.8.120.25:9092"]
    topics => ["test"]
    group_id => "logstash"
    codec => "json"
    consumer_threads => 2
    decorate_events => true
  }
}
output {
  elasticsearch {
    hosts => ["10.8.120.37:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
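In the configuration above, codec => "json" tells the input to parse each Kafka message body as JSON so that its keys become event fields. A minimal Python sketch of that decoding step, under the simplifying assumption that every message is a JSON object (the sample fields are purely illustrative):

```python
import json

def decode_event(message: str) -> dict:
    """Parse one Kafka message the way a json codec would:
    the JSON object's keys become the event's fields."""
    event = json.loads(message)
    if not isinstance(event, dict):
        # Logstash would tag such an event with a parse failure;
        # simplified here to an exception.
        raise ValueError("expected a JSON object")
    return event

evt = decode_event('{"level": "INFO", "msg": "user login"}')
print(evt["level"])  # INFO
```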
Configuration for logstash-2.3.4:
input {
  kafka {
    codec => "plain"
    group_id => "logstash"
    auto_offset_reset => "smallest"
    reset_beginning => true
    topic_id => "test"
    zk_connect => "10.8.120.25:2181" # ZooKeeper address
  }
}
output {
  elasticsearch {
    hosts => "10.8.120.37:9200"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
./bin/logstash -f config/kafka_to_logstash.conf
2.4 Start Kibana
Modify the following settings in kibana.yml:
server.port: 5601
server.host: "10.8.120.37"
kibana.index: ".kibana"
elasticsearch.url: "http://10.8.120.37:9200"
cd /elk/kibana
./bin/kibana