Preface
In the previous article, we walked through the ELK architecture, its advantages, and the deployment steps for Kibana and Elasticsearch.
Without further ado, here is the hands-on part: deploying Logstash, Kafka, and Filebeat for ELK.
Practice
Environment and host roles:
Platform: 睿江雲 cloud platform
Region: Guangdong G (VPC networking is more secure; SSD disks give higher performance)
Instance size: 4 cores / 16 GB RAM (4 cores / 8 GB also works, but with noticeable lag)
Network: VPC (virtual private cloud; more secure and efficient)
Bandwidth: 5 Mbps
OS: CentOS 7.6
Number of cloud hosts: 5
Software versions: ELK 7.4.0, Kafka 2.6.0 (Scala 2.12 build)
First, here are the steps to deploy Logstash.
a. Step 1
Log in to the Logstash node: ssh to 192.168.0.6.
b. Step 2
cd /opt/
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.4.0.tar.gz
c. Step 3
tar -zxvf logstash-7.4.0.tar.gz
mkdir -p /opt/els/logs/logs
d. Step 4
vi /opt/logstash-7.4.0/config/logstash.yml
path.logs: /opt/els/logs/logs
path.config: /opt/logstash-7.4.0/conf.d/*.conf
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "123456"
xpack.monitoring.elasticsearch.hosts: ["http://ES-node1:9200","http://ES-node2:9201","http://ES-node3:9202"]
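Before starting Logstash, it is worth confirming that the elastic credentials above are accepted by Elasticsearch (a quick sketch; assumes the ES-node1 hosts entry from the previous article resolves):

```shell
# A JSON banner with the cluster name confirms the username/password pair;
# a 401 response means the credentials in logstash.yml are wrong.
curl -s -u elastic:123456 http://ES-node1:9200/
```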
e. Step 5
Create a pipeline configuration file; it defines how incoming log lines are parsed. Note that path.config above points at conf.d/*.conf, so the file must go there with a .conf extension:
mkdir -p /opt/logstash-7.4.0/conf.d
vi /opt/logstash-7.4.0/conf.d/demo.conf
input {
  kafka {
    bootstrap_servers => "192.168.0.3:9092,192.168.0.4:9092,192.168.0.5:9092"
    topics => ["test"]
    codec => "json"
    type => "syslog"
  }
}
filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp}" ]
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss" ]
    locale => "en"
    timezone => "+00:00"
    target => "@timestamp"
  }
  mutate {
    remove_field => ["host","agent","ecs","tags","fields","@version","input","log"]
  }
}
output {
  elasticsearch {
    hosts => ["http://ES-node1:9200"]
    user => "elastic"
    password => "123456"
    index => "test-log"
  }
}
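The grok stage above extracts the leading ISO8601 timestamp into a timestamp field, which the date stage then parses. As a rough illustration of what TIMESTAMP_ISO8601 matches (a simplified sketch; the real pattern also accepts fractional seconds and timezone suffixes):

```shell
# Simplified stand-in for grok's TIMESTAMP_ISO8601 pattern.
TS='[0-9]{4}-[0-9]{2}-[0-9]{2}[ T][0-9]{2}:[0-9]{2}:[0-9]{2}'
line='2020-09-01 12:34:56 httpd: GET /index.html 200'
echo "$line" | grep -oE "$TS"   # prints: 2020-09-01 12:34:56
```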
f. Step 6
chown -R els:els /opt/logstash-7.4.0
g. Step 7
cd /opt/logstash-7.4.0/bin/
nohup ./logstash &
If no errors appear in nohup.out, Logstash has started successfully.
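Once running, Logstash serves a monitoring API on port 9600 by default, which gives a quick liveness check (sketch; run on the Logstash node):

```shell
# Returns node info as JSON (version, pipeline settings) when Logstash is up.
curl -s http://localhost:9600/?pretty
```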
With Logstash in place, the next step is Kafka, which acts as the relay and buffer for logs. Below are the Kafka deployment steps (tested and working).
a. Step 1
Log in to the Kafka primary node and the two secondary nodes: ssh to 192.168.0.3, 192.168.0.4, and 192.168.0.5.
b. Step 2
vi /etc/hosts
192.168.0.3 kafka
192.168.0.4 ES-node1
192.168.0.5 ES-node2
192.168.0.6 ES-master
c. Step 3
cd /opt/
wget https://mirror.bit.edu.cn/apache/kafka/2.6.0/kafka_2.12-2.6.0.tgz
tar -zxvf kafka_2.12-2.6.0.tgz
d. Step 4
Configuration on 192.168.0.3:
vi /opt/kafka_2.12-2.6.0/config/zookeeper.properties
dataDir=/opt/kafka_2.12-2.6.0/zookeeper-data/data/
dataLogDir=/opt/kafka_2.12-2.6.0/zookeeper-data/logs
clientPort=2181
maxClientCnxns=0
initLimit=10
syncLimit=5
server.1=192.168.0.3:2888:3888
server.2=192.168.0.4:2888:3888
server.3=192.168.0.5:2888:3888
Create the directories:
mkdir -p /opt/kafka_2.12-2.6.0/zookeeper-data/data/
mkdir -p /opt/kafka_2.12-2.6.0/zookeeper-data/logs
Configuration on 192.168.0.4:
vi /opt/kafka_2.12-2.6.0/config/zookeeper.properties
dataDir=/opt/kafka_2.12-2.6.0/zookeeper-data/data/
dataLogDir=/opt/kafka_2.12-2.6.0/zookeeper-data/logs
clientPort=2181
admin.enableServer=false
maxClientCnxns=0
initLimit=10
syncLimit=5
server.1=192.168.0.3:2888:3888
server.2=192.168.0.4:2888:3888
server.3=192.168.0.5:2888:3888
Create the directories:
mkdir -p /opt/kafka_2.12-2.6.0/zookeeper-data/data/
mkdir -p /opt/kafka_2.12-2.6.0/zookeeper-data/logs
Configuration on 192.168.0.5:
vi /opt/kafka_2.12-2.6.0/config/zookeeper.properties
dataDir=/opt/kafka_2.12-2.6.0/zookeeper-data/data/
dataLogDir=/opt/kafka_2.12-2.6.0/zookeeper-data/logs
clientPort=2181
admin.enableServer=false
maxClientCnxns=0
initLimit=10
syncLimit=5
server.1=192.168.0.3:2888:3888
server.2=192.168.0.4:2888:3888
server.3=192.168.0.5:2888:3888
Create the directories:
mkdir -p /opt/kafka_2.12-2.6.0/zookeeper-data/data/
mkdir -p /opt/kafka_2.12-2.6.0/zookeeper-data/logs
The zookeeper.properties contents are identical on all three nodes; what distinguishes each node is the myid file created in the next step.
e. Step 5
On 192.168.0.3:
cd /opt/kafka_2.12-2.6.0/zookeeper-data/data/
echo "1" > myid
On 192.168.0.4:
cd /opt/kafka_2.12-2.6.0/zookeeper-data/data/
echo "2" > myid
On 192.168.0.5:
cd /opt/kafka_2.12-2.6.0/zookeeper-data/data/
echo "3" > myid
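The pattern behind these three commands is that the node listed as server.N in zookeeper.properties gets myid N. The same thing as a loop (illustrative only; mktemp stands in for the real data directory on each host):

```shell
# Assign each ZooKeeper node the id matching its server.N entry.
DATA_ROOT=$(mktemp -d)   # stands in for /opt/kafka_2.12-2.6.0/zookeeper-data/data/
id=1
for ip in 192.168.0.3 192.168.0.4 192.168.0.5; do
  mkdir -p "$DATA_ROOT/$ip"
  echo "$id" > "$DATA_ROOT/$ip/myid"
  id=$((id + 1))
done
cat "$DATA_ROOT/192.168.0.4/myid"   # prints: 2
```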
f. Step 6
On 192.168.0.3:
cd /opt/kafka_2.12-2.6.0/
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
On 192.168.0.4:
cd /opt/kafka_2.12-2.6.0/
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
On 192.168.0.5:
cd /opt/kafka_2.12-2.6.0/
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
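Each ZooKeeper node can then be probed with the four-letter-word commands (a sketch; on some ZooKeeper builds srvr must first be allowed via 4lw.commands.whitelist):

```shell
# "srvr" reports each node's mode: one should answer "Mode: leader"
# and the other two "Mode: follower".
echo srvr | nc 192.168.0.3 2181
echo srvr | nc 192.168.0.4 2181
echo srvr | nc 192.168.0.5 2181
```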
g. Step 7
On 192.168.0.3:
vi /opt/kafka_2.12-2.6.0/config/server.properties
broker.id=1
listeners=PLAINTEXT://192.168.0.3:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka_2.12-2.6.0/kafka-data/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.3:2181,192.168.0.4:2181,192.168.0.5:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
On 192.168.0.4:
vi /opt/kafka_2.12-2.6.0/config/server.properties
broker.id=2
listeners=PLAINTEXT://192.168.0.4:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka_2.12-2.6.0/kafka-data/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.3:2181,192.168.0.4:2181,192.168.0.5:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
On 192.168.0.5:
vi /opt/kafka_2.12-2.6.0/config/server.properties
broker.id=3
listeners=PLAINTEXT://192.168.0.5:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka_2.12-2.6.0/kafka-data/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.3:2181,192.168.0.4:2181,192.168.0.5:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
The per-node differences are broker.id and the IP in listeners; everything else, including zookeeper.connect, stays the same.
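Since only broker.id and the listeners address differ per node, the per-broker lines can be generated instead of edited by hand (an illustrative sketch; the temp directory stands in for each node's config path):

```shell
# Generate the two per-broker lines for each node; the rest of
# server.properties is shared verbatim across the cluster.
OUT=$(mktemp -d)
id=1
for ip in 192.168.0.3 192.168.0.4 192.168.0.5; do
  printf 'broker.id=%s\nlisteners=PLAINTEXT://%s:9092\n' "$id" "$ip" \
    > "$OUT/server-$ip.fragment"
  id=$((id + 1))
done
cat "$OUT/server-192.168.0.5.fragment"
# prints:
#   broker.id=3
#   listeners=PLAINTEXT://192.168.0.5:9092
```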
h. Step 8
On 192.168.0.3:
cd /opt/kafka_2.12-2.6.0/
nohup bin/kafka-server-start.sh config/server.properties &
On 192.168.0.4:
cd /opt/kafka_2.12-2.6.0/
nohup bin/kafka-server-start.sh config/server.properties &
On 192.168.0.5:
cd /opt/kafka_2.12-2.6.0/
nohup bin/kafka-server-start.sh config/server.properties &
i. Step 9
On 192.168.0.3:
cd /opt/kafka_2.12-2.6.0/
bin/kafka-topics.sh --create --zookeeper 192.168.0.3:2181,192.168.0.4:2181,192.168.0.5:2181 --replication-factor 2 --partitions 3 --topic test
If the topic is created successfully, the Kafka cluster is up and the startup test is complete.
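A fuller smoke test pushes one message through the new topic with the console clients (sketch; run from /opt/kafka_2.12-2.6.0/ on any broker):

```shell
# Produce a single message to the "test" topic...
echo "hello-elk" | bin/kafka-console-producer.sh \
  --bootstrap-server 192.168.0.3:9092 --topic test
# ...then read it back from the beginning of the topic.
bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.3:9092 \
  --topic test --from-beginning --max-messages 1
```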
Finally, deploy Filebeat on the corresponding hosts to monitor and collect logs. Below are the Filebeat deployment steps (tested and working).
a. Step 1
Log in to the Filebeat host: ssh to 192.168.0.2.
b. Step 2
cd /opt/
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.4.0-linux-x86_64.tar.gz
c. Step 3
tar -zxvf ./filebeat-7.4.0-linux-x86_64.tar.gz
d. Step 4
Configuration on 192.168.0.2:
vi /opt/filebeat-7.4.0-linux-x86_64/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/httpd/access_log
  tags: ["C7-httpd-access_log"]
- type: log
  enabled: true
  paths:
    - /var/log/httpd/error_log
  tags: ["C7-httpd-error_log"]
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.kafka:
  enabled: true
  hosts: ["192.168.0.3:9092"]
  topic: "test"
  required_acks: 1
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
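Before launching, Filebeat's built-in test subcommands can validate the file and the Kafka connection (sketch; run on the Filebeat host):

```shell
cd /opt/filebeat-7.4.0-linux-x86_64
# Parse and validate filebeat.yml.
./filebeat test config -c filebeat.yml
# Attempt a connection to the configured Kafka output.
./filebeat test output -c filebeat.yml
```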
e. Step 5
nohup /opt/filebeat-7.4.0-linux-x86_64/filebeat -e -c /opt/filebeat-7.4.0-linux-x86_64/filebeat.yml &
If the console output shows no errors, Filebeat is running.
At this point the enterprise-grade ELK stack is fully deployed. Let's verify that data is reaching Kibana.
Open a browser and go to http://192.168.0.6:5601/
Sign in as user elastic with the password set during the Elasticsearch deployment (123456 in this series).
Click Discover to open the Discover page; the newly ingested data should be visible there.
(Note: since requirements differ between businesses, this walkthrough assumes the firewall is disabled on every host. Configure firewall rules to suit your own environment.)
The enterprise ELK deployment is now complete; scaled out with more machines, this setup can handle the vast majority of workloads. In the next article, we will share the pitfalls hit while deploying ELK, some deeper principles and knowledge points, and notes on using ELK in practice.
睿江雲:www.eflycloud.com