1. Install Elasticsearch
1.1 Preparation
1.1.1 Nodes
192.168.1.21, 192.168.1.22, 192.168.1.23
1.1.2 Pull the image (run on all three VMs)
docker pull elasticsearch:6.7.0
1.2 Host configuration (run on all three VMs)
1.2.1 Create directories
mkdir -p /data/server/elasticsearch/config
mkdir -p /data/server/elasticsearch/data
mkdir -p /data/server/elasticsearch/plugins/ik
chmod 777 /data/server/elasticsearch/plugins/ik
chmod 777 /data/server/elasticsearch/data
chmod 777 /data/server/elasticsearch/config
1.2.2 Edit sysctl.conf
vi /etc/sysctl.conf
Append the following line:
vm.max_map_count=655360
Then apply the change:
sysctl -p
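A quick sanity check that the new kernel setting is active (a sketch; Elasticsearch refuses to start in production mode if this value is below 262144):

```shell
# Read the live kernel value; after `sysctl -p` it should report 655360.
# Elasticsearch requires at least 262144 in production mode.
cat /proc/sys/vm/max_map_count
```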
1.2.3 Edit limits.conf
vi /etc/security/limits.conf
Append the following lines:
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
1.2.4 Edit 20-nproc.conf
vi /etc/security/limits.d/20-nproc.conf
Append the following line:
* soft nproc 4096
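These limits only apply to new login sessions; after logging in again, they can be confirmed with ulimit (a sketch; the values shown in the comments are the ones configured above):

```shell
# Re-log-in first, then check the limits of the current session.
ulimit -n   # open-file limit; should now report 65536
ulimit -u   # max user processes; should now report 4096
```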
1.3 Prepare the configuration files
1.3.1 Node 1 (192.168.1.21)
vi /data/server/elasticsearch/config/elasticsearch.yml
#Cluster name
cluster.name: ESCluster
#Node name
node.name: node-1
#Bind address (IPv4 or IPv6); the default 0.0.0.0
#binds every interface on this machine
network.bind_host: 0.0.0.0
#Address other nodes use to reach this node; auto-detected
#if unset, but it must be a real, routable IP
network.publish_host: 192.168.1.21
#HTTP port for client traffic (default 9200)
http.port: 9200
#TCP port for inter-node transport (default 9300)
transport.tcp.port: 9300
#Allow cross-origin REST requests
http.cors.enabled: true
#Origins allowed to make REST requests
http.cors.allow-origin: "*"
#Node roles
node.master: true
node.data: true
#Master-eligible nodes to contact for discovery
discovery.zen.ping.unicast.hosts: ["192.168.1.21:9300","192.168.1.22:9300","192.168.1.23:9300"]
#Minimum master-eligible nodes that must be present for the cluster to operate (default 1);
#recommended value: (total number of master-eligible nodes / 2) + 1
discovery.zen.minimum_master_nodes: 2
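The quorum formula in the comment above can be checked with one line of shell arithmetic: with three master-eligible nodes, integer division gives 3 / 2 + 1 = 2, matching the value set for discovery.zen.minimum_master_nodes.

```shell
# Quorum for N master-eligible nodes: N / 2 + 1 (integer division).
MASTER_ELIGIBLE=3
echo $(( MASTER_ELIGIBLE / 2 + 1 ))   # prints 2
```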
1.3.2 Node 2 (192.168.1.22)
vi /data/server/elasticsearch/config/elasticsearch.yml
#Cluster name
cluster.name: ESCluster
#Node name
node.name: node-2
#Bind address (IPv4 or IPv6); the default 0.0.0.0
#binds every interface on this machine
network.bind_host: 0.0.0.0
#Address other nodes use to reach this node; auto-detected
#if unset, but it must be a real, routable IP
network.publish_host: 192.168.1.22
#HTTP port for client traffic (default 9200)
http.port: 9200
#TCP port for inter-node transport (default 9300)
transport.tcp.port: 9300
#Allow cross-origin REST requests
http.cors.enabled: true
#Origins allowed to make REST requests
http.cors.allow-origin: "*"
#Node roles
node.master: true
node.data: true
#Master-eligible nodes to contact for discovery
discovery.zen.ping.unicast.hosts: ["192.168.1.21:9300","192.168.1.22:9300","192.168.1.23:9300"]
#Minimum master-eligible nodes that must be present for the cluster to operate (default 1);
#recommended value: (total number of master-eligible nodes / 2) + 1
discovery.zen.minimum_master_nodes: 2
1.3.3 Node 3 (192.168.1.23)
vi /data/server/elasticsearch/config/elasticsearch.yml
#Cluster name
cluster.name: ESCluster
#Node name
node.name: node-3
#Bind address (IPv4 or IPv6); the default 0.0.0.0
#binds every interface on this machine
network.bind_host: 0.0.0.0
#Address other nodes use to reach this node; auto-detected
#if unset, but it must be a real, routable IP
network.publish_host: 192.168.1.23
#HTTP port for client traffic (default 9200)
http.port: 9200
#TCP port for inter-node transport (default 9300)
transport.tcp.port: 9300
#Allow cross-origin REST requests
http.cors.enabled: true
#Origins allowed to make REST requests
http.cors.allow-origin: "*"
#Node roles
node.master: true
node.data: true
#Master-eligible nodes to contact for discovery
discovery.zen.ping.unicast.hosts: ["192.168.1.21:9300","192.168.1.22:9300","192.168.1.23:9300"]
#Minimum master-eligible nodes that must be present for the cluster to operate (default 1);
#recommended value: (total number of master-eligible nodes / 2) + 1
discovery.zen.minimum_master_nodes: 2
1.4 Configure the ik Chinese analyzer (run on all three VMs)
Download the ik analyzer zip matching your Elasticsearch version from the GitHub releases page: https://github.com/medcl/elasticsearch-analysis-ik/releases
Copy the zip archive to /data/server/elasticsearch/plugins/ on each machine, then unpack it:
cd /data/server/elasticsearch/plugins/
unzip -d /data/server/elasticsearch/plugins/ik/ elasticsearch-analysis-ik-6.7.0.zip
Finally, delete the archive:
rm -rf /data/server/elasticsearch/plugins/elasticsearch-analysis-ik-6.7.0.zip
1.5 Create and run the container (run on all three VMs)
docker run -m 8G --cpus 3 -d --name es --restart=always -v /etc/localtime:/etc/localtime:ro -p 9200:9200 -p 9300:9300 -v /data/server/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /data/server/elasticsearch/data:/usr/share/elasticsearch/data -v /data/server/elasticsearch/plugins:/usr/share/elasticsearch/plugins --privileged=true elasticsearch:6.7.0
Check that the ik plugin loaded: docker logs es
Check that Elasticsearch started by opening http://ip:9200 (replace ip with your VM's IP).
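Two curl smoke tests, using the node IP 192.168.1.21 assumed throughout this guide: one for cluster health (green means all three nodes joined and shards are allocated), and one that exercises the ik analyzer directly, which only succeeds if the plugin loaded:

```shell
ES=http://192.168.1.21:9200

# Cluster health; expect "status" : "green" once all three nodes have joined.
curl -s --max-time 5 "$ES/_cluster/health?pretty" || echo "cluster not reachable yet"

# Tokenize Chinese text with ik_max_word; an error here means the plugin did not load.
curl -s --max-time 5 -H 'Content-Type: application/json' "$ES/_analyze" \
  -d '{"analyzer":"ik_max_word","text":"中華人民共和國"}' || echo "cluster not reachable yet"
```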
1.6 Install the head plugin
docker pull mobz/elasticsearch-head:5
docker run -m 8G --cpus 3 --name eshead -p 9100:9100 -d docker.io/mobz/elasticsearch-head:5
head web UI: http://ip:9100/ (enter http://ip:9200/ in the connect box to attach it to the cluster)
2. Install Kibana
2.1 Pull the image
docker pull kibana:6.7.0
2.2 Start the container
docker run -m 8G --cpus 3 --name kibana -e ELASTICSEARCH_HOSTS=http://192.168.1.21:9200 -p 5601:5601 -d kibana:6.7.0
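Kibana takes a minute or two to come up; its status endpoint is a quick readiness check (IP as assumed in this guide):

```shell
# Returns JSON whose overall state is "green" once Kibana is ready.
curl -s --max-time 5 http://192.168.1.21:5601/api/status || echo "kibana not reachable yet"
```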
2.3 Localize the UI to Chinese (optional)
cd /root
wget https://github.com/anbai-inc/Kibana_Hanization/archive/master.zip
unzip master.zip
mv Kibana_Hanization-master/ master
docker cp master kibana:/
rm -rf master*
docker exec -it kibana /bin/bash
cd /master
cp -r translations /usr/share/kibana/src/legacy/core_plugins/kibana/
chown -R kibana:kibana /usr/share/kibana/src/legacy/core_plugins/kibana/
vi /usr/share/kibana/config/kibana.yml and add: i18n.locale: "zh-CN"
Press Ctrl+P then Q to detach from the container, then restart it with docker restart kibana; the UI now appears in Chinese.
3. Install Logstash
3.1 Pull the image
docker pull logstash:6.7.0
3.2 Create directories
mkdir -p /data/server/logstash/config
chmod 777 /data/server/logstash/config
mkdir -p /data/server/logstash/plugin
chmod 777 /data/server/logstash/plugin
mkdir -p /data/server/logstash/pipeline
chmod 777 /data/server/logstash/pipeline
3.3 Create logstash.yml
vi /data/server/logstash/config/logstash.yml
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: http://192.168.1.21:9200
3.4 Prepare GeoLite2-City.mmdb
cd /data/server/logstash/config/
wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz
(Note: MaxMind has since retired this unauthenticated download URL; current GeoLite2 downloads require a free MaxMind account and license key.)
gunzip GeoLite2-City.mmdb.gz
chmod 777 -R /data/server/logstash
3.5 Create the pipeline file logstash.conf
vi /data/server/logstash/pipeline/logstash.conf
input {
  file {
    path => "/data/server/logs/coep-rest/behavior.log"
    start_position => "beginning"
    type => "restbehavior"
    codec => json {
      charset => "UTF-8"
    }
    add_field => { "serverIp" => "192.168.1.21" }
  }
  file {
    path => "/data/server/logs/coep-web/behavior.log"
    start_position => "beginning"
    type => "adminbehavior"
    codec => json {
      charset => "UTF-8"
    }
    add_field => { "serverIp" => "192.168.1.21" }
  }
  file {
    path => "/data/server/logs/coew/behavior.log"
    start_position => "beginning"
    type => "webbehavior"
    codec => json {
      charset => "UTF-8"
    }
    add_field => { "serverIp" => "192.168.1.21" }
  }
}
filter {
  if [type] == "restbehavior" {
    geoip {
      source => "sourceIp"
      target => "geoip"
      database => "/data/server/logstash/config/GeoLite2-City.mmdb"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
  if [type] == "adminbehavior" {
    geoip {
      source => "sourceIp"
      target => "geoip"
      database => "/data/server/logstash/config/GeoLite2-City.mmdb"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
  if [type] == "webbehavior" {
    geoip {
      source => "sourceIp"
      target => "geoip"
      database => "/data/server/logstash/config/GeoLite2-City.mmdb"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}
output {
  if [type] == "restbehavior" {
    elasticsearch {
      hosts => "192.168.1.21:9200"
      index => "logstash-restbehavior-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "adminbehavior" {
    elasticsearch {
      hosts => "192.168.1.21:9200"
      index => "logstash-adminbehavior-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "webbehavior" {
    elasticsearch {
      hosts => "192.168.1.21:9200"
      index => "logstash-webbehavior-%{+YYYY.MM.dd}"
    }
  }
}
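Before launching the long-running container, the pipeline syntax can be validated with Logstash's --config.test_and_exit flag in a throwaway container (a sketch reusing the mounts from this guide; assumes the GeoLite2 database is already in place):

```shell
# Prints "Configuration OK" and exits if the pipeline parses cleanly.
docker run --rm \
  -v /data/server/logstash/pipeline:/usr/share/logstash/pipeline \
  -v /data/server/logstash/config/GeoLite2-City.mmdb:/data/server/logstash/config/GeoLite2-City.mmdb \
  logstash:6.7.0 -f /usr/share/logstash/pipeline/logstash.conf --config.test_and_exit
```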
3.6 Start the container
docker run -m 8G --cpus 3 --name logstash -v /data/server/logs:/data/server/logs -v /data/server/logstash/pipeline:/usr/share/logstash/pipeline -v /data/server/logstash/config/GeoLite2-City.mmdb:/data/server/logstash/config/GeoLite2-City.mmdb -v /data/server/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml -v /data/server/logstash/plugin:/plugin -p 5000:5000 -p 5044:5044 -p 9600:9600 --privileged=true -d logstash:6.7.0 -f /usr/share/logstash/pipeline/logstash.conf
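Once events start flowing, one daily index per type should appear in Elasticsearch; a quick verification (IP as assumed in this guide):

```shell
# Logstash startup errors, if any, show up in the container logs.
docker logs logstash 2>&1 | tail -n 20

# List logstash-* indices; expect one per type per day once logs arrive.
curl -s --max-time 5 'http://192.168.1.21:9200/_cat/indices/logstash-*?v' || echo "cluster not reachable yet"
```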
References:
Logstash 6.x Elasticsearch output template (JSON):
https://github.com/logstash-plugins/logstash-output-elasticsearch/blob/master/lib/logstash/outputs/elasticsearch/elasticsearch-template-es6x.json
Official guide to GeoIP in the Elastic Stack:
https://www.elastic.co/cn/blog/geoip-in-the-elastic-stack