Business requirements call for upgrading our production elasticsearch-2.3 to the latest version; according to the official announcements, 7.2.0 brings a qualitative leap in speed and efficiency over earlier releases. Below we install elasticsearch-7.2.0 and its companion components.
Part 1: Download the required files from the respective official sites:
elasticsearch-7.2.0
elasticsearch-head-master
kibana-7.2.0-linux-x86_64
logstash-7.2.0
elasticsearch-analysis-ik-7.2.0
node-v8.16.0-linux-x64.tar
I won't paste the official download URLs here; instead, here is a Baidu Cloud link to the files I downloaded:
Link: https://pan.baidu.com/s/1qUSDGHaIRHfjjyfyZ7A9Pg
Extraction code: e54v
Part 2: Pre-installation preparation
1. Elasticsearch refuses to start as the root account, so first create a dedicated user:
useradd elk
2. Raise the system resource limits:
vim /etc/security/limits.conf
Add the following lines:
* soft nofile 65536
* hard nofile 65536
* soft nproc 2048
* hard nproc 4096
vim /etc/sysctl.conf
Add:
vm.max_map_count=655360
Apply the change with:
sysctl -p
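After re-logging in as the elk user, you can sanity-check that the new limits actually apply (the expected values assume the edits above were made and the session restarted):

```shell
# Check the mmap count limit the kernel currently enforces
# (should print 655360 once the sysctl.conf setting above is applied)
cat /proc/sys/vm/max_map_count

# Check the open-files limit for the current shell session
# (should print 65536 after re-logging in as elk)
ulimit -n
```

If `ulimit -n` still shows the old value, the limits.conf change only takes effect for sessions opened after the edit.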
Part 3: Installation and deployment
1. Install elasticsearch
All the packages provided are prebuilt binaries, so no compilation is needed; they can be used directly.
Unpack:
tar -xvf elasticsearch-7.2.0.tar.gz
Move it into place:
mv elasticsearch-7.2.0 /home/elk/
Change the ownership to the elk user:
chown -R elk.elk /home/elk/elasticsearch-7.2.0
Edit the elasticsearch config file:
vim /home/elk/elasticsearch-7.2.0/config/elasticsearch.yml
cluster.name: xavito
transport.tcp.compress: true
cluster.initial_master_nodes: ["vito248","vito203"]
discovery.seed_hosts: ["192.168.1.248", "192.168.1.203"]
node.name: vito203
http.port: 9200
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
My setup here is a two-node cluster. On a single node you can omit the cluster discovery settings (cluster.initial_master_nodes / discovery.seed_hosts); in a cluster, pay close attention to their syntax, which differs from earlier versions. Getting it wrong will keep the service from starting, or leave elasticsearch-head-master unable to connect from the browser later on.
You can also point the data and log paths elsewhere; by default they live under the installation directory:
#path.data: /path/to/data
#path.logs: /path/to/logs
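Apart from node.name, the two nodes share the same settings; on the other node of this cluster (vito248, 192.168.1.248 in the config above) the file would differ only in:

```
# config/elasticsearch.yml on the second node — identical except for:
node.name: vito248
```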
Now start it directly as the elk user:
su - elk
/home/elk/elasticsearch-7.2.0/bin/elasticsearch -d   (-d runs it in the background)
Once it is up, hit port 9200 with curl or a browser:
curl http://localhost:9200
A JSON response with the node and cluster details means the start-up succeeded.
2. Install elasticsearch-head-master.zip
Unpack:
unzip elasticsearch-head-master.zip
mv elasticsearch-head-master /home/elk/
chown -R elk.elk /home/elk/elasticsearch-head-master
Install:
elasticsearch-head requires node.js, so install it first if you don't have it; node only needs to be on the PATH.
Installing node:
Upload the package somewhere convenient (I habitually use /usr/local/application/) and unpack it (the node version bundled in the download link may differ from the one shown below; adjust the filenames accordingly):
tar -xvf node-v10.6.0-linux-x64.tar.xz
Rename the folder:
mv node-v10.6.0-linux-x64 nodejs
Make it globally available via symlinks:
ln -s /usr/local/application/nodejs/bin/npm /usr/local/bin/
ln -s /usr/local/application/nodejs/bin/node /usr/local/bin/
Check that it installed correctly:
node -v
v10.6.0
If node is already installed, skip the steps above.
cd /home/elk/elasticsearch-head-master
Run npm install
Run npm run start (or npm run start & to keep it in the background)
netstat -nltp
Seeing port 9100 listening means the installation succeeded,
and you can now open it in a browser.
3. Install kibana-7.2.0-linux-x86_64.tar.gz. Since 7.2, Kibana ships with official Chinese localization, enabled via a single config setting.
Unpack:
tar -xvf kibana-7.2.0-linux-x86_64.tar.gz
mv kibana-7.2.0-linux-x86_64 /home/elk/
chown -R elk.elk /home/elk/kibana-7.2.0-linux-x86_64
vim /home/elk/kibana-7.2.0-linux-x86_64/config/kibana.yml
Add the following parameters:
server.host: "192.168.1.248"
i18n.locale: "zh-CN"
and the UI will be in Chinese.
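Kibana also needs to know where Elasticsearch is; if it is not running against localhost:9200 on the same host, point it at a node explicitly (the address below matches the cluster in this guide):

```
elasticsearch.hosts: ["http://192.168.1.248:9200"]
```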
4. Install the elasticsearch-analysis-ik-7.2.0.zip Chinese word-segmentation plugin:
unzip elasticsearch-analysis-ik-7.2.0.zip
chown -R elk.elk /home/elk/elasticsearch-analysis-ik-7.2.0
and drop it straight into elasticsearch's plugins directory:
mv elasticsearch-analysis-ik-7.2.0 /home/elk/elasticsearch-7.2.0/plugins/ik
Then restart elasticsearch.
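After the restart you can verify the plugin with the _analyze API, e.g. from the Kibana Dev Tools console (ik_max_word and ik_smart are the analyzers the plugin registers; the sample text is arbitrary):

```
POST /_analyze
{
  "analyzer": "ik_max_word",
  "text": "中华人民共和国国歌"
}
```

The response should list the segmented Chinese tokens rather than single characters.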
5. Install logstash-7.2.0.tar.gz
Unpack:
tar -xvf logstash-7.2.0.tar.gz
mv logstash-7.2.0 /home/elk/
chown -R elk.elk /home/elk/logstash-7.2.0
Write a pipeline config to match your needs, for example:
vim format.cnf
input {
  file {
    path => "/mnt/new/tomcat/LOGS/WebLogs/user/*.log"
    type => "userlog"
    start_position => "beginning"
    sincedb_path => "/root/logstash-7.2.0/sincedb"
  }
  file {
    path => "/mnt/new/tomcat/LOGS/*/output/*.log"
    type => "syslog"
    codec => multiline {
      pattern => "^\d"
      negate => true
      what => "previous"
    }
    start_position => "beginning"
    sincedb_path => "/root/logstash-7.2.0/sincedb"
  }
  file {
    path => "/mnt/new/tomcat/LOGS/*/pay/*.log"
    type => "pay"
    codec => multiline {
      pattern => "^\d"
      negate => true
      what => "previous"
    }
    start_position => "beginning"
    sincedb_path => "/root/logstash-7.2.0/sincedb"
  }
}
filter {
  if [type] == "userlog" {
    grok {
      patterns_dir => "/root/logstash-7.2.0/reg"
      match => {
        "message" => "%{logdatetime:times} %{MSG_:msg}"
      }
    }
    json {
      source => "msg"
      target => "msg"
    }
    if [msg][uri] == "/base/msg/tips/queryMsgTipsByUser.htm" {
      drop {}
    }
    if [msg][ip] !~ "^127\.|^192\.168\.|^172\.1[6-9]\.|^172\.2[0-9]\.|^172\.3[01]\.|^10\." {
      geoip {
        source => "[msg][ip]"
        database => "/root/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.1-java/vendor/GeoLite2-City.mmdb"
        target => "geoip"
        fields => ["region_name","city_name","location"]
      }
    }
    useragent {
      source => "[msg][ua]"
      target => "ua"
    }
    date {
      locale => "cn"
      match => ["times", "yyyy-MM-dd HH:mm:ss.SSS", "UNIX"]
      target => "@timestamp"
    }
    mutate {
      remove_field => "@version"
      remove_field => "message"
      remove_field => "[msg][ua]"
      remove_field => "times"
    }
  }
  if [type] == "syslog" {
    grok {
      patterns_dir => "/root/logstash-7.2.0/reg"
      match => ["message", "%{logdatetime:times} .*%{LOGLEVEL:loglevel} {0,4}%{CLASSNAME:classname} {0,4}- {0,4}%{INFOMSG:infomsg}"]
    }
    if [classname] =~ "^com.alibaba.dubbo.config.AbstractConfig" {
      drop {}
    }
    if [classname] =~ "^org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping" {
      drop {}
    }
    if [classname] =~ "^org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping" {
      drop {}
    }
    if [loglevel] == "DEBUG" {
      drop {}
    }
    date {
      locale => "cn"
      match => ["times", "yyyy-MM-dd HH:mm:ss.SSS", "UNIX"]
      target => "@timestamp"
    }
    mutate {
      remove_field => "@version"
      remove_field => "times"
      remove_field => "message"
      remove_field => "tags"
    }
  }
  if [type] == "pay" {
    grok {
      patterns_dir => "/root/logstash-7.2.0/reg"
      match => ["message", "%{logdatetime:times} .*%{LOGLEVEL:loglevel} {0,4}%{CLASSNAME:classname} {0,4}- {0,4}%{INFOMSG:infomsg}"]
    }
    date {
      locale => "cn"
      match => ["times", "yyyy-MM-dd HH:mm:ss.SSS", "UNIX"]
      target => "@timestamp"
    }
    mutate {
      remove_field => "@version"
      remove_field => "times"
      remove_field => "message"
      remove_field => "tags"
    }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.248:9200","192.168.1.203:9200"]
    codec => "json"
    index => "log-%{type}-%{+YYYYMM}"
    manage_template => true
    template_overwrite => true
    template_name => "my_logstash"
    template => "/root/logstash-7.2.0/bin/logstash.json"
  }
}
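The grok filters above reference custom patterns (logdatetime, MSG_, CLASSNAME, INFOMSG) loaded from the patterns_dir /root/logstash-7.2.0/reg. That patterns file is not shown in this guide; a hypothetical sketch of what it might contain, assuming log lines start with a yyyy-MM-dd HH:mm:ss.SSS timestamp, would be:

```
# /root/logstash-7.2.0/reg/patterns — hypothetical definitions; adjust to your log format
logdatetime %{TIMESTAMP_ISO8601}
MSG_ .*
CLASSNAME [A-Za-z0-9.$]+
INFOMSG (.|\r|\n)*
```

Each line defines one named pattern; grok then resolves %{logdatetime:times} and friends against these definitions.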
Start it:
/home/elk/logstash-7.2.0/bin/logstash -f /path/format.cnf
(You can validate the config beforehand by adding the --config.test_and_exit flag.)
And with that, the installation is complete!