ELK Aggregation of Production Java Logs (Part 1)

Brief notes:

  • Developers frequently need to monitor production Tomcat logs, so a log server separate from the production hosts is required
  • The production logs developers monitor and query are generally real-time and at most about three days old
  • There are many production Tomcat instances, and the aggregated logs of high-load projects grow large, so the aggregated logs must be cleaned up periodically
  • The original logs on the production Tomcat hosts are retained for a longer period, typically about one month
  • Stability requirements for the log aggregation system are therefore modest, and the aggregated data is not critical, so a single-node ELK architecture is recommended
    (Architecture: Tomcat catalina.out → Logstash shipper → Redis buffer → Logstash indexer → Elasticsearch → Kibana)
  • Redis buffers the log stream so that Elasticsearch falling behind during business peaks does not block the shippers
  • ELK official site: https://www.elastic.co
  • ELK official documentation: https://www.elastic.co/guide/index.html
  • ELK past-release downloads: https://www.elastic.co/downloads/past-releases#

Elasticsearch installation and configuration:

  • Following the 《CentOS7實驗機模板搭建部署》 guide, clone and deploy a VM at 192.168.77.110:
# Configure the hostname
HOSTNAME=es1
hostnamectl set-hostname "$HOSTNAME"
echo "$HOSTNAME">/etc/hostname
echo "$(grep -E '127|::1' /etc/hosts)">/etc/hosts
echo "$(ip a|grep "inet "|grep -v 127|awk -F'[ /]' '{print $6}') $HOSTNAME">>/etc/hosts

# Install the Java runtime
yum -y install java-11-openjdk

# Configure the official Elastic yum repository
cd /tmp
cat >/etc/yum.repos.d/elasticsearch.repo<<EOF
[ELK-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
# baseurl=https://artifacts.elastic.co/packages/7.x/yum
# The 7.x baseurl is shown above; 7.x was the latest major version at the time of writing
gpgcheck=0
enabled=1
autorefresh=1
type=rpm-md
EOF
# yum install elasticsearch kibana logstash
# Or download the rpm packages directly and install them locally:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.4.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.8.4-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.8.4.rpm
###wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.2-x86_64.rpm
###wget https://artifacts.elastic.co/downloads/kibana/kibana-7.4.2-x86_64.rpm
###wget https://artifacts.elastic.co/downloads/logstash/logstash-7.4.2.rpm
# A download manager such as Xunlei (Thunder) speeds these downloads up considerably; no paid membership needed

# Raise the default ulimits for systemd-managed services; a reboot is required for them to take effect
cat >>/etc/systemd/system.conf<<EOF
DefaultLimitNOFILE=100000
DefaultLimitNPROC=65535
DefaultLimitMEMLOCK=infinity
EOF
reboot
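  • Optionally, verify the new defaults after the reboot. A minimal check, assuming this systemd version exposes the DefaultLimit* manager properties via systemctl show:
# Expect the values written to /etc/systemd/system.conf above
systemctl show -p DefaultLimitNOFILE -p DefaultLimitNPROC -p DefaultLimitMEMLOCK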
  • Install and configure Elasticsearch
cd /tmp
yum -y localinstall elasticsearch-6.8.4.rpm
cd /etc/elasticsearch
sed -i 's/^path.data/# &/g' elasticsearch.yml
sed -i 's/^path.logs/# &/g' elasticsearch.yml
cat >>elasticsearch.yml<<EOF
cluster.name: vincent-es
node.name: $(hostname)
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
path.data: /elasticsearch/data
path.logs: /elasticsearch/logs
# discovery.zen.ping.unicast.hosts: ["$(hostname)", "XXX", ...]
EOF
mkdir -pv /elasticsearch/{data,logs}
chown -R elasticsearch: /elasticsearch
# In production, external storage should be mounted at this directory

# Adjust the JVM heap: set Xms = Xmx = 50% of physical memory
# (Elastic also recommends keeping the heap under ~32 GB so compressed object pointers stay enabled)
MEM=$(free -g|grep Mem|awk '{printf "%d\n",$2/2}')
sed -i "s/-Xms1g/-Xms${MEM}g/g" jvm.options
sed -i "s/-Xmx1g/-Xmx${MEM}g/g" jvm.options

# Start and verify
systemctl start elasticsearch
systemctl enable elasticsearch
systemctl status elasticsearch
netstat -lntup|grep 9200
curl http://$(hostname -i):9200
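  • Two further checks that are handy at this point (standard Elasticsearch REST endpoints):
# Cluster health; with no indices yet this should report "green" (it may turn yellow later,
# once Logstash creates single-replica indices on this single node)
curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'
# Confirm the node is up and see its basic resource usage
curl -s 'http://127.0.0.1:9200/_cat/nodes?v'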
  • Install the elasticsearch-head plugin
cd /tmp
wget https://nodejs.org/dist/v13.1.0/node-v13.1.0-linux-x64.tar.xz
cd /usr/local/
tar -xf /tmp/node-v13.1.0-linux-x64.tar.xz
chown -R root: node-v13.1.0-linux-x64/
echo 'export NODE_HOME=/usr/local/node-v13.1.0-linux-x64'>>/etc/profile
echo 'export PATH=$NODE_HOME/bin:$PATH'>>/etc/profile
source /etc/profile
node -v
npm -v

cd /usr/local
yum -y install git bzip2
git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head/
# Use the Taobao npm mirror to speed up the install
npm config set registry https://registry.npm.taobao.org
cat >>~/.npmrc<<EOF
sass_binary_site = https://npm.taobao.org/mirrors/node-sass/
phantomjs_cdnurl = https://npm.taobao.org/mirrors/phantomjs/
EOF
npm install

cd /etc/elasticsearch/
cat >>elasticsearch.yml<<EOF
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
systemctl restart elasticsearch
cd /usr/local/elasticsearch-head/
npm run start &
# Browse to http://IP:9100/ and, on that page, connect to http://IP:9200

# Configure the head plugin to start at boot
cat >/root/checkOS/elasticsearch-headStart.sh<<EOF
#!/bin/bash
source /etc/profile
cd /usr/local/elasticsearch-head/
/usr/local/node-v13.1.0-linux-x64/bin/npm run start &
EOF
chmod +x /root/checkOS/elasticsearch-headStart.sh
echo '/root/checkOS/elasticsearch-headStart.sh'>>/etc/rc.local
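  • Note: on a stock CentOS 7 system /etc/rc.d/rc.local is not executable by default, so commands appended to /etc/rc.local (here and on the other hosts below) are ignored until it is made executable:
chmod +x /etc/rc.d/rc.local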
  • Configure the Elasticsearch index cleanup script and its cron job
# Index names must carry a date suffix matching %{+YYYY.MM.dd}
cat >/root/checkOS/elasticsearchCleanIndex.sh<<EOF
#!/bin/bash
source /etc/profile
DT=\$(date +%Y.%m.%d -d'3 day ago')
for index in \$(curl -s -XGET 'http://127.0.0.1:9200/_cat/indices/?v'|awk '{print \$3}'|grep \${DT})
do
  curl -XDELETE "http://127.0.0.1:9200/\${index}"
done
EOF
chmod +x /root/checkOS/elasticsearchCleanIndex.sh
crontab -l>/tmp/crontab.tmp
echo -e '\n# Elasticsearch Clean Index'>>/tmp/crontab.tmp
echo '0 0 * * * /bin/bash /root/checkOS/elasticsearchCleanIndex.sh'>>/tmp/crontab.tmp
crontab /tmp/crontab.tmp
rm -f /tmp/crontab.tmp
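  • The selection can be dry-run first to see which indices a given run would delete (same filter as the script, without the DELETE):
DT=$(date +%Y.%m.%d -d'3 day ago')
curl -s -XGET 'http://127.0.0.1:9200/_cat/indices/?v'|awk '{print $3}'|grep "${DT}"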

Kibana installation and configuration

  • Deploy Kibana directly on the Elasticsearch host
# Install, configure and start Kibana
cd /tmp/
yum -y localinstall kibana-6.8.4-x86_64.rpm
cd /etc/kibana/
cat >>kibana.yml<<EOF
server.host: "0.0.0.0"
server.port: 5601
server.name: "$(hostname)"
elasticsearch.hosts: ["http://$(hostname -i):9200"]
EOF
systemctl start kibana
systemctl enable kibana
systemctl status kibana
netstat -lntup|grep 5601
# Browse to http://192.168.77.110:5601/
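  • Kibana can also be checked from the command line via its status API (a quick sketch; the endpoint should return HTTP 200 and an overall "green" state):
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:5601/api/status
curl -s http://127.0.0.1:5601/api/status|grep -o '"state":"[^"]*"'|head -1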

Redis installation and configuration

  • Following the 《Redis4.0 單實例安裝配置》 guide, clone a VM and deploy Redis at 192.168.77.100
  • Installing the latest 5.x release is recommended: http://download.redis.io/releases/redis-5.0.6.tar.gz ; the settings this article relies on are sketched below
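  • The Redis install itself is covered by the guide above. For reference, a minimal redis.conf sketch of the assumed settings (the instance is expected to listen on port 7000 without a password, as used in the Logstash configs below; adjust to match the actual deployment):
# redis.conf excerpt -- assumed values
# Listen on all interfaces so the web host's Logstash shipper can reach it
bind 0.0.0.0
# The port used throughout this article
port 7000
daemonize yes
# Optionally cap memory so a long Elasticsearch outage cannot exhaust RAM
# maxmemory 4gb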

Simulating a web host

  • Following the 《CentOS7實驗機模板搭建部署》 guide, deploy a lab host at 192.168.77.10
  • Create the log directories and simulate continuous writes to the catalina.out files
HOSTNAME=web
hostnamectl set-hostname "$HOSTNAME"
echo "$HOSTNAME">/etc/hostname
echo "$(grep -E '127|::1' /etc/hosts)">/etc/hosts
echo "$(ip a|grep "inet "|grep -v 127|awk -F'[ /]' '{print $6}') $HOSTNAME">>/etc/hosts

mkdir -pv /web/tomcat8_8080_pro1/logs
mkdir -pv /web/tomcat8_8081_pro2/logs
mkdir -pv /web/tomcat8_8082_pro3/logs
mkdir -pv /web/tomcat8_8083_pro4/logs
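# The loop below appends /tmp/catalina.out to each instance's log. If no real
# catalina.out is available to copy there, a hypothetical sample with one normal
# line and one indented Java stack trace (so the multiline codec used later has
# something to merge) can be generated like this:
cat >/tmp/catalina.out<<'EOF'
01-Nov-2019 10:00:00.000 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 1234 ms
01-Nov-2019 10:00:01.000 SEVERE [http-nio-8080-exec-1] org.apache.catalina.core.StandardWrapperValve.invoke Servlet.service() threw exception
java.lang.NullPointerException: demo
    at com.example.DemoServlet.doGet(DemoServlet.java:42)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)
EOF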
for i in $(seq 10000)
do
  cat /tmp/catalina.out>>/web/tomcat8_8080_pro1/logs/catalina.out
  cat /tmp/catalina.out>>/web/tomcat8_8081_pro2/logs/catalina.out
  cat /tmp/catalina.out>>/web/tomcat8_8082_pro3/logs/catalina.out
  cat /tmp/catalina.out>>/web/tomcat8_8083_pro4/logs/catalina.out
  sleep 5
done &

Logstash deployment and configuration

  • Install and configure Logstash on the web host to ship data to Redis
cd /tmp
yum -y install java-11-openjdk
yum -y localinstall logstash-6.8.4.rpm
echo 'export LS_HOME=/usr/share/logstash'>>/etc/profile
echo 'export PATH=$LS_HOME/bin:$PATH'>>/etc/profile
source /etc/profile
# Adjust the JVM heap used by Logstash if needed (the sed templates below keep the
# 1g defaults; edit the replacement values, e.g. -Xms512m/-Xmx512m, to change them)
sed -i 's/^-Xms1g/-Xms1g/g' /etc/logstash/jvm.options
sed -i 's/^-Xmx1g/-Xmx1g/g' /etc/logstash/jvm.options
# Create the config file and start Logstash to ship the local log files to Redis
mkdir /usr/share/logstash/conf
cd /usr/share/logstash/conf
cat >file2redis.conf<<EOF
input {
  file {
    path => "/web/tomcat8_8080_pro1/logs/catalina.out"
    type => "$(hostname -i)-8080-pro1"
    start_position => "beginning"
    discover_interval => 3 
    codec => multiline {
      pattern => "^[^[:blank:]]"
      negate => true
      what => "previous"
    }
  }
  file {
    path => "/web/tomcat8_8081_pro2/logs/catalina.out"
    type => "$(hostname -i)-8081-pro2"
    start_position => "beginning"
    discover_interval => 3 
    codec => multiline {
      pattern => "^[^[:blank:]]"
      negate => true
      what => "previous"
    }
  }
  file {
    path => "/web/tomcat8_8082_pro3/logs/catalina.out"
    type => "$(hostname -i)-8082-pro3"
    start_position => "beginning"
    discover_interval => 3 
    codec => multiline {
      pattern => "^[^[:blank:]]"
      negate => true
      what => "previous"
    }
  }
  file {
    path => "/web/tomcat8_8083_pro4/logs/catalina.out"
    type => "$(hostname -i)-8083-pro4"
    start_position => "beginning"
    discover_interval => 3 
    codec => multiline {
      pattern => "^[^[:blank:]]"
      negate => true
      what => "previous"
    }
  }
}
output {
  if [type] == "$(hostname -i)-8080-pro1" {
    redis {
      host => "redis"
      port => 7000
      data_type => "list"
      key => "$(hostname -i)-8080-pro1"
    }
  }
  if [type] == "$(hostname -i)-8081-pro2" {
    redis {
      host => "redis"
      port => 7000
      data_type => "list"
      key => "$(hostname -i)-8081-pro2"
    }
  }
  if [type] == "$(hostname -i)-8082-pro3" {
    redis {
      host => "redis"
      port => 7000
      data_type => "list"
      key => "$(hostname -i)-8082-pro3"
    }
  }
  if [type] == "$(hostname -i)-8083-pro4" {
    redis {
      host => "redis"
      port => 7000
      data_type => "list"
      key => "$(hostname -i)-8083-pro4"
    }
  }
}
EOF
# The multiline codec above merges indented Java stack-trace lines into the preceding log event
echo '192.168.77.100 redis'>>/etc/hosts
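# Optional: syntax-check the config before starting it (Logstash's --config.test_and_exit flag)
/usr/share/logstash/bin/logstash -f file2redis.conf --config.test_and_exit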
/usr/share/logstash/bin/logstash -f file2redis.conf &
  • Configure Logstash to start at boot:
echo 'source /etc/profile;$LS_HOME/bin/logstash -f $LS_HOME/conf/file2redis.conf &'>>/etc/rc.local
  • Test from the Redis host:
echo 'keys *' |redis-cli -h 192.168.77.100 -p 7000
# 1) "192.168.77.10-8080-pro1"
# 2) "192.168.77.10-8081-pro2"
# 3) "192.168.77.10-8083-pro4"
# 4) "192.168.77.10-8082-pro3"
# Seeing the four keys configured in the output section of file2redis.conf means shipping works
for i in $(echo 'keys *' |redis-cli -h 192.168.77.100 -p 7000)
do
  echo "llen $i"|redis-cli -h 192.168.77.100 -p 7000
done
# Watching each key's length shows whether messages are backing up, i.e. whether ES is keeping pace
  • Install and configure Logstash on the Redis host to ship data from Redis to Elasticsearch
cd /tmp
yum -y install java-11-openjdk
yum -y localinstall logstash-6.8.4.rpm
echo 'export LS_HOME=/usr/share/logstash'>>/etc/profile
echo 'export PATH=$LS_HOME/bin:$PATH'>>/etc/profile
source /etc/profile
# Adjust the JVM heap used by Logstash if needed (the sed templates below keep the
# 1g defaults; edit the replacement values, e.g. -Xms512m/-Xmx512m, to change them)
sed -i 's/^-Xms1g/-Xms1g/g' /etc/logstash/jvm.options
sed -i 's/^-Xmx1g/-Xmx1g/g' /etc/logstash/jvm.options

# Create the config file and start Logstash to pull events from Redis and index them into Elasticsearch
mkdir /usr/share/logstash/conf
cd /usr/share/logstash/conf
cat >redis2es.conf<<EOF
input {
  redis {
    host => "$(hostname -i)"
    port => 7000
    type => "192.168.77.10-8080-pro1"
    data_type => "list"
    key  => "192.168.77.10-8080-pro1"
  }
  redis {
    host => "$(hostname -i)"
    port => 7000
    type => "192.168.77.10-8081-pro2"
    data_type => "list"
    key  => "192.168.77.10-8081-pro2"
  }
  redis {
    host => "$(hostname -i)"
    port => 7000
    type => "192.168.77.10-8082-pro3"
    data_type => "list"
    key  => "192.168.77.10-8082-pro3"
  }
  redis {
    host => "$(hostname -i)"
    port => 7000
    type => "192.168.77.10-8083-pro4"
    data_type => "list"
    key  => "192.168.77.10-8083-pro4"
  }
}
output {
  if [type] == "192.168.77.10-8080-pro1" {
    elasticsearch {
      hosts => "192.168.77.110"
      index => "192.168.77.10-8080-pro1-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "192.168.77.10-8081-pro2" {
    elasticsearch {
      hosts => "192.168.77.110"
      index => "192.168.77.10-8081-pro2-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "192.168.77.10-8082-pro3" {
    elasticsearch {
      hosts => "192.168.77.110"
      index => "192.168.77.10-8082-pro3-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "192.168.77.10-8083-pro4" {
    elasticsearch {
      hosts => "192.168.77.110"
      index => "192.168.77.10-8083-pro4-%{+YYYY.MM.dd}"
    }
  }
}
EOF
# Index names must match the pattern expected by the cleanup script deployed on the Elasticsearch host
/usr/share/logstash/bin/logstash -f redis2es.conf &
  • Configure Logstash to start at boot:
echo 'source /etc/profile;$LS_HOME/bin/logstash -f $LS_HOME/conf/redis2es.conf &'>>/etc/rc.local
  • Verify on the Redis host:
for i in $(echo 'keys *' |redis-cli -h 192.168.77.100 -p 7000)
do
  echo "llen $i"|redis-cli -h 192.168.77.100 -p 7000
done
# Falling key lengths confirm that events are flowing from Redis into ES
  • Verify on the ES host:
curl -s -XGET 'http://127.0.0.1:9200/_cat/indices/?v'|column -t 
# docs.count and store.size increasing steadily means Redis data is being indexed into ES
# This can also be confirmed from the head plugin web page

Real-time log monitoring with Kibana

  • Open http://192.168.77.110:5601 in a browser
  • Under Management → Index Patterns, create a pattern matching the Logstash indices (e.g. 192.168.77.10-*) with @timestamp as the time field, then use Discover to search and follow the logs in near real time; a command-line alternative is sketched below
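  • Optionally, the index pattern can be pre-created from the shell. A sketch assuming the saved-objects API shipped with this Kibana version; the pattern id "java-logs" and the title are illustrative:
curl -s -X POST 'http://192.168.77.110:5601/api/saved_objects/index-pattern/java-logs' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"attributes":{"title":"192.168.77.10-*","timeFieldName":"@timestamp"}}'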
