I. Introduction
1. About ELK
ELK is short for Elasticsearch + Logstash + Kibana.
- Elasticsearch is a distributed full-text search engine built on Lucene, exposing a RESTful API for reading and writing data.
- Logstash is a tool for collecting, processing, and forwarding events and log messages.
- Kibana is the open-source data-visualization front end for Elasticsearch. It provides a friendly web UI for the data stored in Elasticsearch, with analysis tools such as bar charts, line and scatter plots, pie charts, and maps.
In short: Elasticsearch stores the data; Logstash collects logs, formats them, and writes them into Elasticsearch; Kibana provides visual access to the data in Elasticsearch.
2. ELK Workflow
In this article, the application sends log events directly to Logstash over TCP (via a log4j2 Socket appender). Logstash formats them and writes them into the Elasticsearch cluster; Kibana then reads the logs from Elasticsearch and presents them as tables and charts in a web UI. (In production, a buffer such as Redis or a message queue is often placed between the applications and Logstash; that variant is not covered here.)
II. Preparation
1. Servers & Software Environment
- Servers
Three Ubuntu 18.04 servers are used:
Server | IP | Role |
---|---|---|
es1 | 192.168.1.69 | Elasticsearch master node |
es2 | 192.168.1.70 | Elasticsearch data node |
elk | 192.168.1.71 | Logstash + Kibana |
To keep the footprint small, only two Elasticsearch nodes are deployed, and Logstash and Kibana share a single machine.
For a production deployment, adjust the topology to your own needs.
- Software
Item | Version |
---|---|
Linux Server | Ubuntu 18.04 |
Elasticsearch | 7.2.0 |
Logstash | 7.2.0 |
Kibana | 7.2.0 |
JDK | 11.0.2 |
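Logstash 7.2 runs on Java 8 or Java 11 and needs a JDK installed on the machine (Elasticsearch 7.x ships with its own bundled JDK). A quick check:
java -version
# expected output similar to:
# openjdk version "11.0.2" 2019-01-15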
2. ELK Environment Preparation
Elasticsearch, Logstash, and Kibana must not be run as root, but Linux restricts the number of files a non-root account may open and the number of processes it may start. Every machine that runs an ELK component therefore needs the following adjustments:
- Raise the open-file and process limits
# edit the limits file
sudo vim /etc/security/limits.conf
# append the following lines
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
- Raise the virtual-memory (mmap) limit
sudo vim /etc/sysctl.conf
# append the following line
vm.max_map_count=655360
The settings above take effect after a reboot (the sysctl change can also be applied immediately with sudo sysctl -p). The required values vary with the distribution, hardware, and software versions, so tune them based on the exact errors Elasticsearch reports at startup.
sudo reboot
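After the reboot, verify the new limits from a shell of the (non-root) account that will run the ELK processes:
# max open files, should print 65536
ulimit -n
# max user processes, should print 4096
ulimit -u
# should print vm.max_map_count = 655360
sysctl vm.max_map_count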
- Download and unpack the ELK packages
All packages are available from https://www.elastic.co/cn/downloads
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.2.0.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-linux-x86_64.tar.gz
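Optionally verify the downloads; Elastic publishes a .sha512 checksum file alongside each artifact (shown here for Elasticsearch, the other two follow the same pattern):
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz.sha512
shasum -a 512 -c elasticsearch-7.2.0-linux-x86_64.tar.gz.sha512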
III. Elasticsearch Installation & Deployment
Two Elasticsearch nodes are deployed. Unless a specific machine is named, every step in this part must be performed on both Elasticsearch machines.
1. Preparation
- Unpack
tar -zxvf elasticsearch-7.2.0-linux-x86_64.tar.gz
2. Elasticsearch Configuration
- Edit the config file
vim config/elasticsearch.yml
- Master node (192.168.1.69)
cluster.name: es
node.name: es1
path.data: /home/rock/elasticsearch-7.2.0/data
path.logs: /home/rock/elasticsearch-7.2.0/logs
network.host: 192.168.1.69
http.port: 9200
transport.tcp.port: 9300
node.master: true
node.data: true
discovery.seed_hosts: ["192.168.1.69:9300","192.168.1.70:9300"]
cluster.initial_master_nodes: ["es1"]
- Data node (192.168.1.70)
cluster.name: es
node.name: es2
path.data: /home/rock/elasticsearch-7.2.0/data
path.logs: /home/rock/elasticsearch-7.2.0/logs
network.host: 192.168.1.70
http.port: 9200
transport.tcp.port: 9300
node.master: false
node.data: true
discovery.seed_hosts: ["192.168.1.69:9300","192.168.1.70:9300"]
Note: Elasticsearch 7.x replaced discovery.zen.ping.unicast.hosts and discovery.zen.minimum_master_nodes (often shown in older guides) with discovery.seed_hosts and cluster.initial_master_nodes. A brand-new 7.x cluster will not elect a master without cluster.initial_master_nodes; it only needs to be set on master-eligible nodes (here: es1) and only matters on the very first startup.
- Configuration reference
Setting | Description |
---|---|
cluster.name | Cluster name |
node.name | Node name |
path.data | Data directory |
path.logs | Log directory |
network.host | Host/IP the node binds to |
http.port | HTTP port |
transport.tcp.port | TCP transport port |
node.master | Whether the node may be elected master |
node.data | Whether the node stores data |
discovery.seed_hosts | Initial list of hosts a starting node probes to discover the cluster |
cluster.initial_master_nodes | Master-eligible node names used to bootstrap a brand-new cluster (only relevant on first startup) |
3. Starting Elasticsearch & Health Check
- Start
# change into the Elasticsearch root directory
cd /home/rock/elasticsearch-7.2.0
# start in the foreground
./bin/elasticsearch
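For anything beyond a quick test, run Elasticsearch as a daemon instead; -d daemonizes the process and -p writes its PID to a file:
# run in the background, writing the PID to a file
./bin/elasticsearch -d -p pid
# stop it later with
pkill -F pid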
- Check cluster health
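Query the cluster-health API on either node; for the two-node cluster above it should return JSON like the following:
curl http://192.168.1.69:9200/_cluster/health?pretty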
{
"cluster_name": "es",
"status": "green",
"timed_out": false,
"number_of_nodes": 2,
"number_of_data_nodes": 2,
"active_primary_shards": 0,
"active_shards": 0,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
"delayed_unassigned_shards": 0,
"number_of_pending_tasks": 0,
"number_of_in_flight_fetch": 0,
"task_max_waiting_in_queue_millis": 0,
"active_shards_percent_as_number": 100
}
status=green means the cluster is fully healthy (yellow: all primary shards allocated but some replicas are not; red: some primary shards are unassigned).
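To confirm that both nodes actually joined the cluster, the _cat nodes API gives a one-line-per-node summary:
curl http://192.168.1.69:9200/_cat/nodes?v
# should list both es1 and es2, with the elected master marked by *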
IV. Logstash Configuration
- Configure the data/log directories, home directory, JVM options, and pipelines
# change into the Logstash directory
cd /home/rock/logstash-7.2.0
# edit the startup options
vim config/startup.options
# set the following entries
LS_HOME=/home/rock/logstash-7.2.0
LS_SETTINGS_DIR=/home/rock/logstash-7.2.0/config
# edit the main settings
vim config/logstash.yml
# add the following
path.data: /home/rock/logstash-7.2.0/data
path.logs: /home/rock/logstash-7.2.0/logs
# edit the JVM options
vim config/jvm.options
# adjust the heap size to fit your machine (comments must sit on their own line in jvm.options)
-Xms512m
-Xmx512m
# switch the garbage collector from the default CMS to G1
#-XX:+UseConcMarkSweepGC
#-XX:CMSInitiatingOccupancyFraction=75
#-XX:+UseCMSInitiatingOccupancyOnly
-XX:+UseG1GC
# edit the pipelines definition
vim config/pipelines.yml
# add the following (note the two-space indentation: all three lines belong to one pipeline entry)
- pipeline.id: my_pipeline_name
  path.config: "/home/rock/logstash-7.2.0/config/logstash.conf"
  queue.type: persisted
- Configure the Logstash input and output
# create the pipeline config
vim config/logstash.conf
# add the following
input {
tcp {
port => 12345
codec => json
}
}
filter {
}
output {
elasticsearch {
hosts => ["192.168.1.69:9200","192.168.1.70:9200"]
index => "logstash-%{+YYYY.MM.dd}"
}
stdout {
}
}
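Before starting Logstash for real, it is worth validating the pipeline and pushing one hand-made event through it end to end. The nc smoke test and the search query below are illustrative additions, not part of the original setup:
# validate the pipeline configuration, then exit
./bin/logstash -f config/logstash.conf --config.test_and_exit
# start Logstash (with no -f it runs the pipelines defined in config/pipelines.yml)
./bin/logstash
# from another shell (assuming nc is installed): send one JSON event into the TCP input
echo '{"message":"helloelk"}' | nc 192.168.1.71 12345
# the event should appear in today's logstash-* index
curl 'http://192.168.1.69:9200/logstash-*/_search?q=message:helloelk&pretty'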
V. Kibana Configuration
- Edit the config
# change into the Kibana directory
cd /home/rock/kibana-7.2.0-linux-x86_64
# edit the settings
vim config/kibana.yml
# add the following
server.port: 5601
server.host: "192.168.1.71"
elasticsearch.hosts: ["http://192.168.1.69:9200","http://192.168.1.70:9200"]
- Start
bin/kibana
Then open http://192.168.1.71:5601 in a browser.
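If the page does not load, Kibana's status API is a quick way to check whether the server is up and can reach Elasticsearch:
curl -s http://192.168.1.71:5601/api/status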
VI. log4j2 Deployment
The log4j2 configuration file (log4j.xml) below defines a custom log level and plugs in a custom JSON layout.
<?xml version="1.0" encoding="UTF-8"?>
<!--Log level priority: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL -->
<!--The status attribute on Configuration controls log4j2's own internal logging; set it to "trace" to see detailed internal output -->
<!--monitorInterval: log4j2 re-reads this file and reconfigures itself at the given interval, in seconds -->
<!--packages: the package scanned for custom layouts and appenders-->
<configuration status="WARN" monitorInterval="30" packages="com.test.rock.log">
<properties>
<Property name="PROJECT_NAME">my-project</Property>
<Property name="ELK_LOG_PATTERN">%m</Property>
</properties>
<CustomLevels>
<!--Note: the smaller the intLevel, the higher the priority (see the log4j2 docs)-->
<CustomLevel name="CUSTOMER" intLevel="1" />
</CustomLevels>
<!--First define all appenders -->
<appenders>
<!--Console output -->
<console name="Console" target="SYSTEM_OUT">
<ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY" />
<!--Log line format -->
<PatternLayout pattern="ROCK-%d{HH:mm:ss.SSS} %-5level - %msg%n" />
<!-- <JsonLayout compact="true" eventEol="true" />-->
</console>
<Socket name="Socket" host="192.168.1.71" port="12345">
<ThresholdFilter level="CUSTOMER" onMatch="ACCEPT" onMismatch="DENY" />
<!-- <JsonLayout compact="true" eventEol="true" />-->
<ElkJsonPatternLayout pattern="${ELK_LOG_PATTERN}" projectName="${PROJECT_NAME}"/>
</Socket>
<!-- A custom appender (its implementation is not included in this article) -->
<ThriftAppender name="Thrift" host="127.0.0.1">
<ThresholdFilter level="CUSTOMER" onMatch="ACCEPT" onMismatch="DENY" />
<ElkJsonPatternLayout pattern="${ELK_LOG_PATTERN}" projectName="${PROJECT_NAME}"/>
</ThriftAppender>
</appenders>
<!--Then define the loggers; an appender only takes effect when a logger references it -->
<loggers>
<root level="all">
<appender-ref ref="Console" />
<appender-ref ref="Socket"/>
<appender-ref ref="Thrift"/>
</root>
</loggers>
</configuration>
Create the custom JSON layout for log4j2:
package com.test.rock.log;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.lang3.time.DateFormatUtils;
import org.apache.logging.log4j.core.Layout;
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.Node;
import org.apache.logging.log4j.core.config.plugins.*;
import org.apache.logging.log4j.core.layout.AbstractStringLayout;
import org.apache.logging.log4j.core.layout.PatternLayout;
import org.apache.logging.log4j.core.layout.PatternSelector;
import org.apache.logging.log4j.core.pattern.RegexReplacement;
import java.io.File;
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;
import java.nio.charset.Charset;
@Plugin(name = "ElkJsonPatternLayout", category = Node.CATEGORY, elementType = Layout.ELEMENT_TYPE, printObject = true)
public class ElkJsonPatternLayout extends AbstractStringLayout {
/** Project path (working directory) */
private static String PROJECT_PATH;
private PatternLayout patternLayout;
private String projectName;
static {
PROJECT_PATH = new File("").getAbsolutePath();
}
private ElkJsonPatternLayout(Configuration config, RegexReplacement replace, String eventPattern,
PatternSelector patternSelector, Charset charset, boolean alwaysWriteExceptions,
boolean noConsoleNoAnsi, String headerPattern, String footerPattern, String projectName) {
super(config, charset,
PatternLayout.createSerializer(config, replace, headerPattern, null, patternSelector, alwaysWriteExceptions,
noConsoleNoAnsi),
PatternLayout.createSerializer(config, replace, footerPattern, null, patternSelector, alwaysWriteExceptions,
noConsoleNoAnsi));
this.projectName = projectName;
this.patternLayout = PatternLayout.newBuilder()
.withPattern(eventPattern)
.withPatternSelector(patternSelector)
.withConfiguration(config)
.withRegexReplacement(replace)
.withCharset(charset)
.withAlwaysWriteExceptions(alwaysWriteExceptions)
.withNoConsoleNoAnsi(noConsoleNoAnsi)
.withHeader(headerPattern)
.withFooter(footerPattern)
.build();
}
@Override
public String toSerializable(LogEvent event) {
// build the JSON log line here
String message = patternLayout.toSerializable(event);
String jsonStr = new JsonLoggerInfo(projectName, message, event.getLevel().name(), event.getLoggerName(), Thread.currentThread().getName(), event.getTimeMillis()).toString();
return jsonStr + "\n";
}
@PluginFactory
public static ElkJsonPatternLayout createLayout(
@PluginAttribute(value = "pattern", defaultString = PatternLayout.DEFAULT_CONVERSION_PATTERN) final String pattern,
@PluginElement("PatternSelector") final PatternSelector patternSelector,
@PluginConfiguration final Configuration config,
@PluginElement("Replace") final RegexReplacement replace,
// LOG4J2-783 use platform default by default, so do not specify defaultString for charset
@PluginAttribute(value = "charset") final Charset charset,
@PluginAttribute(value = "alwaysWriteExceptions", defaultBoolean = true) final boolean alwaysWriteExceptions,
@PluginAttribute(value = "noConsoleNoAnsi", defaultBoolean = false) final boolean noConsoleNoAnsi,
@PluginAttribute("header") final String headerPattern,
@PluginAttribute("footer") final String footerPattern,
@PluginAttribute("projectName") final String projectName) {
return new ElkJsonPatternLayout(config, replace, pattern, patternSelector, charset,
alwaysWriteExceptions, noConsoleNoAnsi, headerPattern, footerPattern, projectName);
}
public static String getProcessID() {
RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
String name = runtime.getName(); // format: "pid@hostname"
try {
return name.substring(0, name.indexOf('@'));
} catch (Exception e) {
return null;
}
}
public static String getThreadID(){
return "" + (int)(1+Math.random()*10000);
// return "" + Thread.currentThread().getId();
}
/**
 * The JSON payload written for each log event
 */
public static class JsonLoggerInfo{
/** Project name */
private String projectName;
/** Process ID */
private String pid;
/** Thread ID */
private String tid;
/** Thread name */
private String tidname;
/** Log message */
private String message;
/** Log level */
private String level;
/** Logger name (topic) */
private String topic;
/** Event time */
private String time;
public JsonLoggerInfo(String projectName, String message, String level, String topic, String tidname, long timeMillis) {
this.projectName = projectName;
this.pid = getProcessID();
this.tid = getThreadID();
this.tidname = tidname;
this.message = message;
this.level = level;
this.topic = topic;
this.time = DateFormatUtils.format(timeMillis, "yyyy-MM-dd HH:mm:ss.SSS");
}
public String getProjectName() {
return projectName;
}
public String getPid() {
return pid;
}
public String getTid() {
return tid;
}
public String getTidname() {
return tidname;
}
public String getMessage() {
return message;
}
public String getLevel() {
return level;
}
public String getTopic() {
return topic;
}
public String getTime() {
return time;
}
@Override
public String toString() {
try {
return new ObjectMapper().writeValueAsString(this);
} catch (JsonProcessingException e) {
e.printStackTrace();
}
return null;
}
}
}
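Besides log4j-core, ElkJsonPatternLayout needs Jackson (ObjectMapper) and Apache commons-lang3 (DateFormatUtils) on the classpath. The jar versions below are illustrative only, and LogTest is the hypothetical test class from the next snippet; match both to your own build:
# illustrative classpath for running the test class below; adjust versions and paths to your build
java -cp "log4j-api-2.11.2.jar:log4j-core-2.11.2.jar:jackson-core-2.9.9.jar:jackson-databind-2.9.9.jar:jackson-annotations-2.9.9.jar:commons-lang3-3.9.jar:." LogTest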
A test main method (wrapped in a hypothetical LogTest class so it compiles standalone):
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LogTest {
    private static final Logger log = LogManager.getLogger(LogTest.class);

    public static void main(String[] args) throws Exception {
        log.info("abc abc abc");
        for (int i = 0; i < 1000000; i++) {
            // log at the custom CUSTOMER level so the Socket appender's ThresholdFilter accepts it
            log.log(Level.toLevel("CUSTOMER"), "hahahahaha");
            Thread.sleep(1000);
        }
    }
}
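Once the test is running, the events should land in the daily index created by the Logstash elasticsearch output. A quick check from any shell:
# list the logstash indices (one per day)
curl 'http://192.168.1.69:9200/_cat/indices/logstash-*?v'
# peek at a few stored events
curl 'http://192.168.1.69:9200/logstash-*/_search?size=3&pretty'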
VII. Viewing the Data in Kibana
Once logs are flowing, the last step is to inspect the aggregated log data through Kibana's web UI.
- Create an index pattern: in Kibana, go to Management > Index Patterns and create a pattern such as logstash-*, which tells Kibana which Elasticsearch indices to query.
- View the results: open the Discover page to browse, search, and filter the indexed log events.