ELK Configuration and Design
Software and Versions
CentOS 7
Oracle JDK 8
Kibana 4.5.2
Elasticsearch 2.3.4
logstash 2.3.4
filebeat 1.2.3
Check the installed version with: filebeat --version
logstash + ES + Kibana are installed on the same virtual machine. IP address: 192.168.1.50
Problems to Solve
Aggregate the logs produced by all microservices, and add the logs to different indices according to the environment.
Outstanding issues:
Access control for micro-service -> logstash (SSL could be tried); access control for logstash -> ES (both are installed on the same machine, so this is easy to solve); access control for Kibana (still to be investigated).
Currently a separate filebeat must be installed for each OFBiz environment; in the future, could we unify on an approach similar to the one used for the Spring Boot services?
Analysis of the Existing Microservice Logs
The existing microservices come in two kinds: webapps built on the OFBiz framework, and Spring Boot web apps. Spring Boot uses logback for logging by default; OFBiz can be configured to log via log4j.
System Architecture Diagram
Architecture notes:
For Spring Boot webapps, logstash-logback-encoder (see: https://github.com/logstash/logstash-logback-encoder) sends the logs directly to logstash through a LogstashTcpSocketAppender. No Java code changes are required.
For OFBiz webapps:
- Modify the log4j2.xml configuration to use JSONLayout so that logs are emitted in JSON format (one event per line; exceptions are also written on a single line).
- Install filebeat to watch the log file and ship it over TCP to logstash.
After collecting the logs, logstash uses the logenv field to distinguish environments and pushes the JSON-formatted logs into different ES indices.
INT Deployment Diagram
Software Installation
All software is installed via yum; the individual repo files are listed below.
logstash.repo
[logstash-2.3]
name=Logstash repository for 2.3.x packages
baseurl=https://packages.elastic.co/logstash/2.3/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
kibana.repo
[kibana-4.5]
name=Kibana repository for 4.5.x packages
baseurl=http://packages.elastic.co/kibana/4.5/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
filebeat.repo
[beats]
name=Elastic Beats Repository
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1
Installation command:
sudo yum install logstash
Start automatically on boot:
sudo chkconfig --add logstash
Start/stop commands:
sudo service logstash start/stop/restart
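The same pattern applies to the other components. Assuming the repo files above are in place, the remaining packages can be installed and registered the same way (elasticsearch and kibana on the ELK host, filebeat on each OFBiz host):

```shell
sudo yum install elasticsearch
sudo yum install kibana
sudo yum install filebeat

sudo chkconfig --add elasticsearch
sudo chkconfig --add kibana
sudo chkconfig --add filebeat
```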
Logging Configuration for Spring Boot Microservices
Spring Boot logging needs the logstash encoder added as a dependency.
Third-party dependencies in Maven
The conditional expressions in logback-spring.xml depend on janino:
<dependency>
<groupId>org.codehaus.janino</groupId>
<artifactId>janino</artifactId>
<version>2.7.8</version>
</dependency>
<dependency>
<groupId>net.logstash.logback</groupId>
<artifactId>logstash-logback-encoder</artifactId>
<version>4.7</version>
</dependency>
Configuration of logback-spring.xml
The file must be named logback-spring.xml so that Spring's configuration can be read, which is what keeps logback-spring.xml reusable across services:
Include support for a new <springProperty> element which can be used in
`logback-spring.xml` files to add properties from the Spring
Environment. (From: https://github.com/spring-projects/spring-boot/commit/055ace37f006120b0006956b03c7f358d5f3729f)
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<springProperty name="destination" source="logstash.destination"/>
<springProperty name="logstashEnabled" source="logstash.enable"/>
<springProperty name="appName" source="micro-service.id"/>
<springProperty name="env" source="spring.profiles.active"/>
<include resource="org/springframework/boot/logging/logback/defaults.xml" />
<include resource="org/springframework/boot/logging/logback/console-appender.xml" />
<if condition='property("logstashEnabled").equalsIgnoreCase("true")'>
<then>
<appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<destination>${destination}</destination>
<encoder class="net.logstash.logback.encoder.LogstashEncoder">
<customFields>{"logenv":"${env}","appname":"${appName}"}</customFields>
</encoder>
</appender>
</then>
</if>
<appender name="FILE"
class="ch.qos.logback.core.rolling.RollingFileAppender">
<encoder>
<pattern>${FILE_LOG_PATTERN}</pattern>
</encoder>
<file>${LOG_FILE}</file>
<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
<fileNamePattern>${LOG_FILE}.%i</fileNamePattern>
</rollingPolicy>
<triggeringPolicy
class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
<MaxFileSize>10MB</MaxFileSize>
</triggeringPolicy>
<if condition='property("logstashEnabled").equalsIgnoreCase("true")'>
<then>
<encoder class="net.logstash.logback.encoder.LogstashEncoder">
<customFields>{"logenv":"${env}","appname":"${appName}"}</customFields>
</encoder>
</then>
</if>
</appender>
<root level="INFO">
<if condition='property("env").equalsIgnoreCase("dev")'>
<then>
<appender-ref ref="CONSOLE" />
</then>
</if>
<appender-ref ref="FILE" />
<if condition='property("logstashEnabled").equalsIgnoreCase("true")'>
<then>
<appender-ref ref="stash" />
</then>
</if>
</root>
</configuration>
Two custom fields are added. Their values all come from the Spring configuration, which keeps logback-spring.xml reusable across all the Spring Boot microservices:
appname: distinguishes the microservice, e.g. sourcing, sample
logenv: distinguishes the environment, e.g. int, preprd
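With these settings, each event that LogstashEncoder ships over TCP is a single JSON object along the following lines (field set abbreviated and values illustrative; the exact fields depend on the encoder version):

```json
{
  "@timestamp": "2016-08-03T10:15:30.123+08:00",
  "@version": 1,
  "message": "Started ApiGatewayApplication in 8.2 seconds",
  "logger_name": "org.springframework.boot.StartupInfoLogger",
  "thread_name": "main",
  "level": "INFO",
  "logenv": "int",
  "appname": "apigateway"
}
```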
logstash-related settings in application.properties
spring.profiles.active=dev
micro-service.id=apigateway
logstash.enable = true
logstash.destination=192.168.1.50:4560
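Because the appender and appender-ref are both wrapped in janino conditions on logstashEnabled, shipping can be switched off per environment without touching logback-spring.xml. For example, a purely local dev setup might use (values illustrative):

```properties
spring.profiles.active=dev
micro-service.id=apigateway
# No logstash in local dev: the "stash" appender is never created or referenced.
logstash.enable=false
```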
Logging Configuration for OFBiz Microservices
Configure log4j2.xml to use the JSON layout:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration monitorInterval="60">
<!--
Default configuration for logging; for customizations refer to http://logging.apache.org/log4j/2.x/manual/configuration.html.
With this configuration the following behavior is defined:
* all log messages of severity "warning" or greater, generated by external jars, are logged in the ofbiz.log file and in the console
* all log messages of any severity, generated by OFBiz, are logged in the ofbiz.log file and in the console
* all log messages of severity "error" or greater are also logged in the error.log file
When the ofbiz.log file reaches 1MB in size a new file is created and a date/sequence suffix is added; up to 10 files are kept.
When the error.log file reaches 1MB in size a new file is created and a date/sequence suffix is added; up to 3 files are kept.
The settings in this configuration file can be changed without restarting the instance: every 60 seconds the file is checked for modifications.
-->
<Appenders>
<Console name="stdout" target="SYSTEM_OUT">
<PatternLayout pattern="%date{DEFAULT} |%-20.20thread |%-30.30logger{1}|%level{length=1}| %message%n"/>
</Console>
<RollingFile name="ofbiz" fileName="runtime/logs/ofbiz.log"
filePattern="runtime/logs/ofbiz-%d{yyyy-MM-dd}-%i.log">
<!-- <PatternLayout pattern="%date{DEFAULT} |%-20.20thread |%-30.30logger{1}|%level{length=1}| %message%n"/> -->
<JSONLayout complete="false" compact="true" eventEol="true"/>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="10 MB"/>
</Policies>
<DefaultRolloverStrategy max="10"/>
</RollingFile>
<RollingFile name="error" fileName="runtime/logs/error.log"
filePattern="runtime/logs/error-%d{yyyy-MM-dd}-%i.log">
<ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="%date{DEFAULT} |%-20.20thread |%-30.30logger{1}|%level{length=1}| %message%n"/>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="10 MB"/>
</Policies>
<DefaultRolloverStrategy max="3"/>
</RollingFile>
<Async name="async">
<AppenderRef ref="ofbiz"/>
<AppenderRef ref="stdout"/>
<AppenderRef ref="error"/>
</Async>
</Appenders>
<Loggers>
<logger name="org.ofbiz.base.converter.Converters" level="warn"/>
<logger name="com.okchem.b2b.datasync.jms" level="warn"/>
<logger name="org.apache" level="warn"/>
<logger name="freemarker" level="warn"/>
<Root level="all">
<AppenderRef ref="async"/>
</Root>
</Loggers>
</Configuration>
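For reference, with compact="true" and eventEol="true" each JSONLayout event comes out as one line of JSON, roughly like the following (fields abbreviated, values illustrative):

```json
{"timeMillis":1470190530123,"thread":"default-invoker-15","level":"WARN","loggerName":"org.ofbiz.base.converter.Converters","message":"Null converter result","endOfBatch":false,"loggerFqcn":"org.apache.logging.log4j.spi.AbstractLogger"}
```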
filebeat Configuration for OFBiz Microservices
filebeat can be configured to output directly to ES, or to output to logstash for further parsing. Here it outputs to logstash, which handles all processing and routing centrally.
################### Filebeat Configuration Example #########################
############################# Filebeat ######################################
filebeat:
# List of prospectors to fetch data.
prospectors:
.......................
paths:
- /home/okchem/storage92g/ofbiz/runtime/logs/ofbiz.log
fields:
appname: ofbiz
logenv: int
...................
output:
### Logstash as output
logstash:
# The Logstash hosts
hosts: ["192.168.1.50:5044"]
Configuration of logstash
logstash combines individual plugins to parse logs and output them to ES. The commonly used grok plugin is not used here (it is reportedly not very efficient, and its configuration is fairly involved).
The logstash configuration file, logstash.conf:
input {
tcp {
port => 4560
codec => json_lines
}
beats {
port => 5044
codec => "json"
}
}
output {
elasticsearch {
hosts => "localhost:9200"
index => "%{logenv}-logstash-%{+YYYY.MM.dd}"
}
stdout{
codec => rubydebug
}
}
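The tcp input with the json_lines codec simply expects newline-delimited JSON objects over a socket, which is also what LogstashTcpSocketAppender sends. The wire format can be sketched end to end in a few lines; the toy server below stands in for the logstash input (host, port, and field values are illustrative):

```python
import json
import socket
import threading

received = []

# Stand-in for logstash's tcp input with the json_lines codec: a toy server
# that reads one newline-delimited JSON event. Port 0 picks a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def accept_one():
    conn, _ = server.accept()
    line = conn.makefile("r", encoding="utf-8").readline()  # one event per line
    received.append(json.loads(line))
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

# What the appender sends: a single JSON object terminated by a newline.
event = {"@timestamp": "2016-08-03T10:00:00.000Z", "message": "smoke test",
         "logenv": "int", "appname": "apigateway"}
client = socket.create_connection(("127.0.0.1", port))
client.sendall((json.dumps(event) + "\n").encode("utf-8"))
client.close()

t.join()
server.close()
print(received[0]["logenv"])  # int
```

The same trick, pointed at 192.168.1.50:4560 instead of the toy server, gives a quick smoke test of the real tcp input.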
In the input section, the tcp plugin listens on port 4560 to receive the logs shipped by the Spring Boot microservices; the beats plugin listens on port 5044 to receive the logs from filebeat, parsing each line of text as JSON.
The elasticsearch output plugin pushes the logs into an index. Here logenv is taken from the custom field configured into the logs above, so logs from the same environment go into the same index. The index name uses the logstash-%{+YYYY.MM.dd} date pattern, so ES creates one index per day (the approach recommended for newer versions of ES). Browsing /var/lib/elasticsearch/elasticsearch/nodes/0/indices/ shows directories such as:
dev-logstash-2016.08.03 dev-logstash-2016.08.08 envdev-logstash-2016.08.04 envdev-logstash-2016.08.07 logstash-2016.07.26 logstash-2016.07.29 logstash-2016.08.01
dev-logstash-2016.08.04 envdev-logstash-2016.08.02 envdev-logstash-2016.08.05 envdev-logstash-2016.08.08 logstash-2016.07.27 logstash-2016.07.30 logstash-2016.08.02
dev-logstash-2016.08.05 envdev-logstash-2016.08.03 envdev-logstash-2016.08.06 %{logenv}-logstash-2016.08.02 logstash-2016.07.28 logstash-2016.07.31
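The literal %{logenv}-logstash-2016.08.02 directory in the listing above is what logstash's sprintf index pattern produces when an event is missing the logenv field: the placeholder is left unexpanded. A simplified sketch of the expansion (real logstash also derives the date part from the event's own timestamp):

```python
def index_for(event, day):
    # Mimics "%{logenv}-logstash-%{+YYYY.MM.dd}": a missing field is left
    # as the literal pattern text, exactly as in the directory listing above.
    logenv = event.get("logenv", "%{logenv}")
    return "{}-logstash-{}".format(logenv, day)

print(index_for({"logenv": "int", "message": "ok"}, "2016.08.03"))  # int-logstash-2016.08.03
print(index_for({"message": "no logenv field"}, "2016.08.02"))      # %{logenv}-logstash-2016.08.02
```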
Configuration of ES and Kibana
The default configuration is sufficient.
Viewing Logs Through Kibana
Open http://192.168.1.50:5601/, click the Discover menu, and select int-logstash-* (the wildcard covers all logs from the int environment). By default only the last 15 minutes of logs are shown; the time picker in the top-right corner can be used to quickly filter logs by time.