Please credit the source when reposting: https://blog.csdn.net/Amor_Leo/article/details/87891442. Thanks.
Spring Cloud Sleuth Overview
A microservice architecture is a distributed architecture that splits a system into service units along business boundaries, and such a system often has a great many of them. With so many units and complex business logic, errors and exceptions are hard to pin down: a single request may fan out to many services, and the complexity of those internal calls makes problems difficult to localize. A microservice architecture therefore needs distributed tracing, so that for any request we can see which services took part and in what order, making every step of the request visible and letting us locate failures quickly.
Spring Cloud Sleuth's main job is to provide such a distributed-tracing solution.
- Span: the basic unit of work. A span carries a 64-bit unique ID, a 64-bit trace ID, a description, timestamped events, key-value annotations (tags), and the ID of the process handling the span (usually an IP address). The very first span is called the root span; in the root span, the span ID and the trace ID are equal.
- Trace: a set of spans forming a tree structure, identified by a 64-bit unique ID.
- Annotation: records an event at a point in time. The common annotations are:
- cs - Client Sent: the client sends a request; this marks the start of the span.
- sr - Server Received: the server receives the request and starts processing it. (sr - cs) is the network latency.
- ss - Server Sent: the server finishes processing and starts returning the result to the client. (ss - sr) is the server's processing time.
- cr - Client Received: the client finishes receiving the result; this marks the end of the span. (cr - ss) is the time taken for the response to reach the client.
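These four annotations translate directly into latency figures. A minimal sketch of the arithmetic (class name and timestamps are hypothetical; values in milliseconds):

```java
// Deriving latency figures from the four Sleuth/Zipkin annotations.
public class SpanTiming {
    // sr - cs: network delay from client to server
    public static long networkLatency(long cs, long sr) {
        return sr - cs;
    }

    // ss - sr: time the server spent handling the request
    public static long serverProcessing(long sr, long ss) {
        return ss - sr;
    }

    // cr - cs: total round-trip time as seen by the client
    public static long totalRoundTrip(long cs, long cr) {
        return cr - cs;
    }

    public static void main(String[] args) {
        long cs = 0, sr = 10, ss = 40, cr = 55;  // hypothetical timestamps
        System.out.println("network latency: " + networkLatency(cs, sr));   // 10
        System.out.println("server time:     " + serverProcessing(sr, ss)); // 30
        System.out.println("round trip:      " + totalRoundTrip(cs, cr));   // 55
    }
}
```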
Setting up Spring Cloud Sleuth
Basics
- pom
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
- yml
server:
  port: 8000
spring:
  application:
    name: provider-server
logging:
  level:
    org.springframework.cloud.sleuth: DEBUG
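With just these two dependencies and the configuration above, every log line gains a `[appname,traceId,spanId,exportable]` section. A line might look like this (all IDs and values illustrative):

```
2019-02-23 16:30:12.345  INFO [provider-server,5f4ee3d8a3ce1236,5f4ee3d8a3ce1236,true] 12345 --- [nio-8000-exec-1] c.e.demo.HelloController : handling request
```

The trace ID stays the same across all services that take part in one request, while each unit of work gets its own span ID.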
Integrating with ELK
- pom
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.6</version>
</dependency>
- yml
application.yml
server:
  port: 8000
logging:
  level:
    root: INFO
    org.springframework.cloud.sleuth: DEBUG
bootstrap.yml
spring:
  application:
    name: provider-server
# Note: in this example spring.application.name must live in bootstrap.*, not application.*,
# because we use a custom logback-spring.xml. If it were placed in application.*,
# the custom logback configuration could not read the property correctly.
- logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml" />
    <springProperty scope="context" name="springAppName" source="spring.application.name" />
    <!-- Example for logging into the build folder of your project -->
    <property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}" />
    <property name="CONSOLE_LOG_PATTERN"
        value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr([${springAppName:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-B3-ParentSpanId:-},%X{X-Span-Export:-}]){yellow} %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}" />
    <!-- Appender to log to console -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <!-- Minimum logging level to be presented in the console logs -->
            <level>DEBUG</level>
        </filter>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>
    <!-- Appender to log to file -->
    <appender name="flatfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.gz</fileNamePattern>
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>
    <!-- Appender to log to file in a JSON format -->
    <appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}.json</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}.gz</fileNamePattern>
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "parent": "%X{X-B3-ParentSpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="console" />
        <appender-ref ref="logstash" />
        <!--<appender-ref ref="flatfile"/> -->
    </root>
</configuration>
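With the logstash appender enabled, each line written to `${LOG_FILE}.json` is one JSON object shaped by the pattern above. A line might look like this (all values illustrative):

```json
{
  "@timestamp": "2019-02-23T08:30:12.345Z",
  "severity": "INFO",
  "service": "provider-server",
  "trace": "5f4ee3d8a3ce1236",
  "span": "5f4ee3d8a3ce1236",
  "parent": "",
  "exportable": "true",
  "pid": "12345",
  "thread": "http-nio-8000-exec-1",
  "class": "c.e.demo.HelloController",
  "rest": "handling request"
}
```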
The corresponding logstash.conf file:
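The original post does not show the file, so here is a hedged sketch: Logstash tails the JSON file produced by the logstash appender and forwards it to Elasticsearch. The file path, host, and index name are assumptions for this example:

```
input {
  file {
    # Path of the JSON log file written by the "logstash" appender (assumed location)
    path => "/path/to/build/provider-server.json"
    codec => "json"
  }
}
output {
  elasticsearch {
    # Assumed local Elasticsearch instance
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```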
Integrating with Zipkin
- zipkin server
- pom
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-ui</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-server</artifactId>
</dependency>
- yml
server:
  port: 9411
spring:
  application:
    name: zipkin-server
- Application類
// @EnableZipkinServer starts the Zipkin server, collecting spans over HTTP by default
@EnableZipkinServer
@SpringBootApplication
public class ZipkinServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ZipkinServerApplication.class, args);
    }
}
- web Client
- pom
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>
- yml
server:
  port: 8000
spring:
  application:
    name: provider-server
  zipkin:
    enabled: true
    base-url: http://localhost:9411  # Zipkin server address; with Eureka, the Zipkin service ID can be used in the URL
  sleuth:
    sampler:
      percentage: 1.0  # sampling rate; defaults to 0.1, set to 100% here for testing
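The `percentage` property controls what fraction of spans is exported to Zipkin. A simplified sketch of percentage-based sampling (Sleuth's real `PercentageBasedSampler` is more involved; the class below is only an illustration of the idea):

```java
import java.util.Random;

// Simplified percentage-based sampler: export a span with the configured probability.
public class PercentageSampler {
    private final double percentage;        // fraction of traces to export, e.g. 1.0 = 100%
    private final Random random = new Random();

    public PercentageSampler(double percentage) {
        this.percentage = percentage;
    }

    public boolean isSampled() {
        // nextDouble() is uniform in [0, 1); the draw succeeds with probability `percentage`
        return random.nextDouble() < percentage;
    }

    public static void main(String[] args) {
        System.out.println(new PercentageSampler(1.0).isSampled()); // always true
        System.out.println(new PercentageSampler(0.0).isSampled()); // always false
    }
}
```

With `percentage: 1.0`, every trace is exported; in production, a lower rate keeps the tracing overhead and storage volume down.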
Collecting spans via a message broker
- zipkin server
- pom
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-ui</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-server</artifactId>
</dependency>
<!-- Collect spans over a message broker (RabbitMQ) -->
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-collector-rabbitmq</artifactId>
    <version>2.3.1</version>
</dependency>
- yml
server:
  port: 9411
spring:
  application:
    name: zipkin-server
zipkin:
  collector:
    rabbitmq:
      addresses: 192.168.0.111:5672
      username: admin
      password: admin
      queue: zipkin
- Application類
@EnableZipkinServer
@SpringBootApplication
public class ZipkinServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ZipkinServerApplication.class, args);
    }
}
- Web Client
- pom
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.amqp</groupId>
    <artifactId>spring-rabbit</artifactId>
</dependency>
- yml
server:
  port: 8000
spring:
  application:
    name: provider-server
  sleuth:
    sampler:
      percentage: 1.0
  rabbitmq:
    host: 192.168.0.111
    port: 5672
    username: admin
    password: admin
  zipkin:
    rabbitmq:
      queue: zipkin
Persisting data (Elasticsearch)
- zipkin server
- pom
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-ui</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-server</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-collector-rabbitmq</artifactId>
    <version>2.3.1</version>
</dependency>
<!-- Supports Elasticsearch 2.x - 6.x -->
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-storage-elasticsearch-http</artifactId>
    <version>2.3.1</version>
</dependency>
- yml
server:
  port: 9411
zipkin:
  collector:
    rabbitmq:
      addresses: 192.168.0.111:5672
      username: admin
      password: admin
      queue: zipkin
  storage:
    type: elasticsearch  # store Zipkin data in Elasticsearch
    elasticsearch:
      cluster: elasticsearch
      hosts: http://192.168.0.111:9200
      index: zipkin
      index-shards: 5
      index-replicas: 1
- Application類
@EnableZipkinServer
@SpringBootApplication
public class ZipkinServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ZipkinServerApplication.class, args);
    }
}