10.2 Sleuth and Zipkin

Spring Cloud Sleuth is the Spring Cloud solution for distributed tracing.

Span: the basic unit of work.

A span contains:

  1. a 64-bit unique identifier (id)
  2. a description
  3. timestamped events
  4. key-value annotations
  5. the span ID
  6. the parent span ID

The span created when tracing starts is called the root span; its span ID is the same as the trace ID.

Trace

A tree of spans that share a common root span.

A trace has its own 64-bit unique identifier (the trace ID), and every span in the trace shares that trace ID.
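For illustration, a hypothetical trace for one request that fans out to two downstream calls might look like the sketch below; the IDs are invented and shortened:

trace 7ea3f29b
 └─ span 7ea3f29b    root span: its span ID equals the trace ID
     ├─ span 41c8aa02    parent span ID = 7ea3f29b
     └─ span 9d03b7f1    parent span ID = 7ea3f29b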

Annotation

Annotations record the occurrence of events in time and define the start and end of a request (see the timing sketch after this list):

  1. cs (Client Sent): the client has sent a request; this annotation marks the start of the span.
  2. sr (Server Received): the server has received the request and starts processing it; sr - cs gives the network latency.
  3. ss (Server Sent): the server has finished processing and sends the response back to the client; ss - sr gives the server-side processing time.
  4. cr (Client Received): the client has received the server's response; this marks the end of the span; cr - cs gives the total time from the client sending the request to receiving the response.
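A minimal sketch in plain Java of how the four annotations relate to the latencies above; the class name and timestamps are invented purely for illustration:

public class AnnotationTimingSketch {
  public static void main(String[] args) {
    // hypothetical annotation timestamps, in milliseconds since the start of the request
    long cs = 0;    // Client Sent
    long sr = 30;   // Server Received
    long ss = 180;  // Server Sent
    long cr = 215;  // Client Received

    System.out.println("network latency (sr - cs) = " + (sr - cs) + " ms");  // 30 ms
    System.out.println("server handling (ss - sr) = " + (ss - sr) + " ms");  // 150 ms
    System.out.println("round trip      (cr - cs) = " + (cr - cs) + " ms");  // 215 ms
  }
}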
To enable Sleuth in a service, add the starter dependency:

    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-sleuth</artifactId>
    </dependency>
Raising Sleuth's log level to DEBUG makes the tracing instrumentation visible in the logs:

logging:
  level:
    root: INFO
    org.springframework.cloud.sleuth: DEBUG
    # org.springframework.web.servlet.DispatcherServlet: DEBUG
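With Sleuth on the classpath, each log line carries a [application name, trace ID, span ID, exportable] section taken from the logging context, roughly like the following; the timestamp and IDs here are invented for illustration:

2017-06-09 15:20:31.113 DEBUG [microservice-provider-user,2485ec27856c56f4,2485ec27856c56f4,true] …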

Sleuth with ELK

Sleuth puts the trace and span IDs into the logging context (MDC), which is why the logback pattern below can reference X-B3-TraceId, X-B3-SpanId and the related fields. Writing the logs as JSON with logstash-logback-encoder lets Logstash ship them to Elasticsearch, where they can be searched by trace ID. Add the following dependencies:

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-sleuth</artifactId>
    </dependency>
    <dependency>
      <groupId>net.logstash.logback</groupId>
      <artifactId>logstash-logback-encoder</artifactId>
      <version>4.6</version>
    </dependency>

Create logback-spring.xml under the resources directory:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <include resource="org/springframework/boot/logging/logback/defaults.xml" />
  <springProperty scope="context" name="springAppName" source="spring.application.name" />

  <!-- Example for logging into the build folder of your project -->
  <property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}" />
  <property name="CONSOLE_LOG_PATTERN"
    value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr([${springAppName:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-B3-ParentSpanId:-},%X{X-Span-Export:-}]){yellow} %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}" />

  <!-- Appender to log to console -->
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <!-- Minimum logging level to be presented in the console logs -->
      <level>DEBUG</level>
    </filter>
    <encoder>
      <pattern>${CONSOLE_LOG_PATTERN}</pattern>
      <charset>utf8</charset>
    </encoder>
  </appender>

  <!-- Appender to log to file -->
  <appender name="flatfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_FILE}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.gz</fileNamePattern>
      <maxHistory>7</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>${CONSOLE_LOG_PATTERN}</pattern>
      <charset>utf8</charset>
    </encoder>
  </appender>

  <!-- Appender to log to file in a JSON format -->
  <appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_FILE}.json</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}.gz</fileNamePattern>
      <maxHistory>7</maxHistory>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
      <providers>
        <timestamp>
          <timeZone>UTC</timeZone>
        </timestamp>
        <pattern>
          <pattern>
            {
              "severity": "%level",
              "service": "${springAppName:-}",
              "trace": "%X{X-B3-TraceId:-}",
              "span": "%X{X-B3-SpanId:-}",
              "parent": "%X{X-B3-ParentSpanId:-}",
              "exportable": "%X{X-Span-Export:-}",
              "pid": "${PID:-}",
              "thread": "%thread",
              "class": "%logger{40}",
              "rest": "%message"
            }
          </pattern>
        </pattern>
      </providers>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="console" />
    <appender-ref ref="logstash" />
    <!--<appender-ref ref="flatfile"/> -->
  </root>
</configuration>

Create bootstrap.yml:

spring:
  application:
    name: microservice-provider-user

Note: in this example spring.application.name must be placed in the bootstrap.* file, not in application.*, because a custom logback-spring.xml is used; if the property were placed in application.*, the custom logback configuration could not read it correctly.

Sleuth and Zipkin

Zipkin is a distributed tracing system open-sourced by Twitter, designed after Google's Dapper paper. It collects timing data from the system and is used to trace latency across a microservice architecture.

To build a Zipkin server, add the Zipkin server and UI dependencies:

    <dependency>
      <groupId>io.zipkin.java</groupId>
      <artifactId>zipkin-autoconfigure-ui</artifactId>
    </dependency>
    <dependency>
      <groupId>io.zipkin.java</groupId>
      <artifactId>zipkin-server</artifactId>
    </dependency>

The main class:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import zipkin.server.EnableZipkinServer;  // provided by zipkin-server; the package may differ in newer Zipkin versions

@SpringBootApplication
@EnableZipkinServer  // turns this Spring Boot application into a Zipkin server
public class ZipkinServerApplication {
  public static void main(String[] args) {
    SpringApplication.run(ZipkinServerApplication.class, args);
  }
}
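The UI address below assumes the server listens on Zipkin's conventional port 9411; a minimal application.yml sketch for the server (the application name is illustrative):

server:
  port: 9411
spring:
  application:
    name: microservice-zipkin-server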

Open the Zipkin web UI at http://localhost:9411/zipkin/. In the UI:

Service Name corresponds to spring.application.name, here microservice-provider-user.

Start Time and End Time define the time range to search.

Duration is the elapsed time from when a span is created until it is closed.

Annotations Query allows custom query conditions.

Integrating a microservice with Zipkin

Add the following dependencies to the service that should report traces to Zipkin:

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-sleuth</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-sleuth-zipkin</artifactId>
    </dependency>
Then point the service at the Zipkin server and configure sampling:

spring:
  zipkin:
    base-url: http://localhost:9411
  sleuth:
    sampler:
      percentage: 1.0

  • base-url is the address of the Zipkin server started above.
  • percentage is the fraction of requests to sample; the default is 0.1, i.e. 10%.

Multiple samplers

The sampler can also be overridden by declaring a Sampler bean in a configuration class, for example:

  @Bean
  public Sampler defaultSampler() {
    // Other options include NeverSampler (samples nothing) and PercentageBasedSampler
    // (samples a configurable fraction; its constructor takes SamplerProperties).
    return new AlwaysSampler();  // sample and export every span
  }

With the default 10% sampling rate, a large number of spans is dropped; AlwaysSampler exports every span.
