10.2 Sleuth and Zipkin

Spring Cloud Sleuth is a distributed tracing solution for Spring Cloud.

Span: the basic unit of work.

A span contains:

  1. a 64-bit unique identifier (id)
  2. a description
  3. timestamped events
  4. key-value annotations
  5. the span id
  6. the parent span id

The span created when the trace is initialized is called the root span; its id is the same as the trace id.

Trace

A tree of spans that share the same root span.

A trace has a 64-bit unique identifier (the trace id); every span within the trace shares that trace id.

Annotation

Annotations record events in time and are used to define the start and end of a request (a worked timing example follows the list):

  1. cs (Client Sent): the client has sent a request; this annotation marks the start of the span.
  2. sr (Server Received): the server has received the request and starts processing it; sr - cs = network latency.
  3. ss (Server Sent): the server has finished processing the request and the response is sent back to the client; ss - sr = time the server needed to process the request.
  4. cr (Client Received): the client has received the response, marking the end of the span; cr - cs = total time from the client sending the request to receiving the server's response.
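
For example (the timings are made up for illustration): if cs is recorded at 0 ms, sr at 20 ms, ss at 120 ms and cr at 150 ms, then the network latency is sr - cs = 20 ms, the server-side processing time is ss - sr = 100 ms, and the round trip seen by the client is cr - cs = 150 ms.

To enable tracing in a service, add the Sleuth starter dependency: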
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-sleuth</artifactId>
    </dependency>
Optionally raise the log level so that Sleuth's tracing activity is visible in the logs:

logging:
  level:
    root: INFO
    org.springframework.cloud.sleuth: DEBUG
    # org.springframework.web.servlet.DispatcherServlet: DEBUG
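
With the starter on the classpath, Sleuth tags every log statement with the application name, trace id and span id (the bracketed section of the log pattern). A minimal sketch of a controller that produces such output, assuming spring-boot-starter-web is also on the classpath; the class name and endpoint are illustrative, not part of the original example:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Illustrative controller: Sleuth automatically enriches each log line
// produced here with the current trace id and span id.
@RestController
public class TraceDemoController {

  private static final Logger log = LoggerFactory.getLogger(TraceDemoController.class);

  @GetMapping("/trace-demo")
  public String traceDemo() {
    log.info("handling /trace-demo"); // printed as ... [appname,traceId,spanId,exportable] ... in the console
    return "ok";
  }
}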

Using Sleuth with ELK

To ship the trace-enriched logs to ELK (Elasticsearch, Logstash and Kibana), add the Actuator, Sleuth and logstash-logback-encoder dependencies:

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-sleuth</artifactId>
    </dependency>
    <dependency>
      <groupId>net.logstash.logback</groupId>
      <artifactId>logstash-logback-encoder</artifactId>
      <version>4.6</version>
    </dependency>

Create logback-spring.xml in the resources directory:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <include resource="org/springframework/boot/logging/logback/defaults.xml" />
  <springProperty scope="context" name="springAppName" source="spring.application.name" />
  <!-- Example for logging into the build folder of your project -->
  <property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}" /><property name="CONSOLE_LOG_PATTERN"
    value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr([${springAppName:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-B3-ParentSpanId:-},%X{X-Span-Export:-}]){yellow} %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}" />

  <!-- Appender to log to console -->
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <!-- Minimum logging level to be presented in the console logs -->
      <level>DEBUG</level>
    </filter>
    <encoder>
      <pattern>${CONSOLE_LOG_PATTERN}</pattern>
      <charset>utf8</charset>
    </encoder>
  </appender>

  <!-- Appender to log to file -->
  <appender name="flatfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_FILE}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.gz</fileNamePattern>
      <maxHistory>7</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>${CONSOLE_LOG_PATTERN}</pattern>
      <charset>utf8</charset>
    </encoder>
  </appender>

  <!-- Appender to log to file in a JSON format -->
  <appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_FILE}.json</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}.gz</fileNamePattern>
      <maxHistory>7</maxHistory>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
      <providers>
        <timestamp>
          <timeZone>UTC</timeZone>
        </timestamp>
        <pattern>
          <pattern>
            {
              "severity": "%level",
              "service": "${springAppName:-}",
              "trace": "%X{X-B3-TraceId:-}",
              "span": "%X{X-B3-SpanId:-}",
              "parent": "%X{X-B3-ParentSpanId:-}",
              "exportable": "%X{X-Span-Export:-}",
              "pid": "${PID:-}",
              "thread": "%thread",
              "class": "%logger{40}",
              "rest": "%message"
            }
          </pattern>
        </pattern>
      </providers>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="console" />
    <appender-ref ref="logstash" />
    <!--<appender-ref ref="flatfile"/> -->
  </root>
</configuration>
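
With this configuration the logstash appender writes each log record as a JSON object, including the trace, span and parent fields, to ${LOG_FILE}.json. A typical next step (not shown here) is to have Logstash, or a log shipper such as Filebeat, read that file and forward the entries to Elasticsearch, so that Kibana can be used to search the logs by trace id.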

Write bootstrap.yml:

spring:
  application:
    name: microservice-provider-user

# Note: in this example spring.application.name must be placed in the bootstrap.* file, not in the application.* file, because a custom logback-spring.xml is used.
# If it were placed in the application.* file, the custom logback configuration could not read the property correctly.

Sleuth with Zipkin

Zipkin is a distributed tracing system open-sourced by Twitter, based on the design described in Google's Dapper paper.

It collects timing data from the services in a system and is used to trace latency across a microservice architecture. To build a standalone Zipkin server, add the following dependencies:

    <dependency>
      <groupId>io.zipkin.java</groupId>
      <artifactId>zipkin-autoconfigure-ui</artifactId>
    </dependency>
    <dependency>
      <groupId>io.zipkin.java</groupId>
      <artifactId>zipkin-server</artifactId>
    </dependency>

The main class:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import zipkin.server.EnableZipkinServer;

@SpringBootApplication
@EnableZipkinServer
public class ZipkinServerApplication {
  public static void main(String[] args) {
    SpringApplication.run(ZipkinServerApplication.class, args);
  }
}

With the server port configured as 9411 (for example server.port: 9411 in its configuration file), the Zipkin web UI is available at http://localhost:9411/zipkin/

In the Zipkin UI:

  1. Service Name corresponds to spring.application.name (here microservice-provider-user).
  2. Start time and End time: the start and end time.
  3. Duration: the time elapsed from when the span was created until it was closed.
  4. Annotations Query: a custom query on annotations.

Integrating the microservices with Zipkin

Add the Actuator, Sleuth and Sleuth-Zipkin dependencies to each microservice that should report traces:

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-sleuth</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-sleuth-zipkin</artifactId>
    </dependency>
Then configure the application to report spans to the Zipkin server and set the sampling rate:

spring:
  zipkin:
    base-url: http://localhost:9411
  sleuth:
    sampler:
      percentage: 1.0

  • percentage: the percentage of requests that are sampled; the default is 0.1, i.e. 10% of requests.
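
Note that Sleuth only instruments RestTemplate instances that are declared as Spring beans; a RestTemplate created with new inside a method is not traced. A minimal sketch of such a bean (the class name is illustrative, not part of the original example):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

  // Declaring the RestTemplate as a bean lets Sleuth add its tracing
  // interceptor, so outgoing calls carry the trace headers and are
  // reported to Zipkin.
  @Bean
  public RestTemplate restTemplate() {
    return new RestTemplate();
  }
}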

Multiple samplers

Sleuth provides several Sampler implementations. To report every span, declare an AlwaysSampler bean (for example in a @Configuration class or the main application class):

  @Bean
  public Sampler defaultSampler() {
    // Alternatives: new NeverSampler() samples nothing;
    // the default PercentageBasedSampler samples only a fraction of requests.
    return new AlwaysSampler(); // sample every request
  }

The default sampler only samples 10% of requests, so a large number of spans are never reported.
