Integrating Spring Boot with Kafka and Logstash for Asynchronous Logging


Preface

In enterprise development, an application typically runs in several test environments, so a dedicated server is used for log collection. Log storage therefore needs to be isolated from the application itself, which raises the question of writing logs asynchronously. We choose Kafka as the asynchronous message queue for its high throughput, and use Logstash to consume the log messages. A full ELK stack would also work (if your budget allows).

Setting Up Kafka

I usually start it with docker-compose; the compose file is as follows:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper   ## image
    ports:
      - "2181:2181"                 ## port exposed to the host
  kafka:
    image: wurstmeister/kafka       ## image
    volumes:
      - /etc/localtime:/etc/localtime ## mount (keeps the container clock in sync with the host)
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME:    ## change this: set to the host machine IP
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181       ## Kafka depends on ZooKeeper
  kafka-manager:
    image: sheepkiller/kafka-manager                ## image: open-source web UI for managing Kafka clusters
    environment:
      ZK_HOSTS:                    ## change this: set to the host machine IP
    ports:
      - "9000:9000"

docker-compose up -d    # start in the background
docker-compose stop     # stop the services
docker-compose ps       # list the running services
docker-compose build    # build the images
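
To verify that the broker is reachable before wiring up the application, you can send a test message with a plain Kafka producer. A minimal sketch, assuming the kafka-clients dependency is on the classpath; IP:9092 is the same placeholder used throughout this article, and abklog_topic matches the topic Logstash consumes below:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaSmokeTest {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "IP:9092"); // replace with your host IP
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send a test message to the topic Logstash will consume
            RecordMetadata meta = producer
                    .send(new ProducerRecord<>("abklog_topic", "hello from smoke test"))
                    .get(); // block until the broker acknowledges
            System.out.printf("written to %s-%d@%d%n",
                    meta.topic(), meta.partition(), meta.offset());
        }
    }
}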

Setting Up Logstash

Logstash is likewise started with docker-compose; the files are as follows:

docker-compose.yml:

version: "3"
services:
    logstash-1:
        image: logstash:7.0.0
        container_name: logstash
        volumes:
            - ${LOGSTASH_CONFIG_DIR}/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:rw
            - ${LOGSTASH_CONFIG_DIR}/logstash.yml:/usr/share/logstash/config/logstash.yml:rw
        network_mode: "host"
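
The compose file expects the LOGSTASH_CONFIG_DIR environment variable to point at the directory holding the two config files below. A minimal way to set it and start the container (the /opt/logstash path is a hypothetical example; adjust to your setup):

export LOGSTASH_CONFIG_DIR=/opt/logstash   # hypothetical directory holding logstash.conf and logstash.yml
docker-compose up -d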

logstash.conf:

input {
  kafka {
    bootstrap_servers => "IP:9092"   # replace IP with your Kafka host
    topics => ["abklog_topic"]       # topic the logback appender writes to
    group_id => "abklog_topic"
  }
}


output {
  # write one file per host, rotated hourly by the date pattern in the path
  file {
    file_mode => 0777
    dir_mode => 0777
    path => "/path/to/%{+yyyy-MM-dd-HH}/%{host}.log"
  }
  # also print each event to stdout for debugging
  stdout {
    codec => rubydebug
  }
}

logstash.yml:

# bind the Logstash HTTP API to the current host IP
http.host: 192.168.56.121
# Elasticsearch hosts for X-Pack monitoring (optional)
#xpack.monitoring.elasticsearch.hosts:
#- http://10.2.114.110:9204
# enable or disable X-Pack monitoring of Elasticsearch
#xpack.monitoring.enabled: true

Java Logging Configuration

Add the following dependency to your Spring Boot project:

<!-- https://mvnrepository.com/artifact/com.github.danielwegener/logback-kafka-appender -->
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC2</version>
</dependency>
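
The logback.xml below reads spring.application.name and spring.kafka.bootstrap-servers from the Spring Environment, so a matching application.yml might look like this (a sketch; the application name fast-elk is a hypothetical placeholder, and IP should be replaced with your Kafka host):

spring:
  application:
    name: fast-elk              # hypothetical; becomes the "module" prefix in each log line
  kafka:
    bootstrap-servers: IP:9092  # replace IP with your Kafka host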

logback.xml configuration:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
	<!-- springProfile activates the enclosed config when its name matches the active profile in spring.profiles.active -->
	<springProfile name="dev">
		<!-- configuration enabled when the "dev" profile is active -->
		<springProperty scope="context" name="module" source="spring.application.name"
						defaultValue="undefined"/>
		<!-- springProperty reads values from the Spring Environment; here we read the value set in application.yml -->
		<springProperty scope="context" name="bootstrapServers" source="spring.kafka.bootstrap-servers"
						defaultValue="IP:9092"/>
		<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
			<!-- encoders are assigned the type
                 ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
			<encoder>
				<pattern>%boldYellow(${module}) | %d | %highlight(%-5level)| %cyan(%logger{15}) - %msg %n</pattern>
			</encoder>
		</appender>
		<!-- Kafka appender configuration -->
		<appender name="kafka" class="com.github.danielwegener.logback.kafka.KafkaAppender">
			<encoder>
				<pattern>${module} | %d | %-5level| %logger{15} - %msg</pattern>
			</encoder>
			<topic>abklog_topic</topic>
			<keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
			<deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>

			<!-- Optional parameter to use a fixed partition -->
			<!-- <partition>0</partition> -->

			<!-- Optional parameter to include log timestamps into the kafka message -->
			<!-- <appendTimestamp>true</appendTimestamp> -->

			<!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
			<!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
			<!-- bootstrap.servers is the only mandatory producerConfig -->
			<producerConfig>bootstrap.servers=${bootstrapServers}</producerConfig>

			<!-- fall back to the console if Kafka is unavailable -->
			<appender-ref ref="STDOUT"/>

		</appender>
		<!-- attach the Kafka appender to specific loggers in the project -->
		<!--<logger name="org.springframework.test" level="INFO" >
			<appender-ref ref="kafka" />
		</logger>-->
		<logger name="com.fast.cloud.fastelk.controller" level="INFO" >
			<appender-ref ref="kafka" />
		</logger>
		<root level="info">
			<appender-ref ref="STDOUT" />
		</root>
	</springProfile>
</configuration>

Remember to replace the Kafka IP with your real host IP.
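
With everything wired up, ordinary SLF4J logging inside the configured package is delivered to Kafka asynchronously and consumed by Logstash. A minimal sketch of a controller in the com.fast.cloud.fastelk.controller package covered by the logger above (the class name and endpoint are hypothetical):

package com.fast.cloud.fastelk.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LogDemoController {

    private static final Logger log = LoggerFactory.getLogger(LogDemoController.class);

    @GetMapping("/log-demo")
    public String logDemo() {
        // INFO and above goes to the "kafka" appender via the logger
        // configured for com.fast.cloud.fastelk.controller in logback.xml
        log.info("demo request handled");
        return "ok";
    }
}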
