CentOS 7 Spring MVC + Kafka tutorial: two ways to configure Kafka (Spring XML configuration and Java code configuration), plus Kafka SASL authentication (username/password for producing and consuming)

For installing Kafka and enabling authentication, see the previous article:

CentOS 7 Kafka security authentication (username/password for producing and consuming) + systemctl auto-start on boot

Spring version: 4.2.5.RELEASE
Kafka version: kafka_2.12-2.2.0
(Because of the Spring version, the newest Kafka client cannot be used: spring-kafka 2.3 requires Spring 5. Version 2.2.0 was tested and works.)

After several days of configuration and testing, I found there are two ways to integrate Kafka with Spring:
1. Pure Java code configuration. Advantages: convenient and flexible; listener consumers can be created freely, and topics can be consumed via annotations.
2. Pure XML configuration. Disadvantages: creating consumers is inflexible and requires editing the XML; annotations cannot be used.

Spring configuration:

Add the dependency to pom.xml:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.0.RELEASE</version>
</dependency>
First, create a shared configuration file, kafka.properties:
################# Kafka common configuration ##################
# broker cluster
kafka.bootstrap.servers = ip:9092

# SASL authentication settings
# Every configuration in this article includes these username/password settings. If SASL is not enabled on the Kafka broker, these settings will prevent the client from connecting: keep them only if your broker has SASL enabled, and remove them otherwise.
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

################# Kafka producer configuration ##################
kafka.producer.acks = all
# number of retries after a failed send
kafka.producer.retries = 3
kafka.producer.linger.ms = 10
# send buffer, in bytes (33554432 would be a 32 MB buffer; the 40960 used here is only 40 KB)
kafka.producer.buffer.memory = 40960
# batch size in bytes: when multiple records are sent to the same partition, the producer tries to combine them into fewer requests, which helps both client and server performance
kafka.producer.batch.size = 4096
# default topic
kafka.producer.defaultTopic = topone
kafka.producer.key.serializer = org.apache.kafka.common.serialization.StringSerializer
kafka.producer.value.serializer = org.apache.kafka.common.serialization.StringSerializer


################# Kafka consumer configuration ##################
# if true, the consumer's offsets are committed periodically in the background
kafka.consumer.enable.auto.commit = true
# concurrency of the listener container
kafka.consumer.concurrency = 3
# auto-commit interval, used when enable.auto.commit=true
kafka.consumer.auto.commit.interval.ms=1000
# consumer group ID (publish-subscribe: if several consumers should each receive every message from one producer, give each consumer its own group; within a single group, only one consumer receives a given message)
kafka.consumer.group.id = sys_topone
kafka.alarm.topic = topone
# timeout used to detect consumer failures when Kafka's group management is used
kafka.consumer.session.timeout.ms = 30000
kafka.consumer.key.deserializer = org.apache.kafka.common.serialization.StringDeserializer
kafka.consumer.value.deserializer = org.apache.kafka.common.serialization.StringDeserializer
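These values can be sanity-checked outside of Spring with plain `java.util.Properties`. A minimal sketch (the inline string below stands in for the real kafka.properties file); note that with the values above, `buffer.memory` (40960 bytes) holds only ten 4096-byte batches:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class KafkaPropsCheck {
    public static void main(String[] args) throws IOException {
        // Stand-in for kafka.properties; in a real project you would use
        // props.load(new FileInputStream("kafka.properties")) instead.
        String sample =
                "kafka.bootstrap.servers = ip:9092\n" +
                "kafka.producer.buffer.memory = 40960\n" +
                "kafka.producer.batch.size = 4096\n";
        Properties props = new Properties();
        props.load(new StringReader(sample));

        long bufferMemory = Long.parseLong(props.getProperty("kafka.producer.buffer.memory"));
        long batchSize = Long.parseLong(props.getProperty("kafka.producer.batch.size"));
        System.out.println(props.getProperty("kafka.bootstrap.servers"));
        // how many full batches fit in the producer's send buffer
        System.out.println(bufferMemory / batchSize);
    }
}
```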

Part 1: Configuring Kafka in Java code (Spring MVC)

1. Create the producer configuration class

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

import java.util.HashMap;
import java.util.Map;

@Configuration
@EnableKafka
public class KafkaProducerConfig {
	// read the kafka.properties fields via @Value annotations
    @Value("${kafka.bootstrap.servers}")
    private String kafka_bootstrap_servers;
    @Value("${kafka.producer.acks}")
    private String kafka_producer_acks;
    @Value("${kafka.producer.retries}")
    private String kafka_producer_retries;
    @Value("${kafka.producer.linger.ms}")
    private String kafka_producer_linger_ms;
    @Value("${kafka.producer.buffer.memory}")
    private String kafka_producer_buffer_memory;
    @Value("${kafka.producer.batch.size}")
    private String kafka_producer_batch_size;
    @Value("${kafka.producer.defaultTopic}")
    private String kafka_producer_defaultTopic;

    @Value("${security.protocol}")
    private String security_protocol;
    @Value("${sasl.mechanism}")
    private String sasl_mechanism;
    @Value("${sasl.jaas.config}")
    private String sasl_jaas_config;

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        KafkaTemplate<String, String> kafkaTemplate = new KafkaTemplate<String, String>(producerFactory());
        kafkaTemplate.setDefaultTopic(kafka_producer_defaultTopic);
        return kafkaTemplate;
    }

    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka_bootstrap_servers);
        properties.put(ProducerConfig.RETRIES_CONFIG, kafka_producer_retries);
        properties.put(ProducerConfig.BATCH_SIZE_CONFIG, kafka_producer_batch_size);
        properties.put(ProducerConfig.LINGER_MS_CONFIG, kafka_producer_linger_ms);
        properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, kafka_producer_buffer_memory);
        properties.put(ProducerConfig.ACKS_CONFIG, kafka_producer_acks);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

		// SASL settings: keep these only if the broker has SASL enabled; remove them otherwise, or the client cannot connect
        properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, security_protocol);
        properties.put(SaslConfigs.SASL_MECHANISM, sasl_mechanism);
        properties.put(SaslConfigs.SASL_JAAS_CONFIG,sasl_jaas_config);
        return new DefaultKafkaProducerFactory<>(properties);
    }

}
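The `sasl.jaas.config` value injected above is a single line whose exact quoting (the embedded double quotes and the trailing semicolon) is easy to get wrong. A plain-Java sketch that assembles it from a username and password; the `admin`/`admin` credentials are just the example values from kafka.properties:

```java
public class JaasConfigBuilder {
    // Builds the one-line JAAS value expected by sasl.jaas.config for the PLAIN mechanism.
    static String plainJaas(String username, String password) {
        return "org.apache.kafka.common.security.plain.PlainLoginModule required"
                + " username=\"" + username + "\""
                + " password=\"" + password + "\";"; // the trailing ';' is mandatory
    }

    public static void main(String[] args) {
        System.out.println(plainJaas("admin", "admin"));
    }
}
```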

2. Create the consumer configuration class

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

import java.util.HashMap;
import java.util.Map;

@Configuration
@EnableKafka
public class KafkaConsumerConfig {
    // read the kafka.properties fields via @Value annotations
    @Value("${kafka.bootstrap.servers}")
    private String kafka_bootstrap_servers;

    @Value("${kafka.consumer.enable.auto.commit}")
    private String kafka_consumer_enable_auto_commit;
    @Value("${kafka.consumer.concurrency}")
    private String kafka_consumer_concurrency;
    @Value("${kafka.consumer.auto.commit.interval.ms}")
    private String kafka_consumer_auto_commit_interval_ms;
    @Value("${kafka.consumer.group.id}")
    private String kafka_consumer_group_id;
    @Value("${kafka.consumer.session.timeout.ms}")
    private String kafka_consumer_session_timeout_ms;

    @Value("${security.protocol}")
    private String security_protocol;
    @Value("${sasl.mechanism}")
    private String sasl_mechanism;
    @Value("${sasl.jaas.config}")
    private String sasl_jaas_config;


    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(Integer.parseInt(kafka_consumer_concurrency)); // setConcurrency expects an int, so the injected String must be parsed
        factory.getContainerProperties().setPollTimeout(4000);
        return factory;
    }


    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,kafka_bootstrap_servers);
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, kafka_consumer_enable_auto_commit);
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, kafka_consumer_auto_commit_interval_ms);
        properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, kafka_consumer_session_timeout_ms);
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, kafka_consumer_group_id);

		// SASL settings: keep these only if the broker has SASL enabled; remove them otherwise, or the client cannot connect
        properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, security_protocol);
        properties.put(SaslConfigs.SASL_MECHANISM, sasl_mechanism);
        properties.put(SaslConfigs.SASL_JAAS_CONFIG,sasl_jaas_config);
        return new DefaultKafkaConsumerFactory<>(properties);
    }

    @Bean
    public KafkaListeners kafkaListeners() {
        return new KafkaListeners();
    }

}

3. Create the KafkaListeners class to consume messages

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;

import java.util.Optional;

public class KafkaListeners {
	// you can also add @KafkaListener to any other method to listen on a topic and receive its messages; very convenient
    @KafkaListener(topics = {"aaa"})
    public void listen(ConsumerRecord<?, ?> record) {
        Optional<?> kafkaMessage = Optional.ofNullable(record.value());
        if (kafkaMessage.isPresent()) {
            Object message = kafkaMessage.get();
            System.out.println("listen " + message);
        }
    }
    @KafkaListener(topics = {"bbb"})
    public void listen2(ConsumerRecord<?, ?> record) {
        Optional<?> kafkaMessage = Optional.ofNullable(record.value());
        if (kafkaMessage.isPresent()) {
            Object message = kafkaMessage.get();
            System.out.println("listen2 " + message);
        }
    }
}

4. Testing

1. Create a test endpoint:
 	@Autowired
    KafkaTemplate kafkaTemplate;

    @RequestMapping(value = "/test")
    @ResponseBody
    public Object test(HttpServletRequest request1, HttpServletResponse response1) {
        kafkaTemplate.sendDefault("111111"); // sends to the default topic set in kafka.properties
        kafkaTemplate.send("aaa","22222"); // sends to a custom topic
        System.out.println("Kafka message sent!");
        return "ok";
    }

2. The listener receives and prints the messages.

Part 2: Configuring Kafka via Spring XML resource files

This approach uses the same shared kafka.properties file created above.
Note: the following two files must be imported in the Spring configuration:

<import resource="classpath:spring-kafka-producer.xml"/>
<import resource="classpath:spring-kafka-consumer.xml"/>

1. Create the producer configuration in the resources directory: spring-kafka-producer.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!--<context:property-placeholder location="classpath:kafka/kafka.properties" />-->
    <!-- producer parameters -->
    <bean id="producerProperties" class="java.util.HashMap">
        <constructor-arg>
            <map>
                <!-- Kafka broker address(es), possibly a cluster -->
                <entry key="bootstrap.servers" value="${kafka.bootstrap.servers}"/>
                <!-- number of retries; retrying can cause the broker to receive duplicate messages -->
                <entry key="retries" value="${kafka.producer.retries}"/>
                <!-- batch size in bytes for batched sends -->
                <entry key="batch.size" value="${kafka.producer.batch.size}"/>
                <!-- defaults to 0 ms; how long the sender waits before flushing a batch (a full batch for any topic-partition also triggers a send) -->
                <entry key="linger.ms" value="${kafka.producer.linger.ms}"/>
                <!-- memory the producer may use to buffer records; if records are produced faster than they can be sent to the broker, the producer blocks or throws -->
                <entry key="buffer.memory" value="${kafka.producer.buffer.memory}"/>
                <!-- how many acknowledgements the producer requires from the server before considering a send complete -->
                <entry key="acks" value="${kafka.producer.acks}"/>
                <entry key="key.serializer" value="${kafka.producer.key.serializer}"/>
                <entry key="value.serializer" value="${kafka.producer.value.serializer}"/>

                <!-- Kafka SASL settings: keep these only if the broker has SASL enabled; remove them otherwise, or the client cannot connect -->
                <entry key="sasl.jaas.config" value="${sasl.jaas.config}"/>
                <entry key="security.protocol" value="${security.protocol}"/>
                <entry key="sasl.mechanism" value="${sasl.mechanism}"/>
            </map>
        </constructor-arg>
    </bean>

    <!-- producerFactory bean used by the KafkaTemplate -->
    <bean id="producerFactory" class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
        <constructor-arg>
            <ref bean="producerProperties"/>
        </constructor-arg>
    </bean>

    <!-- producer listener -->
    <bean id="kafkaProducerListener" class="com.test.kafka.KafkaProducerListener"/>

    <!-- KafkaTemplate bean; use its send methods to publish messages -->
    <bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
        <constructor-arg ref="producerFactory"/>
        <constructor-arg name="autoFlush" value="true"/>
        <!-- default topic -->
        <property name="defaultTopic" value="${kafka.producer.defaultTopic}"/>
        <property name="producerListener" ref="kafkaProducerListener"/>
    </bean>
</beans>

2. Create the consumer configuration in the resources directory: spring-kafka-consumer.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- 1. consumer parameters -->
    <!--<context:property-placeholder location="classpath*:kafka/kafka.properties" />-->
    <bean id="consumerProperties" class="java.util.HashMap">
        <constructor-arg>
            <map>
                <!-- Kafka broker address(es) -->
                <entry key="bootstrap.servers" value="${kafka.bootstrap.servers}" />
                <!-- consumer group ID; consumers with the same group.id belong to the same group -->
                <entry key="group.id" value="${kafka.consumer.group.id}" />
                <!-- if true, the consumer periodically commits the consumed offsets in the background; after a failure and restart, it resumes from the last committed offset -->
                <entry key="enable.auto.commit" value="${kafka.consumer.enable.auto.commit}" />
                <!-- timeout used to detect consumer failures when Kafka's group management is used -->
                <entry key="session.timeout.ms" value="${kafka.consumer.session.timeout.ms}" />
                <entry key="auto.commit.interval.ms" value="${kafka.consumer.auto.commit.interval.ms}" />
                <entry key="retry.backoff.ms" value="100" />
                <entry key="key.deserializer" value="${kafka.consumer.key.deserializer}" />
                <entry key="value.deserializer" value="${kafka.consumer.value.deserializer}" />

                <!-- Kafka SASL settings: keep these only if the broker has SASL enabled; remove them otherwise, or the client cannot connect -->
                <entry key="sasl.jaas.config" value="${sasl.jaas.config}"/>
                <entry key="security.protocol" value="${security.protocol}"/>
                <entry key="sasl.mechanism" value="${sasl.mechanism}"/>
            </map>
        </constructor-arg>
    </bean>

    <!-- 2. consumerFactory bean -->
    <bean id="consumerFactory"
          class="org.springframework.kafka.core.DefaultKafkaConsumerFactory" >
        <constructor-arg>
            <ref bean="consumerProperties" />
        </constructor-arg>
    </bean>

    <!-- 3. bean for the concrete listener class -->
    <bean id="kafkaConsumerService" class="com.test.kafka.KafkaConsumerMessageListener" />

    <!-- 4. consumer container properties -->
    <bean id="containerProperties" class="org.springframework.kafka.listener.ContainerProperties">
        <!-- topic -->
        <constructor-arg name="topics">
            <list>
                <!-- topics to listen on; several can be added, and onMessage will then receive messages from all of them -->
                <value>${kafka.alarm.topic}</value>
            </list>
        </constructor-arg>
        <property name="messageListener" ref="kafkaConsumerService" />
    </bean>
    <!-- 5. concurrent message listener container for the consumer -->
    <bean id="messageListenerContainer" class="org.springframework.kafka.listener.ConcurrentMessageListenerContainer" init-method="doStart" >
        <constructor-arg ref="consumerFactory" />
        <constructor-arg ref="containerProperties" />
        <property name="concurrency" value="${kafka.consumer.concurrency}" />
    </bean>



    <!-- If you do not want to listen on several topics in one class, duplicate steps 3-5 with new ids and receive the chosen topic in a new listener class -->
    <!-- second consumer class -->
    <!-- 3. bean for the concrete listener class -->
    <bean id="kafkaConsumerService2" class="com.test.kafka.KafkaConsumerListenser" />

    <!-- 4. consumer container properties -->
    <bean id="containerProperties2" class="org.springframework.kafka.listener.ContainerProperties">
        <!-- topic -->
        <constructor-arg name="topics">
            <list>
                <value>aaa</value>
            </list>
        </constructor-arg>
        <property name="messageListener" ref="kafkaConsumerService2" />
    </bean>
    <!-- 5. concurrent message listener container; doStart() runs on init, so the container begins consuming as soon as the bean is created -->
    <bean id="messageListenerContainer2" class="org.springframework.kafka.listener.ConcurrentMessageListenerContainer" init-method="doStart" >
        <constructor-arg ref="consumerFactory" />
        <constructor-arg ref="containerProperties2" />
        <property name="concurrency" value="${kafka.consumer.concurrency}" />
    </bean>

</beans>

3. Sending messages

Used in exactly the same way as with the Java-code configuration:

    @Autowired
    KafkaTemplate kafkaTemplate;

    @RequestMapping(value = "/test")
    @ResponseBody
    public Object test(HttpServletRequest request1, HttpServletResponse response1) {
        kafkaTemplate.sendDefault("111111");
        kafkaTemplate.send("aaa","22222");
        System.out.println("Kafka message sent!");
        return "ok";
    }

4. Consuming messages: implement the onMessage method of the MessageListener interface

This receives messages from the topic(s) specified in the XML configuration:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.MessageListener;


public class KafkaConsumerMessageListener implements MessageListener<String, Object> {

    @Override
    public void onMessage(ConsumerRecord<String, Object> record) {
            System.out.println("Kafka received a message: " + record.toString());
    }
}
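When a single container listens on several topics (as the topic list in `containerProperties` allows), `onMessage` has to branch on `record.topic()`. The routing itself can be sketched with plain strings; the topic names below are just the ones used in this article:

```java
public class TopicRouter {
    // Mirrors the branching an onMessage implementation would do on
    // record.topic() / record.value() when one listener covers several topics.
    static String route(String topic, String value) {
        switch (topic) {
            case "topone": return "default-topic handler got: " + value;
            case "aaa":    return "aaa handler got: " + value;
            default:       return "unhandled topic " + topic + ": " + value;
        }
    }

    public static void main(String[] args) {
        System.out.println(route("topone", "111111"));
        System.out.println(route("aaa", "22222"));
    }
}
```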
