Setting Up Kafka and Testing Its Integration with Spring

【Preface】Kafka is a distributed message queue implementation built on the publish/subscribe model. Producers publish messages to topics stored on brokers, and consumers subscribe to whichever topics interest them. Each consumer keeps track of its own offset; when it fetches from Kafka, it only pulls messages after that offset.
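To make the offset/subscribe model concrete, here is a minimal stand-alone sketch using the plain 0.10 Java consumer (no Spring). The topic name testTopic and the localhost:9092 broker address match the setup used later in this article; the class name RawConsumerDemo and group id "demo" are our own illustrative choices:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class RawConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Collections.singletonList("testTopic"));
        while (true) {
            // poll() returns only records beyond the group's current offset
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}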

Advantages of Kafka over other MQs (ActiveMQ, RabbitMQ):

1. High performance and throughput. A Kafka cluster scales transparently; new servers can simply be added to the cluster. Kafka's throughput far exceeds that of traditional MQ implementations such as ActiveMQ and RabbitMQ, not least because Kafka supports batch operations.

2. Fault tolerance. Kafka replicates each partition's data across several servers; when a broker fails, the ZooKeeper service notifies producers and consumers so they can switch to another broker node.

【Environment Setup】On Linux (CentOS), install zookeeper-3.3.6 and start the service from its bin directory:

 ./zkServer.sh start

Then install kafka_2.10-0.10.0.1 and start the broker from its bin directory. (Note: the Kafka client jar version must match the server version; e.g. a kafka-clients 0.10 client matches the 0.10.0.1 server installed here.)

./kafka-server-start.sh ../config/server.properties
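
The demo publishes to a topic named testTopic. With default broker settings the topic is auto-created on first use; if auto.create.topics.enable has been turned off, create it manually first (again from the bin directory):

./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic testTopic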
【Program Test】Project layout:


pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.kafka</groupId>
    <artifactId>kafkaDemo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <spring.version>4.3.3.RELEASE</spring.version>
        <slf4j.version>1.7.5</slf4j.version>
    </properties>
    <dependencies>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-beans</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-tx</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>${slf4j.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <version>1.1.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-kafka</artifactId>
            <version>2.1.0.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.15</version>
            <exclusions>
                <exclusion>
                    <artifactId>jmxtools</artifactId>
                    <groupId>com.sun.jdmk</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>jmxri</artifactId>
                    <groupId>com.sun.jmx</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>jms</artifactId>
                    <groupId>javax.jms</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>mail</artifactId>
                    <groupId>javax.mail</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>${spring.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>com.101tec</groupId>
            <artifactId>zkclient</artifactId>
            <version>0.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
            <version>3.3.2</version>
            <exclusions>
                <exclusion>
                    <artifactId>jmxri</artifactId>
                    <groupId>com.sun.jmx</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>jmxtools</artifactId>
                    <groupId>com.sun.jdmk</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context-support</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.12</version>
        </dependency>
    </dependencies>
    <build>
        <resources>
            <resource>
                <directory>src/main/resources</directory>
                <excludes>
                    <exclude>*.xml</exclude>
                    <exclude>spring/*.xml</exclude>
                    <exclude>conf*/*</exclude>
                </excludes>
            </resource>
        </resources>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration><!-- path to the assembly descriptor -->
                    <descriptor>src/main/assembly/assembly.xml</descriptor>
                    <appendAssemblyId>false</appendAssemblyId>
                    <finalName>kafkaDemo</finalName>
                </configuration>
                <executions><!-- bound to the package phase; the single goal runs the assembly once -->
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

ProducerService, the sending service:

package Producer;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;

public class ProducerService {
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    protected Logger logger = LoggerFactory.getLogger(this.getClass());

    public void sendMessage(String msg) {
        logger.info("---------------- entering sendMessage ------------");
        // sendDefault() publishes to the template's defaultTopic (testTopic, see kafka-producer.xml)
        kafkaTemplate.sendDefault(msg);
        logger.info("---------------- sendMessage done ------------");
    }

    public KafkaTemplate<String, String> getKafkaTemplate() {
        return kafkaTemplate;
    }

    public void setKafkaTemplate(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }
}
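
sendDefault() is fire-and-forget here. KafkaTemplate returns a ListenableFuture, so if you want delivery feedback per record, a variant along these lines works (the method sendMessageWithCallback is our own addition, not part of the original project):

import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFutureCallback;

public void sendMessageWithCallback(String msg) {
    kafkaTemplate.sendDefault(msg).addCallback(
            new ListenableFutureCallback<SendResult<String, String>>() {
                @Override
                public void onSuccess(SendResult<String, String> result) {
                    // partition/offset the broker assigned to this record
                    logger.info("sent to partition {} at offset {}",
                            result.getRecordMetadata().partition(),
                            result.getRecordMetadata().offset());
                }

                @Override
                public void onFailure(Throwable ex) {
                    logger.error("send failed", ex);
                }
            });
}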
MsgProducer, the bean the test actually drives:

package Producer;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

// Registered by <context:component-scan base-package="Producer"/> as bean "msgProducer";
// without a stereotype annotation, getBean("msgProducer") in the test class would fail.
@Service
public class MsgProducer {
    Logger logger = LoggerFactory.getLogger(this.getClass());
    @Autowired
    private ProducerService producerService;

    public void sendMsg(String msg) {
        logger.info("---------- producer sending a message -----------");
        logger.info("---------- message body: " + msg);
        producerService.sendMessage(msg);
    }

    public ProducerService getProducerService() {
        return producerService;
    }

    public void setProducerService(ProducerService producerService) {
        this.producerService = producerService;
    }
}

The POJO (pojo.Person):

package pojo;

import java.io.Serializable;

public class Person implements Serializable {

    public String name;
    public int age;

    public Person(){}

    public Person(String name,int age){
        this.name = name;
        this.age = age;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    @Override
    public String toString() {
        return "Person{" +
                "name='" + name + '\'' +
                ", age=" + age +
                '}';
    }
}
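
The producer serializes Person to JSON with fastjson and the consumer parses it back. A quick stand-alone round trip, using the same SerializerFeature flags as the test class below (the class name JsonRoundTrip is our own):

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.serializer.SerializerFeature;
import pojo.Person;

public class JsonRoundTrip {
    public static void main(String[] args) {
        String json = JSON.toJSONString(new Person("dh", 1),
                SerializerFeature.BrowserCompatible, SerializerFeature.WriteClassName);
        // WriteClassName embeds "@type":"pojo.Person", which is exactly what shows up in the logs
        System.out.println(json);
        Person back = JSON.parseObject(json, Person.class);
        System.out.println(back);   // Person{name='dh', age=1}
    }
}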
The consumer (Consumer.ConSumerService):

package Consumer;

import com.alibaba.fastjson.JSON;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.listener.MessageListener;
import pojo.Person;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConSumerService implements MessageListener<String, String> {

    // hand records off to a small pool so the listener thread is not blocked by processing
    ExecutorService executorService = Executors.newFixedThreadPool(2);

    Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public void onMessage(final ConsumerRecord<String, String> record) {
        if (record == null) {
            logger.info("record is null");
            return;
        }
        executorService.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    logger.info("received a message " + record.toString());
                    Person person = JSON.parseObject(record.value(), Person.class);
                    System.out.println(person);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
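
With enable.auto.commit=true (see kafka-service.xml below), offsets are committed on a timer, so a failure inside the executor can silently skip records. If you need at-least-once handling, spring-kafka also offers AcknowledgingMessageListener. A hedged sketch: it additionally requires enable.auto.commit=false and containerProperties.setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL), neither of which the demo above configures:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.AcknowledgingMessageListener;
import org.springframework.kafka.support.Acknowledgment;

public class AckingConsumerService implements AcknowledgingMessageListener<String, String> {
    @Override
    public void onMessage(ConsumerRecord<String, String> record, Acknowledgment ack) {
        // process the record first ...
        // ... then commit its offset explicitly
        ack.acknowledge();
    }
}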
Spring: applicationContext.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:util="http://www.springframework.org/schema/util" xmlns:context="http://www.springframework.org/schema/context"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
   http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.2.xsd">
  <!-- ========================= GENERAL DEFINITIONS ========================= -->

  <context:annotation-config />
  <context:component-scan base-package="Producer"/>
  <context:component-scan base-package="Consumer"/>
  <context:property-placeholder location="classpath:conf/*.properties"/>
  <import resource="classpath:spring/kafka-*.xml" />
</beans>

kafka-producer.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
       xmlns:int="http://www.springframework.org/schema/integration"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/integration
       http://www.springframework.org/schema/integration/spring-integration.xsd
       http://www.springframework.org/schema/integration/kafka
       http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
       ">

    <bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
        <constructor-arg>
            <bean class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
                <constructor-arg>
                    <map>
                        <entry key="bootstrap.servers" value="${kafka.producer.bootstrap.servers}"/>
                        <entry key="producer.type" value="${producer.type}"/>
                        <entry key="group.id" value="${kafka.producer.group.id}"/>
                        <entry key="key.serializer" value="org.apache.kafka.common.serialization.StringSerializer"/>
                        <entry key="value.serializer" value="org.apache.kafka.common.serialization.StringSerializer"/>
                    </map>
                </constructor-arg>
            </bean>
        </constructor-arg>
        <constructor-arg name="autoFlush" value="true"/>
        <property name="defaultTopic" value="testTopic"/>
    </bean>

    <bean id="kafkaProducerService" class="Producer.ProducerService"/>

</beans>

kafka-consumer.xml: identical to kafka-producer.xml above.

kafka-service.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/integration
       http://www.springframework.org/schema/integration/spring-integration.xsd
       http://www.springframework.org/schema/integration/kafka
       http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
       ">
    <!-- consumer settings -->
    <bean id="consumerProperties" class="java.util.HashMap">
        <constructor-arg>
            <map>
                <entry key="bootstrap.servers" value="${kafka.consumer.bootstrap.servers}"/>
                <entry key="group.id" value="${kafka.consumer.group.id}"/>
                <entry key="key.deserializer" value="${kafka.consumer.key.deserializer}"/>
                <entry key="value.deserializer" value="${kafka.consumer.value.deserializer}"/>
                <entry key="enable.auto.commit" value="true"/>
                <entry key="auto.commit.interval.ms" value="60000"/>
            </map>
        </constructor-arg>
    </bean>

    <!-- the consumerFactory bean -->
    <bean id="consumerFactory" class="org.springframework.kafka.core.DefaultKafkaConsumerFactory">
        <constructor-arg>
            <ref bean="consumerProperties"/>
        </constructor-arg>
    </bean>

    <!-- the class that actually consumes messages -->
    <bean id="messageListenerConsumerService" class="Consumer.ConSumerService"/>

    <!-- listener container configuration -->
    <bean id="containerProperties" class="org.springframework.kafka.listener.config.ContainerProperties">
        <constructor-arg value="testTopic"/>
        <property name="messageListener" ref="messageListenerConsumerService"/>
    </bean>

    <bean id="messageListenerContainer" class="org.springframework.kafka.listener.KafkaMessageListenerContainer"
          init-method="doStart">
        <constructor-arg ref="consumerFactory"/>
        <constructor-arg ref="containerProperties"/>
    </bean>
</beans>
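
For readers who prefer Java wiring over XML, the consumer side above can be sketched roughly as follows (values hard-coded for brevity; the XML remains the source of truth for this demo, and the class name ConsumerWiring is our own):

import java.util.HashMap;
import java.util.Map;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.KafkaMessageListenerContainer;
import org.springframework.kafka.listener.config.ContainerProperties;

public class ConsumerWiring {
    public static KafkaMessageListenerContainer<String, String> build() {
        Map<String, Object> props = new HashMap<String, Object>();
        props.put("bootstrap.servers", "127.0.0.1:9092");
        props.put("group.id", "0");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "60000");

        ContainerProperties containerProps = new ContainerProperties("testTopic");
        containerProps.setMessageListener(new Consumer.ConSumerService());

        KafkaMessageListenerContainer<String, String> container =
                new KafkaMessageListenerContainer<String, String>(
                        new DefaultKafkaConsumerFactory<String, String>(props), containerProps);
        // start() is the public Lifecycle equivalent of the init-method="doStart" trick in the XML
        container.start();
        return container;
    }
}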


Property files:

consumer.properties:

zookeeper.connect=127.0.0.1:2181
kafka.consumer.bootstrap.servers=127.0.0.1:9092
##,127.0.0.1:2182,127.0.0.1:2183
# timeout in ms for connecting to zookeeper
zookeeper.connectiontimeout.ms=1000000
kafka.consumer.group.id=0
kafka.consumer.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
kafka.consumer.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
auto.commit.enable=true
auto.commit.interval.ms=60000

log4j.properties:

log4j.rootLogger=INFO,stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n

producer.properties:

## The broker list may be a subset of the Kafka servers, since the producer only needs a broker to fetch metadata from;
## although every broker can serve metadata, it is still recommended to list all of them.
kafka.producer.bootstrap.servers=127.0.0.1:9092
##,127.0.0.1:9093
## async send
producer.type=async
compression.codec=0
kafka.producer.group.id=0
cronExpression=0 0 1 * * ?
## only takes effect when producer.type=async
#batch.num.messages=100

Test class KafkaTest:

import Producer.MsgProducer;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.serializer.SerializerFeature;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import pojo.Person;
public class KafkaTest {

    public static void main(String[] args) {
        ClassPathXmlApplicationContext applicationContext =
                new ClassPathXmlApplicationContext("classpath:spring/applicationContext.xml");
        // the context is deliberately left open so the listener container keeps consuming
        MsgProducer msgProducer = applicationContext.getBean("msgProducer", MsgProducer.class);
        for (int i = 0; i < 10; i++) {
            Person person = new Person("dh", i + 1);
            // WriteClassName embeds "@type":"pojo.Person" in the JSON payload
            msgProducer.sendMsg(JSON.toJSONString(person,
                    SerializerFeature.BrowserCompatible, SerializerFeature.WriteClassName));
        }
    }
}
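
Independently of the Java consumer, you can watch the topic with the console consumer that ships with Kafka 0.10 (run from the broker's bin directory) to confirm the messages really arrive:

./kafka-console-consumer.sh --zookeeper localhost:2181 --topic testTopic --from-beginning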


【Test Output】

2016-12-14 10:07:27,809 INFO [org.springframework.context.support.ClassPathXmlApplicationContext] - Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@37ada1e0: startup date [Wed Dec 14 10:07:27 CST 2016]; root of context hierarchy
2016-12-14 10:07:27,903 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - Loading XML bean definitions from class path resource [applicationContext.xml]
2016-12-14 10:07:28,750 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - Loading XML bean definitions from file [/home/hd/kafkaDemo/test/kafka-consumer.xml]
2016-12-14 10:07:28,909 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - Loading XML bean definitions from file [/home/hd/kafkaDemo/test/kafka-producer.xml]
2016-12-14 10:07:29,016 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - Loading XML bean definitions from file [/home/hd/kafkaDemo/test/kafka-service.xml]
2016-12-14 10:07:29,244 INFO [org.springframework.context.support.PropertySourcesPlaceholderConfigurer] - Loading properties file from file [/home/hd/kafkaDemo/test/consumer.properties]
2016-12-14 10:07:29,244 INFO [org.springframework.context.support.PropertySourcesPlaceholderConfigurer] - Loading properties file from file [/home/hd/kafkaDemo/test/log4j.properties]
2016-12-14 10:07:29,245 INFO [org.springframework.context.support.PropertySourcesPlaceholderConfigurer] - Loading properties file from file [/home/hd/kafkaDemo/test/producer.properties]
2016-12-14 10:07:29,701 INFO [org.apache.kafka.clients.consumer.ConsumerConfig] - ConsumerConfig values: 
	interceptor.classes = null
	request.timeout.ms = 40000
	check.crcs = true
	ssl.truststore.password = null
	retry.backoff.ms = 100
	ssl.keymanager.algorithm = SunX509
	receive.buffer.bytes = 65536
	ssl.key.password = null
	ssl.cipher.suites = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.service.name = null
	ssl.provider = null
	session.timeout.ms = 30000
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	max.poll.records = 2147483647
	bootstrap.servers = [localhost:9092]
	client.id = 
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	auto.offset.reset = latest
	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	max.partition.fetch.bytes = 1048576
	partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
	ssl.endpoint.identification.algorithm = null
	ssl.keystore.location = null
	ssl.truststore.location = null
	exclude.internal.topics = true
	ssl.keystore.password = null
	metrics.sample.window.ms = 30000
	security.protocol = PLAINTEXT
	metadata.max.age.ms = 300000
	auto.commit.interval.ms = 60000
	ssl.protocol = TLS
	sasl.kerberos.min.time.before.relogin = 60000
	connections.max.idle.ms = 540000
	ssl.trustmanager.algorithm = PKIX
	group.id = 0
	enable.auto.commit = true
	metric.reporters = []
	ssl.truststore.type = JKS
	send.buffer.bytes = 131072
	reconnect.backoff.ms = 50
	metrics.num.samples = 2
	ssl.keystore.type = JKS
	heartbeat.interval.ms = 3000

2016-12-14 10:07:30,112 INFO [org.apache.kafka.clients.consumer.ConsumerConfig] - ConsumerConfig values: 
	interceptor.classes = null
	request.timeout.ms = 40000
	check.crcs = true
	ssl.truststore.password = null
	retry.backoff.ms = 100
	ssl.keymanager.algorithm = SunX509
	receive.buffer.bytes = 65536
	ssl.key.password = null
	ssl.cipher.suites = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.service.name = null
	ssl.provider = null
	session.timeout.ms = 30000
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	max.poll.records = 2147483647
	bootstrap.servers = [localhost:9092]
	client.id = consumer-1
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	auto.offset.reset = latest
	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	max.partition.fetch.bytes = 1048576
	partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
	ssl.endpoint.identification.algorithm = null
	ssl.keystore.location = null
	ssl.truststore.location = null
	exclude.internal.topics = true
	ssl.keystore.password = null
	metrics.sample.window.ms = 30000
	security.protocol = PLAINTEXT
	metadata.max.age.ms = 300000
	auto.commit.interval.ms = 60000
	ssl.protocol = TLS
	sasl.kerberos.min.time.before.relogin = 60000
	connections.max.idle.ms = 540000
	ssl.trustmanager.algorithm = PKIX
	group.id = 0
	enable.auto.commit = true
	metric.reporters = []
	ssl.truststore.type = JKS
	send.buffer.bytes = 131072
	reconnect.backoff.ms = 50
	metrics.num.samples = 2
	ssl.keystore.type = JKS
	heartbeat.interval.ms = 3000

2016-12-14 10:07:30,150 INFO [org.apache.kafka.common.utils.AppInfoParser] - Kafka version : 0.10.0.1
2016-12-14 10:07:30,150 INFO [org.apache.kafka.common.utils.AppInfoParser] - Kafka commitId : a7a17cdec9eaa6c5
2016-12-14 10:07:30,405 INFO [org.springframework.context.support.DefaultLifecycleProcessor] - Starting beans in phase 0
2016-12-14 10:07:30,628 INFO [Producer.MsgProducer] - ---------- producer sending a message -----------
2016-12-14 10:07:30,628 INFO [Producer.MsgProducer] - ---------- message body: {"@type":"pojo.Person","age":1,"name":"dh"}
2016-12-14 10:07:30,628 INFO [Producer.ProducerService] - ---------------- entering sendMessage ------------
2016-12-14 10:07:30,634 INFO [org.apache.kafka.clients.producer.ProducerConfig] - ProducerConfig values: 
	interceptor.classes = null
	request.timeout.ms = 30000
	ssl.truststore.password = null
	retry.backoff.ms = 100
	buffer.memory = 33554432
	batch.size = 16384
	ssl.keymanager.algorithm = SunX509
	receive.buffer.bytes = 32768
	ssl.key.password = null
	ssl.cipher.suites = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.service.name = null
	ssl.provider = null
	max.in.flight.requests.per.connection = 5
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	bootstrap.servers = [localhost:9092]
	client.id = 
	max.request.size = 1048576
	acks = 1
	linger.ms = 0
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	metadata.fetch.timeout.ms = 60000
	ssl.endpoint.identification.algorithm = null
	ssl.keystore.location = null
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
	ssl.truststore.location = null
	ssl.keystore.password = null
	block.on.buffer.full = false
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	metrics.sample.window.ms = 30000
	security.protocol = PLAINTEXT
	metadata.max.age.ms = 300000
	ssl.protocol = TLS
	sasl.kerberos.min.time.before.relogin = 60000
	timeout.ms = 30000
	connections.max.idle.ms = 540000
	ssl.trustmanager.algorithm = PKIX
	metric.reporters = []
	ssl.truststore.type = JKS
	compression.type = none
	retries = 0
	max.block.ms = 60000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	send.buffer.bytes = 131072
	reconnect.backoff.ms = 50
	metrics.num.samples = 2
	ssl.keystore.type = JKS

2016-12-14 10:07:30,692 INFO [org.apache.kafka.clients.producer.ProducerConfig] - ProducerConfig values: 
	interceptor.classes = null
	request.timeout.ms = 30000
	ssl.truststore.password = null
	retry.backoff.ms = 100
	buffer.memory = 33554432
	batch.size = 16384
	ssl.keymanager.algorithm = SunX509
	receive.buffer.bytes = 32768
	ssl.key.password = null
	ssl.cipher.suites = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.service.name = null
	ssl.provider = null
	max.in.flight.requests.per.connection = 5
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	bootstrap.servers = [localhost:9092]
	client.id = producer-1
	max.request.size = 1048576
	acks = 1
	linger.ms = 0
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	metadata.fetch.timeout.ms = 60000
	ssl.endpoint.identification.algorithm = null
	ssl.keystore.location = null
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
	ssl.truststore.location = null
	ssl.keystore.password = null
	block.on.buffer.full = false
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	metrics.sample.window.ms = 30000
	security.protocol = PLAINTEXT
	metadata.max.age.ms = 300000
	ssl.protocol = TLS
	sasl.kerberos.min.time.before.relogin = 60000
	timeout.ms = 30000
	connections.max.idle.ms = 540000
	ssl.trustmanager.algorithm = PKIX
	metric.reporters = []
	ssl.truststore.type = JKS
	compression.type = none
	retries = 0
	max.block.ms = 60000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	send.buffer.bytes = 131072
	reconnect.backoff.ms = 50
	metrics.num.samples = 2
	ssl.keystore.type = JKS

2016-12-14 10:07:30,695 WARN [org.apache.kafka.clients.producer.ProducerConfig] - The configuration group.id = 0 was supplied but isn't a known config.
2016-12-14 10:07:30,695 WARN [org.apache.kafka.clients.producer.ProducerConfig] - The configuration producer.type = async was supplied but isn't a known config.
2016-12-14 10:07:30,695 INFO [org.apache.kafka.common.utils.AppInfoParser] - Kafka version : 0.10.0.1
2016-12-14 10:07:30,695 INFO [org.apache.kafka.common.utils.AppInfoParser] - Kafka commitId : a7a17cdec9eaa6c5
2016-12-14 10:07:30,873 INFO [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] - Discovered coordinator localhost:9092 (id: 2147483647 rack: null) for group 0.
2016-12-14 10:07:30,873 INFO [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] - Revoking previously assigned partitions [] for group 0
2016-12-14 10:07:30,873 INFO [org.springframework.kafka.listener.KafkaMessageListenerContainer] - partitions revoked:[]
2016-12-14 10:07:30,873 INFO [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] - (Re-)joining group 0
2016-12-14 10:07:31,299 INFO [Producer.ProducerService] - ---------------- sendMessage done ------------
2016-12-14 10:07:31,299 INFO [Producer.MsgProducer] - ---------- producer sending a message -----------
2016-12-14 10:07:31,299 INFO [Producer.MsgProducer] - ---------- message body: {"@type":"pojo.Person","age":2,"name":"dh"}
2016-12-14 10:07:31,299 INFO [Producer.ProducerService] - ---------------- entering sendMessage ------------
2016-12-14 10:07:31,310 INFO [Producer.ProducerService] - ---------------- sendMessage done ------------
2016-12-14 10:07:31,310 INFO [Producer.MsgProducer] - ---------- producer sending a message -----------
2016-12-14 10:07:31,310 INFO [Producer.MsgProducer] - ---------- message body: {"@type":"pojo.Person","age":3,"name":"dh"}
2016-12-14 10:07:31,310 INFO [Producer.ProducerService] - ---------------- entering sendMessage ------------
2016-12-14 10:07:31,317 INFO [Producer.ProducerService] - ---------------- sendMessage done ------------
2016-12-14 10:07:31,317 INFO [Producer.MsgProducer] - ---------- producer sending a message -----------
2016-12-14 10:07:31,317 INFO [Producer.MsgProducer] - ---------- message body: {"@type":"pojo.Person","age":4,"name":"dh"}
2016-12-14 10:07:31,317 INFO [Producer.ProducerService] - ---------------- entering sendMessage ------------
2016-12-14 10:07:31,324 INFO [Producer.ProducerService] - ---------------- sendMessage done ------------
2016-12-14 10:07:31,324 INFO [Producer.MsgProducer] - ---------- producer sending a message -----------
2016-12-14 10:07:31,324 INFO [Producer.MsgProducer] - ---------- message body: {"@type":"pojo.Person","age":5,"name":"dh"}
2016-12-14 10:07:31,324 INFO [Producer.ProducerService] - ---------------- entering sendMessage ------------
2016-12-14 10:07:31,332 INFO [Producer.ProducerService] - ---------------- sendMessage done ------------
2016-12-14 10:07:31,332 INFO [Producer.MsgProducer] - ---------- producer sending a message -----------
2016-12-14 10:07:31,332 INFO [Producer.MsgProducer] - ---------- message body: {"@type":"pojo.Person","age":6,"name":"dh"}
2016-12-14 10:07:31,332 INFO [Producer.ProducerService] - ---------------- entering sendMessage ------------
2016-12-14 10:07:31,348 INFO [Producer.ProducerService] - ---------------- sendMessage done ------------
2016-12-14 10:07:31,348 INFO [Producer.MsgProducer] - ---------- producer sending a message -----------
2016-12-14 10:07:31,349 INFO [Producer.MsgProducer] - ---------- message body: {"@type":"pojo.Person","age":7,"name":"dh"}
2016-12-14 10:07:31,349 INFO [Producer.ProducerService] - ---------------- entering sendMessage ------------
2016-12-14 10:07:31,360 INFO [Producer.ProducerService] - ---------------- sendMessage done ------------
2016-12-14 10:07:31,360 INFO [Producer.MsgProducer] - ---------- producer sending a message -----------
2016-12-14 10:07:31,360 INFO [Producer.MsgProducer] - ---------- message body: {"@type":"pojo.Person","age":8,"name":"dh"}
2016-12-14 10:07:31,360 INFO [Producer.ProducerService] - ---------------- entering sendMessage ------------
2016-12-14 10:07:31,363 INFO [Producer.ProducerService] - ---------------- sendMessage done ------------
2016-12-14 10:07:31,363 INFO [Producer.MsgProducer] - ---------- producer sending a message -----------
2016-12-14 10:07:31,363 INFO [Producer.MsgProducer] - ---------- message body: {"@type":"pojo.Person","age":9,"name":"dh"}
2016-12-14 10:07:31,363 INFO [Producer.ProducerService] - ---------------- entering sendMessage ------------
2016-12-14 10:07:31,373 INFO [Producer.ProducerService] - ---------------- sendMessage done ------------
2016-12-14 10:07:31,373 INFO [Producer.MsgProducer] - ---------- producer sending a message -----------
2016-12-14 10:07:31,373 INFO [Producer.MsgProducer] - ---------- message body: {"@type":"pojo.Person","age":10,"name":"dh"}
2016-12-14 10:07:31,373 INFO [Producer.ProducerService] - ---------------- entering sendMessage ------------
2016-12-14 10:07:31,377 INFO [Producer.ProducerService] - ---------------- sendMessage done ------------
2016-12-14 10:07:32,049 INFO [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] - Successfully joined group 0 with generation 1
2016-12-14 10:07:32,051 INFO [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] - Setting newly assigned partitions [testTopic-0] for group 0
2016-12-14 10:07:32,051 INFO [org.springframework.kafka.listener.KafkaMessageListenerContainer] - partitions assigned:[testTopic-0]
2016-12-14 10:07:32,501 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 11, CreateTime = 1481681250787, checksum = 1610398819, serialized key size = -1, serialized value size = 43, key = null, value = {"@type":"pojo.Person","age":1,"name":"dh"})
2016-12-14 10:07:32,516 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 12, CreateTime = 1481681251299, checksum = 3727489338, serialized key size = -1, serialized value size = 43, key = null, value = {"@type":"pojo.Person","age":2,"name":"dh"})
Person{name='dh', age=1}
2016-12-14 10:07:32,524 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 13, CreateTime = 1481681251310, checksum = 2921922270, serialized key size = -1, serialized value size = 43, key = null, value = {"@type":"pojo.Person","age":3,"name":"dh"})
Person{name='dh', age=3}
2016-12-14 10:07:32,524 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 14, CreateTime = 1481681251317, checksum = 2739889014, serialized key size = -1, serialized value size = 43, key = null, value = {"@type":"pojo.Person","age":4,"name":"dh"})
Person{name='dh', age=4}
2016-12-14 10:07:32,524 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 15, CreateTime = 1481681251324, checksum = 2936640231, serialized key size = -1, serialized value size = 43, key = null, value = {"@type":"pojo.Person","age":5,"name":"dh"})
Person{name='dh', age=5}
2016-12-14 10:07:32,524 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 16, CreateTime = 1481681251333, checksum = 3197654111, serialized key size = -1, serialized value size = 43, key = null, value = {"@type":"pojo.Person","age":6,"name":"dh"})
Person{name='dh', age=6}
2016-12-14 10:07:32,524 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 17, CreateTime = 1481681251349, checksum = 166554300, serialized key size = -1, serialized value size = 43, key = null, value = {"@type":"pojo.Person","age":7,"name":"dh"})
Person{name='dh', age=7}
2016-12-14 10:07:32,525 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 18, CreateTime = 1481681251360, checksum = 880325773, serialized key size = -1, serialized value size = 43, key = null, value = {"@type":"pojo.Person","age":8,"name":"dh"})
Person{name='dh', age=8}
2016-12-14 10:07:32,525 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 19, CreateTime = 1481681251363, checksum = 321016300, serialized key size = -1, serialized value size = 43, key = null, value = {"@type":"pojo.Person","age":9,"name":"dh"})
Person{name='dh', age=9}
2016-12-14 10:07:32,525 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 20, CreateTime = 1481681251373, checksum = 795242757, serialized key size = -1, serialized value size = 44, key = null, value = {"@type":"pojo.Person","age":10,"name":"dh"})
Person{name='dh', age=10}
Person{name='dh', age=2}

【Summary】Under a sudden surge in traffic, an application must keep functioning, yet such bursts are rare; provisioning enough standing capacity to handle peak load at all times would be an enormous waste. A message queue lets the application absorb bursts of requests instead of collapsing under the sudden overload: requests generated under high concurrency are placed on the queue, flattening the concurrency peak and improving the system's overall throughput.

