Distributed Message Queue Kafka Series — Core APIs and Examples

Reposted from: http://www.inter12.org/archives/834

I. THE PRODUCER API

1. Creating a Producer, which depends on a ProducerConfig:
public Producer(ProducerConfig config);

2. Sending a single message or a batch of messages:
public void send(KeyedMessage<K,V> message);
public void send(List<KeyedMessage<K,V>> messages);

3. Closing the Producer's connections to all brokers:
public void close();
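The batch overload of send() takes a List of KeyedMessage objects. A minimal sketch (the broker address and the topic name "test-topic" are illustrative assumptions; this needs a running 0.8 broker and the kafka_2.10 jar on the classpath):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class BatchSendSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

        // Build a batch and send it in a single call to send(List<KeyedMessage>)
        List<KeyedMessage<String, String>> batch =
                new ArrayList<KeyedMessage<String, String>>();
        for (int i = 0; i < 10; i++) {
            batch.add(new KeyedMessage<String, String>("test-topic", "message-" + i));
        }
        producer.send(batch);

        producer.close();
    }
}
```

Sending a batch in one call amortizes the request overhead compared to looping over the single-message send().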

II. THE CONSUMER HIGH-LEVEL API

The main classes are Consumer and ConsumerConnector; Consumer here is the static factory for ConsumerConnector:
class Consumer {
public static kafka.javaapi.consumer.ConsumerConnector createJavaConsumerConnector(ConsumerConfig config);
}

The actual message consumption happens through the ConsumerConnector.

Create message streams for the given topics, using the specified Decoders:
public <K,V> Map<String, List<KafkaStream<K,V>>>
createMessageStreams(Map<String, Integer> topicCountMap, Decoder<K> keyDecoder, Decoder<V> valueDecoder);

Create message streams for the given topics, using the default Decoder:
public Map<String, List<KafkaStream<byte[], byte[]>>> createMessageStreams(Map<String, Integer> topicCountMap);

Create streams for topics matching a filter, using the specified Decoders:
public <K,V> List<KafkaStream<K,V>>
createMessageStreamsByFilter(TopicFilter topicFilter, int numStreams, Decoder<K> keyDecoder, Decoder<V> valueDecoder);

Create streams for topics matching a filter, using the default Decoder:
public List<KafkaStream<byte[], byte[]>> createMessageStreamsByFilter(TopicFilter topicFilter);

Commit the offsets for all topics this consumer is connected to:
public void commitOffsets();

Shut down the consumer:
public void shutdown();

The most commonly used methods of the high-level API are public List<KafkaStream<byte[], byte[]>> createMessageStreamsByFilter(TopicFilter topicFilter) and public void commitOffsets().
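The createMessageStreams(topicCountMap) variant is also worth illustrating, since the hands-on example later uses only the filter-based method. A minimal sketch (ZooKeeper address, group id, and the topic name "test-topic" are illustrative assumptions; a live 0.8 cluster is required):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class TopicCountMapSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "127.0.0.1:2181");
        props.put("group.id", "test-group");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // topic -> number of streams (consumer threads) to create for it
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("test-topic", 2);

        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(topicCountMap);

        // Each KafkaStream in streams.get("test-topic") can be drained in its own thread.
        System.out.println("streams: " + streams.get("test-topic").size());
        connector.shutdown();
    }
}
```

The integer in the map controls how many streams Kafka creates per topic, which is how you fan consumption out across threads.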

III. THE CONSUMER SIMPLE API: SIMPLECONSUMER

Fetch messages in batch:
public FetchResponse fetch(kafka.javaapi.FetchRequest request);

Fetch a topic's metadata:
public kafka.javaapi.TopicMetadataResponse send(kafka.javaapi.TopicMetadataRequest request);

Get the currently available offsets:
public kafka.javaapi.OffsetResponse getOffsetsBefore(kafka.javaapi.OffsetRequest request);

Close the connection:
public void close();

For most applications the high-level API is sufficient, but when you need finer-grained control you can use the simple API. For example, when a consumer restarts and wants to pick up the latest offset, SimpleConsumer is the right tool.
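That restart scenario can be sketched with getOffsetsBefore(). The broker host/port, topic, and partition below are illustrative assumptions; a live 0.8 broker is required:

```java
import java.util.HashMap;
import java.util.Map;

import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class LatestOffsetSketch {
    public static void main(String[] args) {
        // host, port, socket timeout, buffer size, client id
        SimpleConsumer consumer =
                new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "offset-lookup");

        TopicAndPartition tp = new TopicAndPartition("test-topic", 0);
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
                new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
        // LatestTime() means "the next offset to be written"; ask for 1 result.
        requestInfo.put(tp,
                new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 1));

        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
                requestInfo, kafka.api.OffsetRequest.CurrentVersion(), "offset-lookup");

        OffsetResponse response = consumer.getOffsetsBefore(request);
        long latest = response.offsets("test-topic", 0)[0];
        System.out.println("latest offset: " + latest);

        consumer.close();
    }
}
```

Passing EarliestTime() instead of LatestTime() would return the oldest offset still retained on the broker.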

IV. THE KAFKA HADOOP CONSUMER API

This provides a horizontally scalable solution for consuming Kafka data into Hadoop; see:

https://github.com/linkedin/camus/tree/camus-kafka-0.8/

V. HANDS-ON EXAMPLE

Maven dependency:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.0</version>
</dependency>

Producer code:

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

import java.util.Properties;

/**
 * <pre>
 * Created by zhaoming on 14-5-4, 3:23 PM
 * </pre>
 */
public class KafkaProducerDemo {

    public static void main(String[] args) throws InterruptedException {

        Properties properties = new Properties();
        // The 0.8 producer talks to the brokers directly; no ZooKeeper connection is needed.
        properties.put("metadata.broker.list", "localhost:9092");
        properties.put("serializer.class", "kafka.serializer.StringEncoder");

        ProducerConfig producerConfig = new ProducerConfig(properties);
        Producer<String, String> producer = new Producer<String, String>(producerConfig);

        // Build the message
        KeyedMessage<String, String> keyedMessage =
                new KeyedMessage<String, String>("test-topic", "test-message");
        producer.send(keyedMessage);

        Thread.sleep(1000);

        producer.close();
    }
}

Consumer code:

import java.io.UnsupportedEncodingException;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import kafka.consumer.*;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

import org.apache.commons.collections.CollectionUtils;

/**
 * <pre>
 * Created by zhaoming on 14-5-4, 3:32 PM
 * </pre>
 */
public class KafkaConsumerDemo {

    public static void main(String[] args) throws InterruptedException, UnsupportedEncodingException {

        Properties properties = new Properties();
        properties.put("zookeeper.connect", "127.0.0.1:2181");
        properties.put("auto.commit.enable", "true");
        properties.put("auto.commit.interval.ms", "60000");
        properties.put("group.id", "test-group");

        ConsumerConfig consumerConfig = new ConsumerConfig(properties);

        ConsumerConnector javaConsumerConnector = Consumer.createJavaConsumerConnector(consumerConfig);

        // Topic filter
        Whitelist whitelist = new Whitelist("test-topic");
        List<KafkaStream<byte[], byte[]>> partitions =
                javaConsumerConnector.createMessageStreamsByFilter(whitelist);

        if (CollectionUtils.isEmpty(partitions)) {
            System.out.println("empty!");
            TimeUnit.SECONDS.sleep(1);
        }

        // Consume messages
        for (KafkaStream<byte[], byte[]> partition : partitions) {

            ConsumerIterator<byte[], byte[]> iterator = partition.iterator();
            while (iterator.hasNext()) {
                MessageAndMetadata<byte[], byte[]> next = iterator.next();
                System.out.println("partition:" + next.partition());
                System.out.println("offset:" + next.offset());
                System.out.println("message:" + new String(next.message(), "utf-8"));
            }
        }
    }
}
