Kafka Source Code: Setting Up the Runtime Environment

Table of Contents

Introduction

Environment Setup

Part 1: Downloading the Source

1. Source download

2. Configuring the project

Part 2: Running the Source

1. Starting Kafka

2. Setting up ZooKeeper

3. Running Kafka and ZooKeeper

4. Creating a producer and sending a message

 

Introduction:

Reading the source code of excellent open-source projects, studying the design and programming techniques in them, and thinking about the intent behind those designs helps us improve our own technical and coding skills. In day-to-day development we can borrow these design ideas and apply them to our own business logic.

Kafka is written in Scala, which is quite similar to Java, so you can mostly read it without first learning Scala; you can approach Kafka much like a web project from your own work. Kafka makes heavy use of the JDK's concurrency facilities (collections and the classes under java.util.concurrent), NIO networking, and file storage, and it also optimizes its scheduled thread pool to speed up adding and removing tasks. Reading the Kafka source is a good way to learn all of this. The first step is getting the source project running. Since IDEA makes this quick and convenient, the rest of this post walks through the setup using IDEA.

Environment Setup:

Part 1: Downloading the Source

1. Source download:

Find kafka on GitHub and clone it locally with IDEA: https://github.com/apache/kafka.git. After the clone finishes, the project looks like the screenshot below.

After cloning, install the Scala plugin in IDEA under Settings > Plugins, then download the Scala SDK jars as prompted. One thing to note: Kafka uses Gradle as its build tool.

 

2. Configuring the project:
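Roughly, the command-line equivalent of configuring the project is the following sketch. The exact steps vary by branch, so check the README of the branch you actually checked out; the branch name below assumes the upstream 0.11.0 release branch, and older branches bootstrap the Gradle wrapper from a locally installed Gradle.

```shell
# switch to the 0.11 release branch (branch name assumed from the upstream repo)
git checkout 0.11.0

# older branches bootstrap the Gradle wrapper from a locally installed Gradle
gradle

# build the jars and generate IDEA project files
./gradlew jar
./gradlew idea
```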

Part 2: Running the Source

1. Starting Kafka

Here we use Kafka 0.11 (this version is already fairly feature-complete, including the producer transaction and idempotence APIs). Kafka.scala is Kafka's entry class, so running this class starts Kafka. Clicking Run directly will fail with an error; configure the run as follows, pointing it at the configuration file, and it will start.

Program arguments (path of the broker configuration file): config/server.properties

VM options (log4j configuration path): -Dlog4j.configuration=file:D:/myProject/kafka/config/log4j.properties
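The repository already ships a config/server.properties that works for a single local broker; the entries that matter for this setup are roughly the following (the log.dirs path here is illustrative, point it at a real local directory):

```properties
# config/server.properties - relevant entries for a single local broker
broker.id=0
listeners=PLAINTEXT://localhost:9092
log.dirs=D:/myProject/kafka/logs
num.partitions=1
zookeeper.connect=localhost:2181
```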

 

2. Setting up ZooKeeper

Kafka depends on ZooKeeper for cluster metadata management, and Kafka fails on startup if it cannot connect to ZooKeeper, so we also need to set up ZooKeeper. Likewise, clone the ZooKeeper source (https://github.com/apache/zookeeper.git) from GitHub into IDEA and configure it to run (this lets us debug directly and see how Kafka and ZooKeeper interact). ZooKeeper's entry class is QuorumPeerMain; configure it as follows:
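Besides the run configuration, the ZooKeeper server needs a conf/zoo.cfg (the log below shows it being read from D:\myProject\zookeeper\conf\zoo.cfg). A minimal standalone configuration looks roughly like this (the dataDir path is illustrative):

```properties
# conf/zoo.cfg - minimal standalone setup
tickTime=2000
dataDir=D:/myProject/zookeeper/data
clientPort=2181
```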

 

3. Running Kafka and ZooKeeper

Now start ZooKeeper and then Kafka; if you see logs like the following, the startup succeeded.

ZooKeeper log:

2020-04-05 16:25:41,659 [myid:] - INFO  [main:QuorumPeerConfig@133] - Reading configuration from: D:\myProject\zookeeper\conf\zoo.cfg
2020-04-05 16:25:41,663 [myid:] - INFO  [main:QuorumPeerConfig@385] - clientPortAddress is 0.0.0.0/0.0.0.0:2181
2020-04-05 16:25:41,663 [myid:] - INFO  [main:QuorumPeerConfig@389] - secureClientPort is not set
2020-04-05 16:25:41,667 [myid:1] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2020-04-05 16:25:41,669 [myid:1] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2020-04-05 16:25:41,669 [myid:1] - INFO  [main:DatadirCleanupManager@101] - Purge task is not scheduled.
2020-04-05 16:25:41,669 [myid:1] - WARN  [main:QuorumPeerMain@124] - Either no config or no quorum defined in config, running  in standalone mode
2020-04-05 16:25:41,670 [myid:1] - INFO  [main:ManagedUtil@46] - Log4j found with jmx enabled.
2020-04-05 16:25:41,718 [myid:1] - INFO  [main:QuorumPeerConfig@133] - Reading configuration from: D:\myProject\zookeeper\conf\zoo.cfg
2020-04-05 16:25:41,718 [myid:1] - INFO  [main:QuorumPeerConfig@385] - clientPortAddress is 0.0.0.0/0.0.0.0:2181
2020-04-05 16:25:41,718 [myid:1] - INFO  [main:QuorumPeerConfig@389] - secureClientPort is not set
2020-04-05 16:25:41,719 [myid:1] - INFO  [main:ZooKeeperServerMain@116] - Starting server
2020-04-05 16:25:41,738 [myid:1] - INFO  [main:Environment@109] - Server environment:zookeeper.version=3.5.6-SNAPSHOT-3882a0171f91280bf1adbbd4ffaeb17cb5131316, built on 08/03/2019 02:09 GMT
2020-04-05 16:25:41,738 [myid:1] - INFO  [main:Environment@109] - Server environment:host.name=windows10.microdone.cn
Kafka log:
[2020-04-05 16:50:49,065] INFO KafkaConfig values: 
	advertised.host.name = null
	advertised.listeners = null
	advertised.port = null
	alter.config.policy.class.name = null
	authorizer.class.name = 
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = 0
	broker.id.generation.enable = true
	broker.rack = null
	compression.type = producer

4. Creating a producer and sending a message

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ProducerDemo {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "localhost:9092");
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("acks", "-1");                  // wait for all in-sync replicas to acknowledge
        // properties.put("enable.idempotence", true); // 0.11+ idempotent producer, disabled here
        properties.put("batch.size", "1");             // tiny batch so the record is sent immediately
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        ProducerRecord<String, String> record = new ProducerRecord<>("topic-test-1", "test-2", "value");
        Future<RecordMetadata> future = producer.send(record);
        RecordMetadata recordMetadata = future.get();  // block until the broker acknowledges
        System.out.println("ProducerDemo send result, topic=" + recordMetadata.topic()
                + ", partition=" + recordMetadata.partition() + ", offset=" + recordMetadata.offset());
        producer.close();                              // flush pending records and release resources
    }
}

We can see that Kafka has successfully received the message:

17:06:57.151 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.topic-test-1.record-retries
17:06:57.151 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.topic-test-1.record-errors
ProducerDemo send result ,  topic=topic-test-1 ,partition=0, offset=2
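The partition number in that line comes from the producer's partitioner. In Kafka 0.11, when a record carries a key, the built-in DefaultPartitioner murmur2-hashes the serialized key and takes it modulo the partition count. Below is a simplified, stdlib-only sketch of that idea (String.hashCode stands in for Kafka's murmur2, so real partition assignments will differ):

```java
public class PartitionSketch {
    // Simplified stand-in for the keyed path of Kafka's DefaultPartitioner:
    // the real code murmur2-hashes the serialized key bytes; hashCode is used here.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions; // mask keeps the value non-negative
    }

    public static void main(String[] args) {
        // A topic with a single partition maps every key to partition 0,
        // which is consistent with partition=0 in the log above.
        System.out.println(partitionFor("test-2", 1)); // prints 0
    }
}
```

This also explains why repeated sends with the same key always land on the same partition, which is what gives Kafka per-key ordering.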

This post was only a brief walkthrough of setting up and running the source, without going into much detail. Subsequent posts will focus on Kafka's overall architecture and its source code.
