Java programmers like to say there are three great inventions in the world: fire, the wheel, and Kafka.

# 1. What Is Kafka?

Some say the world has three great inventions: fire, the wheel, and Kafka.

Apache Kafka has undoubtedly been a success: Confluent has said that a third of the Fortune 500 use Kafka. In stream processing, Kafka is commonly used to buffer data; for example, Flink consumes data from Kafka for computation.

The first four things to understand about Kafka:

![](https://static001.geekbang.org/infoq/8c/8c49c9e6b7b0aead3f83e9ea650e5d93.jpeg)

1. Apache Kafka is an open-source **messaging** system written in Scala, developed as a project of the Apache Software Foundation.
2. Kafka was originally developed at LinkedIn as the foundation of its activity stream and operational data pipelines; it is now used by many companies, of many kinds, for a wide range of data pipelines and messaging systems.
3. **Kafka is a distributed message queue.** Kafka categorizes stored messages by topic. A message sender is called a producer, a message receiver is called a consumer, and a Kafka cluster consists of multiple Kafka instances, each of which (a server) is called a broker.
4. Both the Kafka cluster and its consumers rely on a **ZooKeeper** cluster to store metadata and keep the system available.

# 2. Why Kafka?

Kafka's growing popularity is inseparable from the three roles it plays:

- **Messaging system**: like traditional message middleware, Kafka provides decoupling, redundant storage, peak shaving, buffering, asynchronous communication, scalability, and recoverability. In addition, Kafka offers message-ordering guarantees and the ability to rewind and re-consume messages, which most messaging systems find hard to provide.
- **Storage system**: Kafka persists messages to disk, which, compared with memory-based systems, effectively reduces the risk of message loss. This comes from its persistence and multi-replica mechanisms. Kafka can even serve as long-term storage; just set the retention policy to "forever" or enable log compaction for the topic.
- **Stream-processing platform**: Kafka not only provides a reliable data source for popular stream-processing frameworks, it also ships with a complete stream-processing library of its own, with operations such as windowing, joins, transformations, and aggregations.
Kafka's key characteristics:

| Characteristic | Description |
| --- | --- |
| Distributed | Economical, fast, reliable, easy to extend, with data sharing, device sharing, convenient communication, and flexibility |
| High throughput | High throughput for both producers and consumers |
| High reliability | Supports multiple consumers; when one consumer fails, load is rebalanced automatically |
| Offline | Messages can be persisted for batch processing |
| Decoupling | Acts as a bridge connecting systems, avoiding coupling between them |

# 3. Basic Kafka Concepts

Before digging deeper into Kafka, it helps to know its basic concepts.

A typical Kafka deployment contains several producers, several brokers, several consumers, and a ZooKeeper cluster. ZooKeeper handles cluster metadata management and controller election. Producers send messages to brokers, brokers persist messages to disk, and consumers subscribe to and consume messages from brokers. The Kafka architecture looks like this:

![](https://static001.geekbang.org/infoq/9b/9b5411c9c196d302c918196b04f7ba15.png)

## Concept 1: Producers and Consumers

![](https://static001.geekbang.org/infoq/44/440192680a5632a85dc67decc4238c7a.png)

Kafka has two basic kinds of clients: **producers** and **consumers**. There are also higher-level clients, such as the Kafka Connect API for data integration and **Kafka Streams** for stream processing, but under the hood these are still built on the producer and consumer APIs; they simply add a layer on top.
- **Producer**: a client that publishes messages to Kafka brokers.
- **Consumer**: a client that fetches messages from Kafka brokers.

## Concept 2: Brokers and Clusters

A Kafka server is also called a **broker**. It accepts messages from producers and writes them to disk, and it serves consumers' fetch requests for partition data, returning messages that have been committed. On suitable hardware, a single broker can handle thousands of partitions and millions of messages per second.

Several brokers form a **cluster**. One broker in the cluster becomes the cluster controller, which manages the cluster: assigning partitions to brokers, monitoring broker failures, and so on. Within a cluster, each partition is handled by one broker, called that partition's leader. A partition can also be replicated to multiple brokers for redundancy, so that when a broker fails, its partitions can be reassigned to other brokers.
The figure below shows an example:

![](https://static001.geekbang.org/infoq/f3/f38070afc792a2cd89b3cfa1d7121bbb.png)

## Concept 3: Topics and Partitions

![](https://static001.geekbang.org/infoq/b6/b6ffb37e7ff461ef1464390f629b86a3.png)

In Kafka, messages are categorized by **topic**. Each topic corresponds to a "message queue," somewhat like a table in a database. But if all messages of one kind were pushed into a single "central" queue, scalability would suffer: growth in the number of producers or consumers, or in message volume, could exhaust the system's performance or storage.

A real-world analogy: goods produced in city A must be trucked to city B. A single-lane highway hits a throughput ceiling both when "city A produces more goods" and when "city C also starts shipping to city B." So we introduce the concept of a **partition**, which scales a topic horizontally, much like adding more lanes to the road.
# 4. Kafka Workflow Analysis

![](https://static001.geekbang.org/infoq/53/5353034f4adb125f5144907fac9f2068.jpeg)

## 4.1 The Production Process

### 4.1.1 Write Mode

The producer pushes messages to the broker, and each message is appended to a partition. These are sequential disk writes (sequential disk writes are more efficient than random memory writes, which helps guarantee Kafka's throughput).

### 4.1.2 Partitions

Messages are sent to a topic, which is essentially a directory; a topic is made up of partition logs, organized as shown below:

![](https://static001.geekbang.org/infoq/df/dfbc0144eb8ff37ae9e27b745b89d8fd.jpeg)

As the figure shows, the messages within each partition are **ordered**: produced messages are continuously appended to the partition log, and each message is assigned a unique **offset** value.
**1) Why partition?**

1. It makes scaling across the cluster easy: each partition can be sized to fit the machine hosting it, and a topic can consist of many partitions, so the cluster can accommodate data of any size.
2. It increases concurrency, because reads and writes can be performed per partition.

**2) Partitioning rules**

1. If a partition is specified, use it directly.
2. If no partition is specified but a key is given, hash the key to pick a partition.
3. If neither a partition nor a key is specified, pick a partition round-robin.

```java
// DefaultPartitioner
public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
    List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
    int numPartitions = partitions.size();
    if (keyBytes == null) {
        int nextValue = nextValue(topic);
        List<PartitionInfo> availablePartitions = cluster.availablePartitionsForTopic(topic);
        if (availablePartitions.size() > 0) {
            int part = Utils.toPositive(nextValue) % availablePartitions.size();
            return availablePartitions.get(part).partition();
        } else {
            // no partitions are available, give a non-available partition
            return Utils.toPositive(nextValue) % numPartitions;
        }
    } else {
        // hash the keyBytes to choose a partition
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }
}
```
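The same selection rules can be sketched in plain, self-contained Java. This is an illustration only, not Kafka's actual code: `String.hashCode` stands in for murmur2, and an `AtomicInteger` stands in for the per-topic counter.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of the partitioning rules above:
// keyed messages hash to a stable partition; unkeyed messages rotate round-robin.
public class PartitionSketch {
    private final AtomicInteger counter = new AtomicInteger(0); // stand-in for nextValue(topic)
    private final int numPartitions;

    public PartitionSketch(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    public int partition(String key) {
        if (key == null) {
            // rule 3: round-robin for unkeyed messages; masking keeps the value non-negative
            return (counter.getAndIncrement() & Integer.MAX_VALUE) % numPartitions;
        }
        // rule 2: hash the key (String.hashCode stands in for murmur2)
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        PartitionSketch p = new PartitionSketch(3);
        System.out.println(p.partition(null));  // 0 (round-robin)
        System.out.println(p.partition(null));  // 1
        // the same key always lands on the same partition
        System.out.println(p.partition("user-42") == p.partition("user-42")); // true
    }
}
```

The key property to notice is that all messages with the same key land in the same partition, which is what makes per-key ordering possible.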
### 4.1.3 Replication

A single partition may have multiple replicas (see `default.replication.factor=N` in `server.properties`). Without replication, once a broker goes down, none of the partitions on it can be consumed, and producers can no longer write to them. With replication, a leader is elected among the replicas of each partition; producers and consumers interact only with the leader, while the other replicas act as followers that copy data from the leader.

### 4.1.4 Write Flow

The producer write flow is as follows:

![](https://static001.geekbang.org/infoq/91/91de9b7ca38179426f6a71fd57a700ad.png)

1. The producer finds the leader of the partition from the ZooKeeper node "/brokers/.../state".
2. The producer sends the message to that leader.
3. The leader writes the message to its local log.
4. The followers pull the message from the leader, write it to their local logs, and send an ACK to the leader.
5. After receiving ACKs from all replicas in the ISR, the leader advances the HW (high watermark, the offset of the last committed message) and sends an ACK to the producer.
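Steps 3 through 5 can be sketched as a minimal in-memory simulation (hypothetical classes, no networking, and a heavily simplified ISR: every follower acks immediately). The point is only to show when the high watermark may advance.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the commit protocol above (illustration only):
// a leader log, follower logs, and a high watermark that advances to the
// lowest offset replicated by every in-sync replica.
public class WriteFlowSketch {
    final List<String> leaderLog = new ArrayList<>();
    final List<List<String>> followerLogs = new ArrayList<>();
    long highWatermark = 0; // number of committed messages

    WriteFlowSketch(int followers) {
        for (int i = 0; i < followers; i++) followerLogs.add(new ArrayList<>());
    }

    void produce(String msg) {
        leaderLog.add(msg);                    // step 3: leader writes its local log
        for (List<String> f : followerLogs) {  // step 4: followers pull and ack
            f.add(msg);
        }
        // step 5: every ISR member has acked, so the HW can advance to the
        // minimum replicated offset
        long minReplicated = leaderLog.size();
        for (List<String> f : followerLogs) {
            minReplicated = Math.min(minReplicated, f.size());
        }
        highWatermark = minReplicated;
    }

    public static void main(String[] args) {
        WriteFlowSketch w = new WriteFlowSketch(2);
        w.produce("m1");
        w.produce("m2");
        System.out.println(w.highWatermark); // 2: both messages fully replicated
    }
}
```

In real Kafka a follower that falls behind is removed from the ISR, so the leader never waits on it; this sketch omits that and many other details.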
## 4.2 How Brokers Store Messages

### 4.2.1 Storage Layout

Physically, a topic is split into one or more partitions (see `num.partitions=3` in `server.properties`). Each partition corresponds to a directory on disk that stores all of that partition's messages and index files:

```shell
[root@hadoop102 logs]$ ll
drwxrwxr-x. 2 demo demo 4096 Aug  6 14:37 first-0
drwxrwxr-x. 2 demo demo 4096 Aug  6 14:35 first-1
drwxrwxr-x. 2 demo demo 4096 Aug  6 14:37 first-2

[root@hadoop102 logs]$ cd first-0
[root@hadoop102 first-0]$ ll
-rw-rw-r--. 1 demo demo 10485760 Aug  6 14:33 00000000000000000000.index
-rw-rw-r--. 1 demo demo      219 Aug  6 15:07 00000000000000000000.log
-rw-rw-r--. 1 demo demo 10485756 Aug  6 14:33 00000000000000000000.timeindex
-rw-rw-r--. 1 demo demo        8 Aug  6 14:37 leader-epoch-checkpoint
```

### 4.2.2 Retention Policy

Kafka retains all messages, whether or not they have been consumed. Two policies can delete old data:

- Time-based: `log.retention.hours=168`
- Size-based: `log.retention.bytes=1073741824`

Note that reading a specific message in Kafka is O(1), independent of file size, so deleting expired files is about reclaiming disk space; it has nothing to do with improving Kafka's performance.
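The two retention checks can be sketched as follows. The `Segment` type and `applyRetention` helper are hypothetical stand-ins for illustration, not Kafka's internals: a segment is dropped when it is older than the retention window, and the oldest segments are dropped while the partition exceeds the byte budget.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of time-based and size-based retention (illustration only).
public class RetentionSketch {
    record Segment(Instant created, long bytes) {}

    static int applyRetention(Deque<Segment> segments, Duration maxAge, long maxBytes, Instant now) {
        int deleted = 0;
        // time-based: drop segments past the retention window (log.retention.hours)
        while (!segments.isEmpty() && segments.peekFirst().created().isBefore(now.minus(maxAge))) {
            segments.removeFirst();
            deleted++;
        }
        // size-based: drop oldest segments until under the byte budget (log.retention.bytes)
        long total = segments.stream().mapToLong(Segment::bytes).sum();
        while (segments.size() > 1 && total > maxBytes) {
            total -= segments.removeFirst().bytes();
            deleted++;
        }
        return deleted;
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2020-09-11T00:00:00Z");
        Deque<Segment> segs = new ArrayDeque<>();
        segs.add(new Segment(now.minus(Duration.ofHours(200)), 500)); // past the 168h window
        segs.add(new Segment(now.minus(Duration.ofHours(10)), 800));
        segs.add(new Segment(now.minus(Duration.ofHours(1)), 800));
        int deleted = applyRetention(segs, Duration.ofHours(168), 1_000, now);
        System.out.println(deleted);     // 2: one expired by time, one evicted by size
        System.out.println(segs.size()); // 1
    }
}
```

Deletion always removes the oldest data first, which matches the append-only nature of the partition log.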
### 4.2.3 ZooKeeper Storage Structure

![](https://static001.geekbang.org/infoq/b6/b66721517465af49c786749e1ca1aca3.png)

Note: producers do not register in ZooKeeper; consumers do.

## 4.3 The Consumption Process

Kafka provides two consumer APIs: the high-level Consumer API and the low-level Consumer API.

### 4.3.1 The High-Level API

**1) Advantages**

- Simple to write against.
- No need to manage offsets yourself; they are managed automatically through ZooKeeper.
- No need to manage partitions, replicas, and so on; the system handles it.
- A disconnected consumer resumes from the offset last recorded in ZooKeeper (by default the stored offset is updated once a minute).
- Consumer groups can separate different programs' access to the same topic (each group tracks its own offsets, so different programs reading the same topic don't interfere with each other).

**2) Disadvantages**

- No fine-grained control of offsets (a problem for some special requirements).
- No fine-grained control of partitions, replicas, ZooKeeper, and so on.
### 4.3.2 The Low-Level API

**1) Advantages**

- Developers control the offset themselves and can read from wherever they like.
- Developers control which partitions they connect to and can balance load across partitions as they see fit.
- Reduced dependence on ZooKeeper (offsets need not be stored in ZooKeeper; they can be kept anywhere, such as in a file or in memory).

**2) Disadvantages**

- Considerable complexity: you must manage offsets yourself, decide which partition to connect to, find the partition leader, and so on.

### 4.3.3 Consumer Groups

![](https://static001.geekbang.org/infoq/79/798a6c627cb84378ed9b4d0ecabe2c53.png)

Consumers work as consumer groups: one or more consumers form a group that consumes a topic together. Each partition can be read by only one consumer in a group at any given time, but multiple groups can consume the same partition simultaneously. In the figure, a group of three consumers reads a topic: one consumer reads two of the partitions, and the other two read one partition each. A consumer reading a partition is sometimes called that partition's owner.

In this arrangement, consumers can scale horizontally to read a large volume of messages. Moreover, if one consumer fails, the other group members automatically rebalance to take over the partitions the failed consumer was reading.
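The "one partition, one consumer per group" rule above can be sketched with a hypothetical round-robin assignment (Kafka's real assignment strategies are pluggable and more sophisticated; this only shows the invariant):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: assign each partition to exactly one consumer in the group.
public class GroupAssignSketch {
    static List<List<Integer>> assign(int partitions, int consumers) {
        List<List<Integer>> out = new ArrayList<>();
        for (int c = 0; c < consumers; c++) out.add(new ArrayList<>());
        for (int p = 0; p < partitions; p++) {
            out.get(p % consumers).add(p); // a partition never goes to two consumers
        }
        return out;
    }

    public static void main(String[] args) {
        // 4 partitions, 3 consumers: one consumer owns two partitions (as in the figure)
        System.out.println(assign(4, 3)); // [[0, 3], [1], [2]]
        // more consumers than partitions: the extra consumer sits idle
        System.out.println(assign(2, 3)); // [[0], [1], []]
    }
}
```

The second case shows why a group should not have more consumers than the topic has partitions: the surplus consumers get nothing to read.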
### 4.3.4 Consumption Model

The consumer pulls data from the broker.

A push model struggles with consumers that consume at different rates, because the broker decides the send rate. Its goal is to deliver messages as fast as possible, which easily overwhelms consumers; the typical symptoms are denial of service and network congestion. A pull model lets each consumer fetch messages at a rate suited to its own capacity.

For Kafka, pull is the better fit: it simplifies the broker's design, lets the consumer control its own consumption rate and style (batch or one-by-one), and lets it choose among different commit modes to achieve different delivery semantics.

The drawback of pull is that when Kafka has no data, the consumer may spin in a loop, waiting for data to arrive. To avoid this, the fetch request carries parameters that allow the consumer to block in a "long poll" until data arrives (optionally waiting until a given number of bytes is available, to ensure large transfer sizes).
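The long-poll idea can be illustrated with a `BlockingQueue` standing in for a partition (an analogy only, not the Kafka fetch protocol): the consumer blocks up to a timeout instead of spinning when no data exists.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of pull with long polling: block up to a timeout rather than busy-wait.
public class LongPollSketch {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> partition = new LinkedBlockingQueue<>();

        // no data yet: poll blocks for the timeout, then returns null instead of spinning
        String none = partition.poll(100, TimeUnit.MILLISECONDS);
        System.out.println(none); // null

        // a producer thread appends a message shortly after the consumer starts waiting
        new Thread(() -> {
            try {
                Thread.sleep(50);
                partition.put("hello");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        // the waiting consumer wakes as soon as data arrives, well before the timeout
        String msg = partition.poll(1, TimeUnit.SECONDS);
        System.out.println(msg); // hello
    }
}
```

This is the same shape as `KafkaConsumer.poll(Duration)` used in the consumer example later in this article: the timeout bounds the wait, and data arriving early ends it immediately.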
# 5. Installing Kafka

## 5.1 Environment and Prerequisites

Environment: Linux

Prerequisites:

- JDK 1.8 or above installed, with environment variables configured correctly
- Scala 2.11 installed
- ZooKeeper installed (note: Kafka ships with a bundled ZooKeeper service; if you don't install one separately, you can use the bundled ZK)

## 5.2 Installation Steps

Like most Apache Foundation open-source software, Kafka is easy to install: download, extract, and configure environment variables. Pick the matching version on the official site and extract it into an installation directory, and it's ready to use. For convenience you can add the environment variables to `~/.bashrc` so you don't have to switch to the installation directory every time.

For details, see: Kafka cluster installation and environment testing.

## 5.3 Testing

Next, a simple console session can verify that Kafka is installed correctly.

**(1) Start the ZooKeeper service**

If you installed ZooKeeper yourself, start it with `zkServer.sh start`.
If you use the ZK service bundled with Kafka, start it as follows (the shell does not return after startup; run subsequent commands in another terminal):

```shell
$ cd /opt/tools/kafka  # enter the installation directory
$ bin/zookeeper-server-start.sh config/zookeeper.properties
```

**(2) Start the Kafka service**

The command to start the Kafka service is:

```shell
$ cd /opt/tools/kafka  # enter the installation directory
$ bin/kafka-server-start.sh config/server.properties
```

**(3) Create a topic, say "test"**

The command below creates a topic; its arguments specify, in order, the ZooKeeper it depends on, the replication factor, the number of partitions, and the topic name:

```shell
$ cd /opt/tools/kafka  # enter the installation directory
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
```

After creation, you can list the topics with:

```shell
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
```
**(4) Start the producer and consumer consoles**

Kafka ships with console producer and consumer programs, which can be used for a first verification.

First start the producer:

```shell
$ cd /opt/tools/kafka  # enter the installation directory
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
```

Then start the consumer:

```shell
$ cd /opt/tools/kafka  # enter the installation directory
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
```

Type some data into the producer terminal and press Enter; if it appears in the consumer terminal, Kafka is installed correctly.

# 6. A Simple Apache Kafka Example

## 6.1 Create the Message Queue

```shell
kafka-topics.sh --create --zookeeper 192.168.56.137:2181 --topic test --replication-factor 1 --partitions 1
```

## 6.2 pom.xml

```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.1.1</version>
</dependency>
```
## 6.3 Producer

```java
package com.njbdqn.services;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

/**
 * @author Tokgo J
 * @date 2020/9/11
 * Producer: writes messages into the "test" queue
 */
public class MyProducer {
    public static void main(String[] args) {
        // configuration
        Properties prop = new Properties();
        // kafka address; separate multiple addresses with commas: "192.168.23.76:9092,192.168.23.77:9092"
        prop.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.56.137:9092");
        prop.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        prop.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        KafkaProducer<String, String> prod = new KafkaProducer<>(prop);

        // send messages
        try {
            for (int i = 0; i < 10; i++) {
                ProducerRecord<String, String> pr = new ProducerRecord<>("test", "hello world" + i);
                prod.send(pr);
                Thread.sleep(500);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            prod.close();
        }
    }
}
```

Notes:

1. If Kafka is a cluster, separate multiple addresses with commas (,).
2. The first argument of `Properties.put` can also be a plain string, e.g. `p.put("bootstrap.servers", "192.168.23.76:9092")`.
3. `kafkaProducer.send(record)` returns a `Future` that can be used to check whether the message reached Kafka, improving reliability; alternatively, pass a callback as `send`'s second argument and judge success in the callback.
4. `p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);` sets the serializer class; the fully qualified class name also works.

## 6.4 Consumer

```java
package com.njbdqn.services;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

/**
 * @author Tokgo J
 * @date 2020/9/11
 * Consumer: reads data from kafka
 */
public class MyConsumer {
    public static void main(String[] args) {
        Properties prop = new Properties();
        prop.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.56.137:9092");
        prop.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        prop.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        prop.put("session.timeout.ms", "30000");
        // whether the consumer auto-commits offsets; the default is true.
        // set to false to avoid duplicate consumption
        prop.put("enable.auto.commit", "false");
        prop.put("auto.commit.interval.ms", "1000");
        // auto.offset.reset: what to do when reading a partition that has no
        // committed offset, or whose offset is invalid
        // earliest: start from the beginning of the partition
        // latest: start from the latest position
        prop.put("auto.offset.reset", "earliest");

        // group name
        prop.put(ConsumerConfig.GROUP_ID_CONFIG, "group");

        KafkaConsumer<String, String> con = new KafkaConsumer<>(prop);

        con.subscribe(Collections.singletonList("test"));

        while (true) {
            ConsumerRecords<String, String> records = con.poll(Duration.ofSeconds(100));
            for (ConsumerRecord<String, String> rec : records) {
                System.out.println(String.format("offset:%d,key:%s,value:%s", rec.offset(), rec.key(), rec.value()));
            }
        }
    }
}
```
prop.put(\"session.timeout.ms\", \"30000\");\n        //消費者是否自動提交偏移量,默認是true 避免出現重複數據 設爲false\n        prop.put(\"enable.auto.commit\", \"false\");\n        prop.put(\"auto.commit.interval.ms\", \"1000\");\n        //auto.offset.reset 消費者在讀取一個沒有偏移量的分區或者偏移量無效的情況下的處理\n        //earliest 在偏移量無效的情況下 消費者將從起始位置讀取分區的記錄\n        //latest 在偏移量無效的情況下 消費者將從最新位置讀取分區的記錄\n        prop.put(\"auto.offset.reset\", \"earliest\");\n\n        // 設置組名\n        prop.put(ConsumerConfig.GROUP_ID_CONFIG, \"group\");\n\n        KafkaConsumer con = new KafkaConsumer(prop);\n\n        con.subscribe(Collections.singletonList(\"test\"));\n\n        while (true) {\n            ConsumerRecords records = con.poll(Duration.ofSeconds(100));\n            for (ConsumerRecord rec : records) {\n                System.out.println(String.format(\"offset:%d,key:%s,value:%s\", rec.offset(), rec.key(), rec.value()));\n\n            }\n        }\n    }\n}\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"注意:","attrs":{}}]},{"type":"numberedlist","attrs":{"start":"1","normalizeStart":1},"content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"訂閱消息可以訂閱多個主題;","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"ConsumerConfig.GROUP_ID_CONFIG 
表示消費者的分組,kafka根據分組名稱判斷是不是同一組消費者,同一組消費者去消費一個主題的數據的時候,數據將在這一組消費者上面輪詢;","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"主題涉及到分區的概念,同一組消費者的個數不能大於分區數。因爲:一個分區只能被同一羣組的一個消費者消費。出現分區小於消費者個數的時候,可以動態增加分區;","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"注意和生產者的對比,Properties中的key和value是反序列化,而生產者是序列化。","attrs":{}}]}],"attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"七、參考","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"朱小廝:《深入理解Kafka:核心設計與實踐原理》","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"宇宙灣:《Apache Kafka 分佈式消息隊列框架》","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"需要上述參考資料或者是想更多kafka相關參考資料的讀者可以關注公衆號【Java鬥帝】回覆666 即可免費獲取","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" ","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"看完三件事❤️","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"========","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" ","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果你覺得這篇內容對你還蠻有幫助,我想邀請你幫我三個小忙:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 點贊,轉發,有你們的 
『點贊和評論』,纔是我創造的動力。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 關注公衆號 『 Java鬥帝 』,不定期分享原創知識。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 同時可以期待後續文章ing🚀","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" ","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}}]}
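To illustrate note 3 of the producer section, here is a minimal sketch of confirming delivery both by blocking on the returned Future and via the callback argument of send. The broker address and topic are reused from the example above; the class name MyProducerWithCallback is hypothetical, and running it requires a live Kafka broker plus the kafka-clients dependency.

```java
package com.njbdqn.services;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;
import java.util.concurrent.Future;

public class MyProducerWithCallback {
    public static void main(String[] args) throws Exception {
        Properties prop = new Properties();
        prop.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.56.137:9092");
        prop.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        prop.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaProducer<String, String> prod = new KafkaProducer<>(prop)) {
            // 1) Synchronous check: block on the Future until the broker acknowledges the write
            Future<RecordMetadata> future = prod.send(new ProducerRecord<>("test", "sync message"));
            RecordMetadata meta = future.get(); // throws if the send failed
            System.out.printf("sync send ok: partition=%d, offset=%d%n", meta.partition(), meta.offset());

            // 2) Asynchronous check: pass a callback as the second argument of send
            prod.send(new ProducerRecord<>("test", "async message"), (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace(); // the send failed
                } else {
                    System.out.printf("async send ok: partition=%d, offset=%d%n",
                            metadata.partition(), metadata.offset());
                }
            });
        }
    }
}
```

Blocking on every Future trades throughput for certainty; the callback form keeps sends asynchronous while still surfacing failures.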
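Note 3 of the consumer section mentions adding partitions dynamically when a group has more consumers than partitions. A minimal sketch using the Kafka AdminClient API follows; the broker address and topic name are reused from the examples above, the target count of 4 is arbitrary, and a live broker is required.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

import java.util.Collections;
import java.util.Properties;

public class AddPartitions {
    public static void main(String[] args) throws Exception {
        Properties prop = new Properties();
        prop.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.56.137:9092");

        try (AdminClient admin = AdminClient.create(prop)) {
            // Grow topic "test" to 4 partitions in total
            // (a topic's partition count can only be increased, never decreased)
            admin.createPartitions(Collections.singletonMap("test", NewPartitions.increaseTo(4)))
                 .all()
                 .get(); // block until the broker confirms
        }
    }
}
```

The same change can be made from the command line with the kafka-topics.sh tool, e.g. kafka-topics.sh --bootstrap-server 192.168.56.137:9092 --alter --topic test --partitions 4.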