Reading Source Code with Questions series - zookeeper - when a client sends a write request to a follower, is it forwarded to the leader?

Question

When a client sends a write request to a follower, does the follower forward it to the leader to do the write?

Guess

It probably gets forwarded to the leader for the write.

Skimming the source

First we need to figure out where the server receives requests.
Since we ran into ClientCnxn earlier in "Reading Source Code with Questions series - zookeeper - how does the client send requests to the server", we guess the server-side counterpart that receives requests is ServerCnxn.

BTW, what on earth is "Cnxn"? 😂 (It's just an abbreviation of "connection".)

 /**
 * Interface to a Server connection - represents a connection from a client
 * to the server.
 */

ServerCnxn is the interface for a server connection: it represents one connection from a client to the server.
Scanning the methods of this class, there is a process function: org.apache.zookeeper.server.ServerCnxn#process

So let's look at the implementations of this class... there doesn't seem to be much here. process(WatchedEvent) just pushes watch notifications back to the client; it is not where incoming requests arrive.

The trail goes cold...

Time for a different clue. Back to the server's main function: what is the server doing once the election is over?
Recap: "Reading Source Code with Questions series - zookeeper - debugging the election process in the source, plus a macOS environment setup"

We want to know what the follower is doing, so we look at the code for the FOLLOWING state:
org.apache.zookeeper.server.quorum.QuorumPeer#run
org.apache.zookeeper.server.quorum.Follower#followLeader

Reading through the code, our guess is this part:
(screenshot)
But stepping into it, we only seem to see the communication with the leader; there is nothing about accepting client requests.
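For context, followLeader looks roughly like this - a condensed paraphrase from memory, not verbatim; the helper methods do exist in Follower/Learner, but their exact signatures vary between ZooKeeper versions:

void followLeader() throws InterruptedException {
    // find the elected leader from the current vote and connect to it
    QuorumServer leaderServer = findLeader();
    connectToLeader(leaderServer.addr, leaderServer.hostname);
    // handshake: announce ourselves as a FOLLOWER, learn the new epoch
    long newEpochZxid = registerWithLeader(Leader.FOLLOWERINFO);
    // catch up with the leader's history (DIFF / SNAP / TRUNC)
    syncWithLeader(newEpochZxid);
    // main loop: keep reading quorum packets from the leader
    // (PROPOSAL, COMMIT, PING, ...) and handle them
    QuorumPacket qp = new QuorumPacket();
    while (this.isRunning()) {
        readPacket(qp);
        processPacket(qp);
    }
}

So this loop only ever talks to the leader. Roughly speaking, the client-facing side lives elsewhere: QuorumPeer has already started a ServerCnxnFactory, and once the election is over it runs a FollowerZooKeeperServer behind it - that is where client requests actually enter.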

Let's try yet another clue: what gets logged when a request is received?

Same old routine - set up the environment: "Reading Source Code with Questions series - zookeeper - debugging the election process in the source, plus a macOS environment setup"

Then use the client to send a request to server1 (a FOLLOWER): create /test1
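(For reference, the same request via the Java client API - a minimal sketch assuming server1, the follower, listens on 127.0.0.1:2181; adjust the connect string to your own setup:)

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class CreateOnFollower {
    public static void main(String[] args) throws Exception {
        // connect only to the follower, so we know exactly which server receives the request
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> { });
        String path = zk.create("/test1", "hello".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        System.out.println("created " + path);
        zk.close();
    }
}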

2020-05-13 14:56:35,346 [myid:1] - INFO  [CommitProcessor:1:LearnerSessionTracker@116] - Committing global session 0x10015cb31660003

(screenshot)
Haha, that points us straight at the relevant code: org.apache.zookeeper.server.quorum.LearnerSessionTracker#commitSession

First, read through the Javadoc of CommitProcessor:

/**
 * This RequestProcessor matches the incoming committed requests with the
 * locally submitted requests. The trick is that locally submitted requests that
 * change the state of the system will come back as incoming committed requests,
 * so we need to match them up. Instead of just waiting for the committed requests,
 * we process the uncommitted requests that belong to other sessions.
 *
 * The CommitProcessor is multi-threaded. Communication between threads is
 * handled via queues, atomics, and wait/notifyAll synchronized on the
 * processor. The CommitProcessor acts as a gateway for allowing requests to
 * continue with the remainder of the processing pipeline. It will allow many
 * read requests but only a single write request to be in flight simultaneously,
 * thus ensuring that write requests are processed in transaction id order.
 *
 *   - 1   commit processor main thread, which watches the request queues and
 *         assigns requests to worker threads based on their sessionId so that
 *         read and write requests for a particular session are always assigned
 *         to the same thread (and hence are guaranteed to run in order).
 *   - 0-N worker threads, which run the rest of the request processor pipeline
 *         on the requests. If configured with 0 worker threads, the primary
 *         commit processor thread runs the pipeline directly.
 *
 * Typical (default) thread counts are: on a 32 core machine, 1 commit
 * processor thread and 32 worker threads.
 *
 * Multi-threading constraints:
 *   - Each session's requests must be processed in order.
 *   - Write requests must be processed in zxid order
 *   - Must ensure no race condition between writes in one session that would
 *     trigger a watch being set by a read request in another session
 *
 * The current implementation solves the third constraint by simply allowing no
 * read requests to be processed in parallel with write requests.
 */
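The "same session always goes to the same worker thread" part of that Javadoc boils down to an idea like the following - an illustrative sketch of the principle only, not the real CommitProcessor/WorkerService code:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: pin each session to one single-threaded worker so that
// the requests of that session are executed in submission order.
class SessionOrderedDispatcher {
    private final List<ExecutorService> workers = new ArrayList<>();

    SessionOrderedDispatcher(int numWorkers) {
        for (int i = 0; i < numWorkers; i++) {
            workers.add(Executors.newSingleThreadExecutor());
        }
    }

    void dispatch(long sessionId, Runnable processRequest) {
        // same sessionId -> same worker index -> per-session FIFO ordering
        int idx = (int) Math.floorMod(sessionId, (long) workers.size());
        workers.get(idx).submit(processRequest);
    }
}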

So this is actually the final commit stage of processing; it doesn't look like the place where a write request is first received...

Next move: turn on DEBUG logging:
(screenshot)
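(What that amounts to, assuming the stock conf/log4j.properties that ships with the source tree, is raising the root logger from INFO to DEBUG:)

# conf/log4j.properties (assumed default layout)
zookeeper.root.logger=DEBUG, CONSOLE
log4j.rootLogger=${zookeeper.root.logger}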

With that, sending the request again produces a lot more log output:

The rough flow:

The first entries are mostly ping requests.
Then comes a create request - the burst of requests triggered by the create we sent from the client. So we can start setting breakpoints from the first create log line:
FollowerRequestProcessor:1:CommitProcessor@607
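That log prefix already tells a story: a thread named FollowerRequestProcessor:1 is calling into CommitProcessor. The follower wires its request pipeline up roughly like this - a paraphrase of FollowerZooKeeperServer.setupRequestProcessors, not verbatim, and constructor arguments differ between versions:

protected void setupRequestProcessors() {
    // client request path on a follower:
    //   FollowerRequestProcessor -> CommitProcessor -> FinalRequestProcessor
    RequestProcessor finalProcessor = new FinalRequestProcessor(this);
    commitProcessor = new CommitProcessor(finalProcessor, Long.toString(getServerId()), true);
    commitProcessor.start();
    firstProcessor = new FollowerRequestProcessor(this, commitProcessor);
    ((FollowerRequestProcessor) firstProcessor).start();
    // separate path that logs the leader's PROPOSALs and ACKs them back
    syncProcessor = new SyncRequestProcessor(this, new SendAckRequestProcessor(getFollower()));
    syncProcessor.start();
}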

(screenshot)
Write requests are needCommit, so we end up in this branch.

You can see the request is simply added to a queue, so there must be a thread that takes requests off this queue and processes them (a paraphrased sketch of both pieces follows):
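What the breakpoint shows is roughly the following - a paraphrase of CommitProcessor.processRequest and needCommit; details differ between ZooKeeper versions:

// The calling thread (here the FollowerRequestProcessor thread) only enqueues;
// the CommitProcessor's own thread drains the queue later.
public void processRequest(Request request) {
    synchronized (this) {
        queuedRequests.add(request);
        notifyAll();   // wake the CommitProcessor thread
    }
}

// needCommit: true for state-changing operations, which must wait for the
// leader's COMMIT; reads can be answered from local state and return false.
protected boolean needCommit(Request request) {
    switch (request.type) {
    case OpCode.create:
    case OpCode.delete:
    case OpCode.setData:
    case OpCode.setACL:
    case OpCode.multi:
    case OpCode.createSession:
    case OpCode.closeSession:
        return true;
    default:
        return false;
    }
}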

Next, find where requests are taken off that queue and set a breakpoint there:
(screenshot)

When the client sends create /t5, type is OpCode.create and we land on the breakpoint below:
(screenshot)

Judging by the comment in the code, this is the point where the request gets shipped to the leader.
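The code behind that breakpoint is the run() loop of FollowerRequestProcessor. A condensed paraphrase (not verbatim):

public void run() {
    try {
        while (!finished) {
            Request request = queuedRequests.take();
            // 1) hand the request to the local pipeline (CommitProcessor) first,
            //    so the follower is ready to match the COMMIT when it comes back
            nextProcessor.processRequest(request);
            // 2) ship the request to the leader -- this is the step the source
            //    comment about shipping the request to the leader refers to
            switch (request.type) {
            case OpCode.create:
            case OpCode.delete:
            case OpCode.setData:
            case OpCode.setACL:
            case OpCode.multi:
            case OpCode.createSession:
            case OpCode.closeSession:
                zks.getFollower().request(request);
                break;
            default:
                // read requests are served locally and never leave the follower
                break;
            }
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}

Follower inherits request(Request) from Learner: it serializes the request and sends it to the leader as a REQUEST quorum packet. On the leader it then goes through the leader's own pipeline (PrepRequestProcessor, ProposalRequestProcessor, ...), gets proposed and quorum-acked, and only after that does a COMMIT come back to the follower - which is the stage the CommitProcessor / "Committing global session" log at the beginning belongs to.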

Answering the question

Yes - the follower does forward the client's write request to the leader.

(screenshot)
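(As an outside sanity check - a minimal sketch assuming the three servers from the debugging setup listen on 127.0.0.1:2181/2182/2183: a write issued against the follower shows up on the other servers shortly afterwards, which can only happen because the follower forwarded it to the leader and the leader committed it cluster-wide.)

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class WriteViaFollowerCheck {
    public static void main(String[] args) throws Exception {
        // write through the follower only
        ZooKeeper follower = new ZooKeeper("127.0.0.1:2181", 30000, e -> { });
        follower.create("/t5", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        // read it back from a different server; other servers may lag by a moment,
        // since ZooKeeper guarantees ordered, not instantaneous, visibility
        ZooKeeper other = new ZooKeeper("127.0.0.1:2183", 30000, e -> { });
        System.out.println("visible on 2183: " + (other.exists("/t5", false) != null));

        follower.close();
        other.close();
    }
}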
