While writing the article on etcd-raft's leader election, every time message receiving or sending came up I glossed over it with phrases like "a message is received" or "the message is sent out", which always felt a little awkward. This post describes how etcd-raft actually sends and receives messages.
First, a diagram of the relationships between the key structs, to get a feel for how complex the interactions are ^_^
Before walking through the message flow, here are the key struct types involved and their relevant embedded members:
EtcdServer :
// etcd-2.3.7/etcdserver/server.go
type EtcdServer struct {
// the interface EtcdServer uses to interact with the raft module
r raftNode
// ...
}
raftNode:
// etcd-2.3.7/etcdserver/raft.go
type raftNode struct {
//...
// points to a raft.node; used to process and generate raft protocol messages
raft.Node
// ...
// transport specifies the transport to send and receive msgs to members.
// ultimately points to a rafthttp.Transport
transport rafthttp.Transporter
// ...
}
raft.node:
// etcd-2.3.7/raft/node.go
// node is the canonical implementation of the Node interface
type node struct {
// propc carries write requests (proposals) from clients into the raft state machine
// recvc carries messages received from peers into the raft state machine
propc chan pb.Message
recvc chan pb.Message
// ...
// Ready structs carrying messages the raft layer has finished processing are handed to raftNode over readyc; raftNode selects on r.Ready() in its start loop
readyc chan Ready
// ...
}
rafthttp.Transport:
// etcd-2.3.7/rafthttp/transport.go
type Transport struct {
// timeout for a streamReader on this node dialing other peers
DialTimeout time.Duration // maximum duration before timing out dial of the request
// ...
// the local raft node's ID and URLs
ID types.ID // local member ID
URLs types.URLs // local peer URLs
// ...
// points to the EtcdServer
Raft Raft // raft state machine, to which the Transport forwards received messages and reports status
// ...
// each peer represents one raft node in the cluster; peers communicate with each other through streamReader and streamWriter instances owned by the peer object
peers map[types.ID]Peer // peers map
// ...
}
peer:
// etcd-2.3.7/rafthttp/peer.go
type peer struct {
// id of the remote raft peer node
id types.ID
// points to the EtcdServer; received messages are handed to EtcdServer's Process function through it
r Raft
// ...
// msgAppV2Writer and writer wait for connections from the peer and are used to send messages
msgAppV2Writer *streamWriter
writer *streamWriter
// used for snapshots, whose payload can be large
pipeline *pipeline
// ...
sendc chan raftpb.Message
// when streamReader reads a non-proposal message off this peer's connection, it passes it through recvc to the peer object's goroutine, which forwards it to EtcdServer via peer.r.Process
recvc chan raftpb.Message
// write requests a follower forwarded to the leader, read off the connection by streamReader
propc chan raftpb.Message
// ...
}
streamReader:
// etcd-2.3.7/rafthttp/stream.go
type streamReader struct {
// ...
// recvc and propc point to the same channels as the peer's fields of the same names; they are passed in when startPeer creates the streamReader
recvc chan<- raftpb.Message
propc chan<- raftpb.Message
// ...
}
streamWriter:
// streamWriter writes messages to the attached outgoingConn.
type streamWriter struct {
// ...
// points to the EtcdServer
r Raft
// ...
// other components enqueue messages destined for this peer into msgc through an accessor; streamWriter drains msgc and sends the messages to the peer
msgc chan raftpb.Message
// the connection established with the peer arrives over connc
connc chan *outgoingConn
// ...
}
Those are the main classes involved in peer communication. The sections below cover the two directions of the interaction in turn: sending messages and receiving messages.
Sending messages
The raft state machine's entry point for sending a message is raft.send:
// etcd-2.3.7/raft/raft.go
func (r *raft) send(m pb.Message) {
m.From = r.id
// do not attach term to MsgProp
// proposals are a way to forward to the leader and
// should be treated as local message.
if m.Type != pb.MsgProp {
m.Term = r.Term
}
// append the message to the msgs queue
r.msgs = append(r.msgs, m)
}
raft.send only appends outgoing messages to the msgs queue. The queue is drained in raft.node.run, which packs the pending messages into a Ready struct and pushes it out through its readyc channel:
// etcd-2.3.7/raft/node.go
func (n *node) run(r *raft) {
var propc chan pb.Message
var readyc chan Ready
var advancec chan struct{}
var prevLastUnstablei, prevLastUnstablet uint64
var havePrevLastUnstablei bool
var prevSnapi uint64
var rd Ready
lead := None
prevSoftSt := r.softState()
prevHardSt := emptyState
for {
if advancec != nil {
readyc = nil
} else {
// if the msgs queue has pending messages, build a Ready
rd = newReady(r, prevSoftSt, prevHardSt)
if rd.containsUpdates() {
readyc = n.readyc
} else {
readyc = nil
}
}
// ...
select {
// ...
case readyc <- rd:
if rd.SoftState != nil {
prevSoftSt = rd.SoftState
}
if len(rd.Entries) > 0 {
prevLastUnstablei = rd.Entries[len(rd.Entries)-1].Index
prevLastUnstablet = rd.Entries[len(rd.Entries)-1].Term
havePrevLastUnstablei = true
}
if !IsEmptyHardState(rd.HardState) {
prevHardSt = rd.HardState
}
if !IsEmptySnap(rd.Snapshot) {
prevSnapi = rd.Snapshot.Metadata.Index
}
r.msgs = nil
advancec = n.advancec
case //...
} // select
} // for
}
node.run delivers messages into its own readyc. Since raft.node is created inside raftNode, readyc is read on the raftNode side; the relevant goroutine lives in raftNode.start. Note that a Ready carries more than outbound messages: it also contains messages received from other peers (such as write requests a follower got from the leader, since etcd's KV store is ultimately managed by EtcdServer), entries the leader has seen acknowledged by a quorum and must now commit locally, pending outbound messages, and so on. All of these come back through Ready; here we keep only the message-sending part:
// etcd-2.3.7/etcdserver/raft.go
func (r *raftNode) start(s *EtcdServer) {
r.s = s
// ...
go func() {
var syncC <-chan time.Time
defer r.onStop()
islead := false
for {
select {
case <-r.ticker:
r.Tick()
case rd := <-r.Ready():
// ...
if islead {
// ultimately calls EtcdServer's send function to push the messages out
r.s.send(rd.Messages)
}
// ...
if !islead {
// ultimately calls EtcdServer's send function to push the messages out
r.s.send(rd.Messages)
}
raftDone <- struct{}{}
r.Advance()
case <-syncC:
r.s.sync(r.s.cfg.ReqTimeout())
case <-r.stopped:
return
}
}
}()
}
As the code above shows, the messages end up in EtcdServer's send function:
// etcd-2.3.7/etcdserver/server.go
func (s *EtcdServer) send(ms []raftpb.Message) {
// ...
s.r.transport.Send(ms)
}
EtcdServer.send in turn calls Transport.Send; after all, the objects used for peer-to-peer communication are all managed by the Transport. Transport.Send loops over the messages; for each one it looks up the associated peer object by the message's destination peer ID, then hands the message further down via peer.send:
// etcd-2.3.7/rafthttp/transport.go
func (t *Transport) Send(msgs []raftpb.Message) {
for _, m := range msgs {
if m.To == 0 {
// ignore intentionally dropped message
continue
}
to := types.ID(m.To)
t.mu.RLock()
p, ok := t.peers[to]
t.mu.RUnlock()
if ok {
if m.Type == raftpb.MsgApp {
t.ServerStats.SendAppendReq(m.Size())
}
p.send(m)
continue
}
// ...
}
}
peer.send looks like this:
// etcd-2.3.7/rafthttp/peer.go
func (p *peer) send(m raftpb.Message) {
// ...
// pick decides which streamWriter the message should go to, or routes it through the pipeline (e.g. for snapshots)
writec, name := p.pick(m)
select {
case writec <- m:
// ...
}
}
func (p *peer) pick(m raftpb.Message) (writec chan<- raftpb.Message, picked string) {
var ok bool
// Considering MsgSnap may have a big size, e.g., 1G, and will block
// stream for a long time, only use one of the N pipelines to send MsgSnap.
if isMsgSnap(m) {
return p.pipeline.msgc, pipelineMsg
} else if writec, ok = p.msgAppV2Writer.writec(); ok && isMsgApp(m) {
return writec, streamAppV2
} else if writec, ok = p.writer.writec(); ok {
return writec, streamMsg
}
return p.pipeline.msgc, pipelineMsg
}
streamWriter's writec() method ultimately returns the streamWriter.msgc channel. The streamWriter's sending goroutine loops reading msgc, takes each pending message, and sends it to the remote side through an encoder. (When streamHandler gets registered as the handler for peer requests is covered in the etcd 2.3.7 startup analysis.)
// etcd-2.3.7/rafthttp/stream.go
func (cw *streamWriter) run() {
var (
msgc chan raftpb.Message
heartbeatc <-chan time.Time
t streamType
enc encoder
flusher http.Flusher
batched int
)
tickc := time.Tick(ConnReadTimeout / 3)
for {
select {
// ...
case m := <-msgc:
start := time.Now()
err := enc.encode(m)
if err == nil {
if len(msgc) == 0 || batched > streamBufSize/2 {
flusher.Flush()
batched = 0
} else {
batched++
}
reportSentDuration(string(t), m, time.Since(start))
continue
}
reportSentFailure(string(t), m)
cw.status.deactivate(failureType{source: t.String(), action: "write"}, err.Error())
cw.close()
heartbeatc, msgc = nil, nil
cw.r.ReportUnreachable(m.To)
// the encoder is built on top of the raw TCP conn, which is delivered to the streamWriter goroutine over the connc channel; the connection itself is established in streamHandler.ServeHTTP
case conn := <-cw.connc:
cw.close()
t = conn.t
switch conn.t {
case streamTypeMsgAppV2:
enc = newMsgAppV2Encoder(conn.Writer, cw.fs)
case streamTypeMessage:
enc = &messageEncoder{w: conn.Writer}
default:
plog.Panicf("unhandled stream type %s", conn.t)
}
flusher = conn.Flusher
cw.mu.Lock()
cw.status.activate()
cw.closer = conn.Closer
cw.working = true
cw.mu.Unlock()
heartbeatc, msgc = tickc, cw.msgc
case <-cw.stopc:
cw.close()
close(cw.done)
return
}
	}
}
Receiving messages
As noted in the class overview above, streamReader establishes connections with peers and reads messages off them, so we start from the streamReader goroutine:
// etcd-2.3.7/rafthttp/stream.go
func (cr *streamReader) run() {
for {
t := cr.t
// dial the peer (TCP three-way handshake)
rc, err := cr.dial(t)
if err != nil {
// ...
} else {
cr.status.activate()
// the loop that actually performs the I/O reads, until the connection is closed
err := cr.decodeLoop(rc, t)
switch {
// all data is read out
case err == io.EOF:
// connection is closed by the remote
case isClosedConnectionError(err):
default:
cr.status.deactivate(failureType{source: t.String(), action: "read"}, err.Error())
}
}
// if the dial fails, keep retrying: especially right after startup it is hard to know which raft node comes up first, so retrying is essential
select {
// Wait 100ms to create a new stream, so it doesn't bring too much
// overhead when retry.
case <-time.After(100 * time.Millisecond):
case <-cr.stopc:
close(cr.done)
return
}
}
}
func (cr *streamReader) decodeLoop(rc io.ReadCloser, t streamType) error {
var dec decoder
cr.mu.Lock()
switch t {
// build a decoder on top of the TCP conn
case streamTypeMsgAppV2:
dec = newMsgAppV2Decoder(rc, cr.local, cr.remote)
case streamTypeMessage:
dec = &messageDecoder{r: rc}
default:
plog.Panicf("unhandled stream type %s", t)
}
cr.closer = rc
cr.mu.Unlock()
// loop decoding individual messages off the TCP stream
for {
m, err := dec.decode()
// ...
// proposals (write requests) are forwarded downstream via propc, everything else via recvc
recvc := cr.recvc
if m.Type == raftpb.MsgProp {
recvc = cr.propc
}
select {
case recvc <- m:
default:
if cr.status.isActive() {
plog.MergeWarningf("dropped internal raft message from %s since receiving buffer is full (overloaded network)", types.ID(m.From))
}
plog.Debugf("dropped %s from %s since receiving buffer is full", m.Type, types.ID(m.From))
}
}
}
streamReader's job is thus to decode messages off the TCP stream and pass them downstream via recvc or propc. As mentioned in the streamReader overview, these channels are the very same ones held by the peer. Receiving and processing the messages happens in startPeer, which also shows how the streamReader and streamWriter instances are created; here we only follow the message flow (the stream* construction is mostly parameter plumbing, which you can trace yourself if interested):
// etcd-2.3.7/rafthttp/peer.go
func startPeer(transport *Transport, urls types.URLs, local, to, cid types.ID, r Raft, fs *stats.FollowerStats, errorc chan error, v3demo bool) *peer {
status := newPeerStatus(to)
picker := newURLPicker(urls)
p := &peer{
id: to,
r: r,
v3demo: v3demo,
status: status,
picker: picker,
msgAppV2Writer: startStreamWriter(to, status, fs, r),
writer: startStreamWriter(to, status, fs, r),
pipeline: newPipeline(transport, picker, local, to, cid, status, fs, r, errorc),
snapSender: newSnapshotSender(transport, picker, local, to, cid, status, r, errorc),
sendc: make(chan raftpb.Message),
recvc: make(chan raftpb.Message, recvBufSize),
propc: make(chan raftpb.Message, maxPendingProposals),
stopc: make(chan struct{}),
}
ctx, cancel := context.WithCancel(context.Background())
p.cancel = cancel
// the two goroutines below read from recvc and propc respectively; whichever channel a message arrives on, it ends up in EtcdServer.Process
go func() {
for {
select {
case mm := <-p.recvc:
if err := r.Process(ctx, mm); err != nil {
plog.Warningf("failed to process raft message (%v)", err)
}
case <-p.stopc:
return
}
}
}()
// r.Process might block for processing proposal when there is no leader.
// Thus propc must be put into a separate routine with recvc to avoid blocking
// processing other raft messages.
go func() {
for {
select {
case mm := <-p.propc:
if err := r.Process(ctx, mm); err != nil {
plog.Warningf("failed to process raft message (%v)", err)
}
case <-p.stopc:
return
}
}
}()
p.msgAppV2Reader = startStreamReader(transport, picker, streamTypeMsgAppV2, local, to, cid, status, p.recvc, p.propc, errorc)
p.msgAppReader = startStreamReader(transport, picker, streamTypeMessage, local, to, cid, status, p.recvc, p.propc, errorc)
return p
}
The relevant part of EtcdServer's Process method:
// etcd-2.3.7/etcdserver/server.go
func (s *EtcdServer) Process(ctx context.Context, m raftpb.Message) error {
// ...
return s.r.Step(ctx, m)
}
Process passes the message on to raftNode's Step method. raftNode does not implement Step itself, but it embeds the raft.Node interface, which ultimately points to a raft.node, so we can jump straight to raft.node's Step:
// etcd-2.3.7/raft/node.go
func (n *node) Step(ctx context.Context, m pb.Message) error {
// ignore unexpected local messages receiving over network
if IsLocalMsg(m) {
// TODO: return an error?
return nil
}
return n.step(ctx, m)
}
func (n *node) step(ctx context.Context, m pb.Message) error {
ch := n.recvc
if m.Type == pb.MsgProp {
ch = n.propc
}
select {
case ch <- m:
return nil
case <-ctx.Done():
return ctx.Err()
case <-n.done:
return ErrStopped
}
}
So incoming messages are pushed into the node's recvc and propc (not to be confused with the peer's recvc and propc, which are different channels). Once in the channel, they are picked up in raft.node's run method:
func (n *node) run(r *raft) {
// ...
for {
// ...
select {
// ...
case m := <-propc:
m.From = r.id
r.Step(m)
case m := <-n.recvc:
// filter out response message from unknown From.
if _, ok := r.prs[m.From]; ok || !IsResponseMsg(m) {
r.Step(m) // raft never returns an error
}
// ...
}
	}
}
Whether a message arrives via propc or recvc, it ultimately lands in raft.Step, i.e. it is handed to the raft algorithm, which dispatches it to the step function variable inside raft.step. That variable points to a different function depending on the node's current role in the cluster: stepLeader, stepCandidate, or stepFollower. After one of these three handles the message, any responses go back out through the send path described above.