字節跳動 Go RPC 框架 KiteX 性能優化實踐

{"type":"doc","content":[{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本文選自“字節跳動基礎架構實踐”系列文章。“字節跳動基礎架構實踐”系列文章是由字節跳動基礎架構部門各技術團隊及專家傾力打造的技術乾貨內容,和大家分享團隊在基礎架構發展和演進過程中的實踐經驗與教訓,與各位技術同學一起交流成長。KiteX 自 2020.04 正式發佈以來,公司內部服務數量 8k+,QPS 過億。經過持續迭代,KiteX 在吞吐和延遲表現上都取得了顯著收益。本文將簡單分享一些較有成效的優化方向,希望爲大家提供參考。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"前言"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"KiteX 是字節跳動框架組研發的下一代高性能、強可擴展性的 Go RPC 框架。除具備豐富的服務治理特性外,相比其他框架還有以下特點:集成了自研的網絡庫 Netpoll;支持多消息協議(Thrift、Protobuf)和多交互方式(Ping-Pong、Oneway、 Streaming);提供了更加靈活可擴展的代碼生成器。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前公司內主要業務線都已經大範圍使用 KiteX,據統計當前接入服務數量多達 8k。KiteX 推出後,我們一直在不斷地優化性能,本文將分享我們在 Netpoll 和 序列化方面的優化工作。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"自研網絡庫 Netpoll 優化"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"自研的基於 epoll 的網絡庫 —— Netpoll,在性能方面有了較爲顯著的優化。測試數據表明,當前版本(2020.12) 相比於"},{"type":"link","attrs":{"href":"http:\/\/mp.weixin.qq.com\/s?__biz=MzI1MzYzMjE0MQ==&mid=2247485756&idx=1&sn=4d2712e4bfb9be27a790fa15159a7be1&chksm=e9d0c2dedea74bc8179af39888a5b2b99266587cad32744ad11092b91ec2e2babc74e69090e6&scene=21#wechat_redirect","title":null,"type":null},"content":[{"type":"text","text":"上次分享"}],"marks":[{"type":"underline"}]},{"type":"text","text":"時(2020.05),吞吐能力 "},{"type":"text","marks":[{"type":"strong"}],"text":"↑30%"},{"type":"text","text":",延遲 AVG "},{"type":"text","marks":[{"type":"strong"}],"text":"↓25%"},{"type":"text","text":",TP99 "},{"type":"text","marks":[{"type":"strong"}],"text":"↓67%"},{"type":"text","text":",性能已遠超官方 net 庫。以下,我們將分享兩點顯著提升性能的方案。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"epoll_wait 調度延遲優化"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Netpoll 在剛發佈時,遇到了延遲 AVG 較低,但 TP99 較高的問題。經過認真研究 epoll_wait,我們發現結合 polling 和 event trigger 兩種模式,並優化調度策略,可以顯著降低延遲。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"首先我們來看 Go 官方提供的 syscall.EpollWait 方法:"}]},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error)\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"這裏共提供 3 個參數,分別表示 epoll 的 fd、回調事件、等待時間,其中只有 msec 是動態可調的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通常情況下,我們主動調用 EpollWait 都會設置 msec=-1,即無限等待事件到來。事實上不少開源網絡庫也是這麼做的。但是我們研究發現,msec=-1 並不是最優解。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"epoll_wait 內核源碼(如下) 表明,msec=-1 比 msec=0 增加了 fetch_events 檢查,因此耗時更長。"}]},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,\n                   int maxevents, long timeout)\n{\n    ...\n    if (timeout > 0) {\n       ...\n    } else if (timeout == 0) {\n        ...\n        goto send_events;\n    }\n\nfetch_events:\n    ...\n    if (eavail)\n        goto send_events;\n\nsend_events:\n    ...\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Benchmark 表明,在有事件觸發的情況下,msec=0 比 msec=-1 調用要快 18% 左右,因此在頻繁事件觸發場景下,使用 msec=0 調用明顯是更優的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/33\/33cbf0829d687d5425998cfd55505cc7.png","alt":"圖片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"而在無事件觸發的場景下,使用 msec=0 顯然會造成無限輪詢,空耗大量資源。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"綜合考慮後,我們更希望在有事件觸發時,使用 msec=0 調用,而在無事件時,使用 msec=-1 來減少輪詢開銷。僞代碼如下:"}]},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"var msec = -1\nfor {\n   n, err = syscall.EpollWait(epfd, events, msec)\n   if n <= 0 {\n      msec = -1\n      continue\n   }\n   msec = 0\n   ...\n}\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"那麼這樣就可以了嗎?事實證明優化效果並不明顯。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們再做思考:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"msec=0 僅單次調用耗時減少 50ns,影響太小,如果想要進一步優化,必須要在調度邏輯上做出調整。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"進一步思考:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上述僞代碼中,當無事件觸發,調整 msec=-1 時,直接 continue 會立即再次執行 EpollWait,而由於無事件,msec=-1,當前 goroutine 會 block 並被 P 切換。但是被動切換效率較低,如果我們在 continue 前主動爲 P 切換 goroutine,則可以節約時間。因此我們將上述僞代碼改爲如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"var msec = -1\nfor {\n   n, err = syscall.EpollWait(epfd, events, msec)\n   if n <= 0 {\n      msec = -1\n      runtime.Gosched()\n      continue\n   }\n   msec = 0\n   ...\n}\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"測試表明,調整代碼後,吞吐量 "},{"type":"text","marks":[{"type":"strong"}],"text":"↑12%"},{"type":"text","text":",TP99 "},{"type":"text","marks":[{"type":"strong"}],"text":"↓64%"},{"type":"text","text":",獲得了顯著的延遲收益。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"合理利用 unsafe.Pointer"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"繼續研究 epoll_wait,我們發現 Go 官方對外提供的 syscall.EpollWait 和 runtime 自用的 epollwait 是不同的版本,即兩者使用了不同的 EpollEvent。以下我們展示兩者的區別:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"\/\/ @syscall\ntype EpollEvent struct {\n   Events uint32\n   Fd     int32\n   Pad    int32\n}\n\/\/ @runtime\ntype epollevent struct {\n   events uint32\n   data   [8]byte \/\/ unaligned uintptr\n}\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們看到,runtime 使用的 epollevent 是系統層 epoll 定義的原始結構;而對外版本則對其做了封裝,將 epoll_data(epollevent.data) 拆分爲固定的兩字段:Fd 和 Pad。那麼 runtime 又是如何使用的呢?在源碼裏我們看到這樣的邏輯:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"*(**pollDesc)(unsafe.Pointer(&ev.data)) = pd\n\npd := *(**pollDesc)(unsafe.Pointer(&ev.data))\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"顯然,runtime 使用 epoll_data(&ev.data) 直接存儲了 fd 對應結構體(pollDesc)的指針,這樣在事件觸發時,可以直接找到結構體對象,並執行相應邏輯。而對外版本則由於只能獲得封裝後的 Fd 參數,因此需要引入額外的 Map 來增刪改查結構體對象,這樣性能肯定相差很多。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"所以我們果斷拋棄了 syscall.EpollWait,轉而仿照 runtime 自行設計了 EpollWait 調用,同樣採用 unsafe.Pointer 存取結構體對象。測試表明,該方案下 吞吐量 "},{"type":"text","marks":[{"type":"strong"}],"text":"↑10%"},{"type":"text","text":",TP99 "},{"type":"text","marks":[{"type":"strong"}],"text":"↓10%"},{"type":"text","text":",獲得了較爲明顯的收益。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Thrift 序列化\/反序列化優化"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"序列化是指把數據結構或對象轉換成字節序列的過程,反序列化則是相反的過程。RPC 在通信時需要約定好序列化協議,client 在發送請求前進行序列化,字節序列通過網絡傳輸到 server,server 再反序列進行邏輯處理,完成一次 RPC 請求。Thrift 支持 Binary、Compact 和 JSON 序列化協議。目前公司內部使用的基本都是 Binary,這裏只介紹 Binary 協議。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Binary 採用 TLV 編碼實現,即每個字段都由 TLV 結構來描述,TLV 意爲:Type 類型, Lenght 長度,Value 值,Value 也可以是個 TLV 結構,其中 Type 和 Length 的長度固定,Value 的長度則由 Length 的值決定。TLV 編碼結構簡單清晰,並且擴展性較好,但是由於增加了 Type 和 Length,有額外的內存開銷,特別是在大部分字段都是基本類型的情況下有不小的空間浪費。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"序列化和反序列的性能優化從大的方面來看可以從空間和時間兩個維度進行優化。從兼容已有的 Binary 協議來看,空間上的優化似乎不太可行,只能從時間維度進行優化,包括:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"減少內存操作次數,包括內存分配和拷貝,儘量預分配內存,減少不必要的開銷;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"減少函數調用次數,比如可調整代碼結構和 inline 等手段進行優化;"}]}]}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"調研"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"根據 go_serialization_benchmarks 的壓測數據,我們找到了一些性能卓越的序列化方案進行調研,希望能夠對我們的優化工作有所啓發。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通過對 protobuf、gogoprotobuf 和 Cap'n Proto 的分析,我們得出以下結論:"}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"網絡傳輸中出於 IO 的考慮,都會盡量壓縮傳輸數據,protobuf 採用了 Varint 編碼在大部分場景中都有着不錯的壓縮效果;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"gogoprotobuf 採用預計算方式,在序列化時能夠減少內存分配次數,進而減少了內存分配帶來的系統調用、鎖和 GC 等代價;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"Cap'n Proto 直接操作 buffer,也是減少了內存分配和內存拷貝(少了中間的數據結構),並且在 struct pointer 的設計中把固定長度類型數據和非固定長度類型數據分開處理,針對固定長度類型可以快速處理;"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"從兼容性考慮,不可能改變現有的 TLV 編碼格式,因此數據壓縮不太現實,但是 2 和 3 對我們的優化工作是有啓發的,事實上我們也是採取了類似的思路。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"思路"}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"減少內存操作"}]},{"type":"heading","attrs":{"align":null,"level":5},"content":[{"type":"text","text":"buffer 管理"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"無論是序列化還是反序列化,都是從一塊內存拷貝數據到另一塊內存,這就涉及到內存分配和內存拷貝操作,儘量避免內存操作可以減少不必要的系統調用、鎖和 GC 等開銷。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"事實上 KiteX 已經提供了 LinkBuffer 用於 buffer 的管理,LinkBuffer 設計上採用鏈式結構,由多個 block 組成,其中 block 是大小固定的內存塊,構建對象池維護空閒 block,由此複用 block,減少內存佔用和 GC。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"剛開始我們簡單地採用 sync.Pool 來複用 netpoll 的 LinkBufferNode,但是這樣仍然無法解決對於大包場景下的內存複用(大的 Node 不能回收,否則會導致內存泄漏)。目前我們改成了維護一組 sync.Pool,每組中的 buffer size 都不同,新建 block 時根據最接近所需 size 的 pool 中去獲取,這樣可以儘可能複用內存,從測試來看內存分配和 GC 優化效果明顯。"}]},{"type":"heading","attrs":{"align":null,"level":5},"content":[{"type":"text","text":"string \/ binary 零拷貝"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於有一些業務,比如視頻相關的業務,會在請求或者返回中有一個很大的 Binary 二進制數據代表了處理後的視頻或者圖片數據,同時會有一些業務會返回很大的 String(如全文信息等)。這種場景下,我們通過火焰圖看到的熱點都在數據的 copy 上,那我們就想了,我們是否可以減少這種拷貝呢?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"答案是肯定的。既然我們底層使用的 Buffer 是個鏈表,那麼就可以很容易地在鏈表中間插入一個節點。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/86\/869319cc08d1d55bd3ad61a4f31356d4.png","alt":"圖片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們就採用了類似的思想,當序列化的過程中遇到了 string 或者 binary 的時候, 將這個節點的 buffer 分成兩段,在中間原地插入用戶的 string \/ binary 對應的 buffer,這樣可以避免大的 string \/ binary 的拷貝了。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"這裏再介紹一下,如果我們直接用 []byte(string) 去轉換一個 string 到 []byte 的話實際上是會發生一次拷貝的,原因是 Go 的設計中 string 是 immutable 的但是 []byte 是 mutable 的,所以這麼轉換的時候會拷貝一次;如果要不拷貝轉換的話,就需要用到 unsafe 了:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"func StringToSliceByte(s string) []byte {\n   l := len(s)\n   return *(*[]byte)(unsafe.Pointer(&reflect.SliceHeader{\n      Data: (*(*reflect.StringHeader)(unsafe.Pointer(&s))).Data,\n      Len:  l,\n      Cap:  l,\n   }))\n}\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"這段代碼的意思是,先把 string 的地址拿到,再拼裝上一個 slice byte 的 header,這樣就可以不拷貝數據而將 string 轉換成 []byte 了,不過要注意這樣生成的 []byte 不可寫,否則行爲未定義。"}]},{"type":"heading","attrs":{"align":null,"level":5},"content":[{"type":"text","text":"預計算"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"線上存在某些服務有大包傳輸的場景,這種場景下會引入不小的序列化 \/ 反序列化開銷。一般大包都是容器類型的大小非常大導致的,如果能夠提前計算出 buffer,一些 O(n) 的操作就能降到 O(1),減少了函數調用次數,在大包場景下也大量減少了內存分配的次數,帶來的收益是可觀的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"基本類型"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果容器元素爲基本類型(bool, byte, i16, i32, i64, double)的話,由於基本類型大小固定,在序列化時是可以提前計算出總的大小,並且一次性分配足夠的 buffer,O(n) 的 malloc 操作次數可以降到 O(1),從而大量減少了 malloc 的次數,同理在反序列化時可以減少 next 的操作次數。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"struct 字段重排"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上面的優化只能針對容器元素類型爲基本類型的有效,那麼對於元素類型爲 struct 的是否也能優化呢?答案是肯定的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"沿用上面的思路,假如 struct 中如果存在基本類型的 field,也可以預先計算出這些 field 的大小,在序列化時爲這些 field 提前分配 buffer,寫的時候也把這些 field 順序統一放到前面寫,這樣也能在一定程度上減少 malloc 的次數。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"一次性計算"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上面提到的是基本類型的優化,如果在序列化時,先遍歷一遍 request 所有 field,便可以計算得到整個 request 的大小,提前分配好 buffer,在序列化和反序列時直接操作 buffer,這樣對於非基本類型也能有優化效果。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"定義新的 codec 接口:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"type thriftMsgFastCodec interface {\n   BLength() int \/\/ count length of whole req\/resp\n   FastWrite(buf []byte) int\n   FastRead(buf []byte) (int, error)\n}\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在 Marshal 和 Unmarshal 接口中做相應改造:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"func (c thriftCodec) Marshal(ctx context.Context, message remote.Message, out remote.ByteBuffer) error {\n    ...\n    if msg, ok := data.(thriftMsgFastCodec); ok {\n       msgBeginLen := bthrift.Binary.MessageBeginLength(methodName, thrift.TMessageType(msgType), int32(seqID))\n       msgEndLen := bthrift.Binary.MessageEndLength()\n       buf, err := out.Malloc(msgBeginLen + msg.BLength() + msgEndLen)\/\/ malloc once\n       if err != nil {\n          return perrors.NewProtocolErrorWithMsg(fmt.Sprintf(\"thrift marshal, Malloc failed: %s\", err.Error()))\n       }\n       offset := bthrift.Binary.WriteMessageBegin(buf, methodName, thrift.TMessageType(msgType), int32(seqID))\n       offset += msg.FastWrite(buf[offset:])\n       bthrift.Binary.WriteMessageEnd(buf[offset:])\n       return nil\n    }\n    ...\n}\n\nfunc (c thriftCodec) Unmarshal(ctx context.Context, message remote.Message, in remote.ByteBuffer) error {\n    ...\n    data := message.Data()\nif msg, ok := data.(thriftMsgFastCodec); ok && message.PayloadLen() != 0 {\n   msgBeginLen := bthrift.Binary.MessageBeginLength(methodName, msgType, seqID)\n   buf, err := tProt.next(message.PayloadLen() - msgBeginLen - bthrift.Binary.MessageEndLength()) \/\/ next once\n   if err != nil {\n      return remote.NewTransError(remote.PROTOCOL_ERROR, err.Error())\n   }\n   _, err = msg.FastRead(buf)\n   if err != nil {\n      return remote.NewTransError(remote.PROTOCOL_ERROR, err.Error())\n   }\n   err = tProt.ReadMessageEnd()\n   if err != nil {\n      return remote.NewTransError(remote.PROTOCOL_ERROR, err.Error())\n   }\n   tProt.Recycle()\n   return err\n   }\n   ...\n}\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"生成代碼中也做相應改造:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"func (p *Demo) BLength() int {\n        l := 0\n        l += bthrift.Binary.StructBeginLength(\"Demo\")\n        if p != nil {\n                l += p.field1Length()\n                l += p.field2Length()\n                l += p.field3Length()\n    ...\n        }\n        l += bthrift.Binary.FieldStopLength()\n        l += bthrift.Binary.StructEndLength()\n        return l\n}\n\nfunc (p *Demo) FastWrite(buf []byte) int {\n        offset := 0\n        offset += bthrift.Binary.WriteStructBegin(buf[offset:], \"Demo\")\n        if p != nil {\n                offset += p.fastWriteField2(buf[offset:])\n                offset += p.fastWriteField4(buf[offset:])\n                offset += p.fastWriteField1(buf[offset:])\n                offset += p.fastWriteField3(buf[offset:])\n        }\n        offset += bthrift.Binary.WriteFieldStop(buf[offset:])\n        offset += bthrift.Binary.WriteStructEnd(buf[offset:])\n        return offset\n}\n"}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"使用 SIMD 優化 Thrift 編碼"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"公司內廣泛使用 list 類型來承載 ID 列表,並且 list 的編碼方式十分符合向量化的規律,於是我們用了 SIMD 來優化 list 的編碼過程。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們使用了 avx2,優化後的結果比較顯著,在大數據量下針對 i64 可以提升 6 倍性能,針對 i32 可以提升 12 倍性能;在小數據量下提升更明顯,針對 i64 可以提升 10 倍,針對 i32 可以提升 20 倍。"}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"減少函數調用"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"inline"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"inline 是在編譯期間將一個函數調用原地展開,替換成這個函數的實現,它可以減少函數調用的開銷以提高程序的性能。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在 Go 中並不是所有函數都能 inline,使用參數"},{"type":"codeinline","content":[{"type":"text","text":"-gflags=\"-m\""}]},{"type":"text","text":"運行進程,可顯示被 inline 的函數。以下幾種情況無法內聯:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"包含循環的函數;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"包含以下內容的函數:閉包調用,select,for,defer,go 關鍵字創建的協程;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"超過一定長度的函數,默認情況下當解析 AST 時,Go 申請了 80 個節點作爲內聯的預算。每個節點都會消耗一個預算。比如,"},{"type":"codeinline","content":[{"type":"text","text":"a = a + 1"}]},{"type":"text","text":" 這行代碼包含了 5 個節點:AS, NAME, ADD, NAME, LITERAL。當一個函數的開銷超過了這個預算,就無法內聯。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"編譯時通過指定參數"},{"type":"codeinline","content":[{"type":"text","text":"-l"}]},{"type":"text","text":"可以指定編譯器對代碼內聯的強度(go 1.9+),不過這裏不推薦大家使用,在我們的測試場景下是 buggy 的,無法正常運行:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"\/\/ The debug['l'] flag controls the aggressiveness. Note that main() swaps level 0 and 1, making 1 the default and -l disable. Additional levels (beyond -l) may be buggy and are not supported.\n\/\/      0: disabled\n\/\/      1: 80-nodes leaf functions, oneliners, panic, lazy typechecking (default)\n\/\/      2: (unassigned)\n\/\/      3: (unassigned)\n\/\/      4: allow non-leaf functions\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"內聯雖然可以減少函數調用的開銷,但是也可能因爲存在重複代碼,從而導致 CPU 緩存命中率降低,所以並不能盲目追求過度的內聯,需要結合 profile 結果來具體分析。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"go test -gcflags='-m=2' -v -test.run TestNewCodec 2>&1 | grep \"function too complex\" | wc -l\n48\n\ngo test -gcflags='-m=2 -l=4' -v -test.run TestNewCodec 2>&1 | grep \"function too complex\" | wc -l\n25\n\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"從上面的輸出結果可以看出,加強內聯程度確實減少了一些\"function too complex\",看下 benchmark 結果:"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/2b\/2b516d2da0bc9563a563f64520bafae9.png","alt":"圖片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上面開啓最高程度的內聯強度,確實消除了不少因爲“function too complex”帶來無法內聯的函數,但是壓測結果顯示收益不太明顯。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"測試結果"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們構建了基準測試來對比優化前後的性能,下面是測試結果。"}]},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"環境:Go 1.13.5 darwin\/amd64 on a 2.5 GHz Intel Core i7 16GB"}]}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"小包"}]},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"data size: 20KB"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/29\/29557f24d77f522abf1739086581eb33.png","alt":"圖片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"大包"}]},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"data size: 6MB"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/43\/433aecdab5a8b23de7eef8abf4addd43.png","alt":"圖片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"無拷貝序列化"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在一些 request 和 response 數據較大的服務中,序列化和反序列化的代價較高,有兩種優化思路:"}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"如前文所述進行序列化和反序列化的優化"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"以無拷貝序列化的方式進行調用"}]}]}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"調研"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通過無拷貝序列化進行 RPC 調用,最早出自 Kenton Varda 的 Cap'n Proto 項目,Cap'n Proto 提供了一套數據交換格式和對應的編解碼庫。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Cap'n Proto 本質上是開闢一個 bytes slice 作爲 buffer ,所有對數據結構的讀寫操作都是直接讀寫 buffer,讀寫完成後,在頭部添加一些 buffer 的信息就可以直接發送,對端收到後即可讀取,因爲沒有 Go 語言結構體作爲中間存儲,所有無需序列化這個步驟,反序列化亦然。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"簡單總結下 Cap'n Proto 的特點:"}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"所有數據的讀寫都是在一段連續內存中"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"將序列化操作前置,在數據 Get\/Set 的同時進行編解碼"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"在數據交換格式中,通過 pointer(數據存儲位置的 offset)機制,使得數據可以存儲在連續內存的任意位置,進而使得結構體中的數據可以以任意順序讀寫"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"對於結構體的固定大小字段,通過重新排列,使得這些字段存儲在一塊連續內存中"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":5,"align":null,"origin":null},"content":[{"type":"text","text":"對於結構體的不定大小字段(如 list),則通過一個固定大小的 pointer 來表示,pointer 中存儲了包括數據位置在內的一些信息。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"首先 Cap'n Proto 沒有 Go 語言結構體作爲中間載體,得以減少一次拷貝,然後 Cap'n Proto 是在一段連續內存上進行操作,編碼數據的讀寫可以一次完成,因爲這兩個原因,使得 Cap' Proto 的性能表現優秀。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下面是相同數據結構下 Thrift 和 Cap'n Proto 的 Benchmark,考慮到 Cap'n Proto 是將編解碼操作前置了,所以對比的是包括數據初始化在內的完整過程,即結構體數據初始化+(序列化)+寫入 buffer +從 buffer 讀出+(反序列化)+從結構體讀出數據。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"struct MyTest {\n 1: i64 Num,\n 2: Ano Ano,\n 3: list Nums, \/\/ 長度131072 大小1MB\n}\n\nstruct Ano {\n 1: i64 Num,\n}\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/c0\/c0947afb5cc956d86ad6c68575007ceb.png","alt":"圖片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"(反序列化)+讀出數據,視包大小,Cap'n Proto 性能大約是 Thrift 的 8-9 倍。寫入數據+(序列化),視包大小,Cap'n Proto 性能大約是 Thrift 的 2-8 倍。整體性能 Cap' Proto 性能大約是 Thrift 的 4-8 倍。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"前面說了 Cap'n Proto 的優勢,下面總結一下 Cap'n Proto 存在的一些問題:"}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"Cap'n Proto 的連續內存存儲這一特性帶來的一個問題:當對不定大小數據進行 resize ,且需要的空間大於原有空間時,只能在後面重新分配一塊空間,導致原來數據的空間成爲了一個無法去掉的 hole 。這個問題隨着調用鏈路的不斷 resize 會越來越嚴重,要解決只能在整個鏈路上嚴格約束:儘量避免對不定大小字段的 resize ,當不得不 resize 的時候,重新構建一個結構體並對數據進行深拷貝。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"Cap'n Proto 因爲沒有 Go 語言結構體作爲中間載體,使得所有的字段都只能通過接口進行讀寫,用戶體驗較差。"}]}]}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"Thrift 協議兼容的無拷貝序列化"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Cap'n Proto 爲了更好更高效地支持無拷貝序列化,使用了一套自研的編解碼格式,但在現在 Thrift 和 ProtoBuf 佔主流的環境中難以鋪開。爲了能在協議兼容的同時獲得無拷貝序列化的性能,我們開始了 Thrift 協議兼容的無拷貝序列化的探索。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Cap'n Proto 作爲無拷貝序列化的標杆,那麼我們就看看 Cap'n Proto 上的優化能否應用到 Thrift 上:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"自然是無拷貝序列化的核心,不使用 Go 語言結構體作爲中間載體,減少一次拷貝。此優化點是協議無關的,能夠適用於任何已有的協議,自然也能和 Thrift 協議兼容,但是從 Cap'n Proto 的使用上來看,用戶體驗還需要仔細打磨一下。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"Cap'n Proto 是在一段連續內存上進行操作,編碼數據的讀寫可以一次完成。Cap'n Proto 得以在連續內存上操作的原因:有 pointer 機制,數據可以存儲在任意位置,允許字段可以以任意順序寫入而不影響解碼。但是一方面,在連續內存上容易因爲誤操作,導致在 resize 的時候留下 hole,另一方面,Thrift 沒有類似於 pointer 的機制,故而對數據佈局有着更嚴格的要求。這裏有兩個思路:"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"堅持在連續內存上進行操作,並對用戶使用提出嚴格要求:1. resize 操作必須重新構建數據結構 2. 當存在結構體嵌套時,對字段寫入順序有着嚴格要求(可以想象爲把一個存在嵌套的結構體從外往裏展開,寫入時需要按展開順序寫入),且因爲 Binary 等 TLV 編碼的關係,在每個嵌套開始寫入時,需要用戶主動聲明(如 StartWriteFieldX)。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"不完全在連續內存上操作,局部內存連續,可變字段則單獨分配一塊內存,既然內存不是完全連續的,自然也無法做到一次寫操作便完成輸出。爲了儘可能接近一次寫完數據的性能,我們採取了一種鏈式 buffer 的方案,一方面當可變字段 resize 時只需替換鏈式 buffer 的一個節點,無需像 Cap'n Proto 一樣重新構建結構體,另一方面在需要輸出時無需像 Thrift 一樣需要感知實際的結構,只要把整個鏈路上的 buffer 寫入即可。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"先總結下目前確定的兩個點:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"1. 不使用 Go 語言結構體作爲中間載體,通過接口直接操作底層內存,在 Get\/Set 時完成編解碼 "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2. 通過鏈式 buffer 存儲數據"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"然後讓我們看下目前還有待解決的問題:"}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"不使用 Go 語言結構體後帶來的用戶體驗劣化"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"解決方案:改善 Get\/Set 接口的使用體驗,儘可能做到和 Go 語言結構體同等的易用"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"Cap'n Proto 的 Binary Format 是針對無拷貝序列化場景專門設計的,雖然每次 Get 時都會進行一次解碼,但是解碼代價非常小。而 Thrift 的協議(以 Binary 爲例),沒有類似於 pointer 的機制,當存在多個不定大小字段或者存在嵌套時,必須順序解析而無法直接通過計算偏移拿到字段數據所在的位置,而每次 Get 都進行順序解析的代價過於高昂。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"解決方案:我們在表示結構體的時候,除了記錄結構體的 buffer 節點,還加了一個索引,裏面記錄了每個不定大小字段開始的 buffer 節點的指針。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下面是目前的無拷貝序列化方案與 FastRead\/Write,在 4 核下的極限性能對比測試:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/43\/4318fa9369920b8aee50a19bd70e2de5.png","alt":"圖片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"測試結果概述:"}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"小包場景,無序列化性能表現較差,約爲 FastWrite\/FastRead 的 85%。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"大包場景,無序列化性能表現較好,4K 以上的包較 FastWrite\/FastRead 提升 7%-40%。"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"後記"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"希望以上的分享能夠對社區有所幫助。同時,我們也在嘗試 share memory-based IPC、io_uring、tcp zero copy 、RDMA 等,更好地提升 KiteX 性能;重點優化同機、同容器的通訊場景。歡迎各位感興趣的同學加入我們,共同建設 Go 語言生態!"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"參考資料"}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"https:\/\/github.com\/alecthomas\/go_serialization_benchmarks"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"https:\/\/capnproto.org\/"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"https:\/\/software.intel.com\/content\/www\/us\/en\/develop\/documentation\/cpp-compiler-developer-guide-and-reference\/top\/compiler-reference\/intrinsics\/intrinsics-for-intel-advanced-vector-extensions-2\/intrinsics-for-shuffle-operations-1\/mm256-shuffle-epi8.html"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"本文轉載自:字節跳動技術團隊(ID:toutiaotechblog)"}]},{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"原文鏈接:"},{"type":"link","attrs":{"href":"https:\/\/mp.weixin.qq.com\/s\/Xoaoiotl7ZQoG2iXo9_DWg","title":"xxx","type":null},"content":[{"type":"text","text":"字節跳動 Go RPC 框架 KiteX 性能優化實踐"}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章