MongoDB kernel source, performance tuning, and best operational practice series — a tens-of-fold performance optimization of a million-concurrency MongoDB cluster (part 1)

# Overview

This article describes a real online optimization case for an OPPO Internet MongoDB cluster holding tens of billions of documents under million-level concurrency. It won first prize in the MongoDB Chinese community's 2019 awards.

# About the author

A former Didi Chuxing technical expert, the author now leads OPPO's document database (MongoDB) team, responsible for development and operations of MongoDB deployments with peak TPS in the tens of millions and data volumes in the tens of trillions of documents, with a long-standing focus on distributed caches, high-performance server software, databases, and middleware. The series "MongoDB kernel source design, performance optimization, and best operational practice" will continue. GitHub: [https://github.com/y123456yz](https://github.com/y123456yz)

# 1. Background

Peak TPS on one of our online clusters exceeds 1,000,000/s (almost entirely writes; read traffic is very low) and is close to the cluster's ceiling, while average latency exceeds 100 ms. As read/write traffic grows further, latency jitter seriously affects business availability. The cluster uses MongoDB's native sharded architecture; after a shard key was added and sharding enabled, data is evenly balanced across the shards. Per-node traffic monitoring is shown below:

![](https://static001.geekbang.org/infoq/30/30383049907ec5ff2683cc0b80e461b0.png)

![](https://static001.geekbang.org/infoq/ef/ef5f860a24bc0d350867c693c9b43e7e.png)
The charts show heavy traffic, with peaks above 1,200,000/s. Expired-document delete traffic is not included in that total: TTL deletes are triggered on the primary but do not show up there, appearing only when secondaries pull the oplog. Counting the primary's delete traffic, total TPS exceeds 1,500,000/s.

# 2. Software optimization

Without adding any server resources, we first made the following software-level optimizations and achieved a solid several-fold performance improvement:

1. Business-level optimization
2. MongoDB configuration optimization
3. Storage engine optimization

## 2.1 Business-level optimization

The cluster holds tens of billions of documents, each kept for three days by default; the business randomly hashes each document's expiry to an arbitrary point in time three days later. Because there are so many documents, daytime monitoring showed the secondaries constantly running large numbers of delete operations, at some points even exceeding the business's own read/write traffic. We therefore moved the expiry deletes into the night. The TTL index is created as follows:

```js
db.collection.createIndex( { "expireAt": 1 }, { expireAfterSeconds: 0 } )
```

With expireAfterSeconds=0, a document in the collection expires exactly at the point in time stored in its expireAt field. For example:
```js
db.collection.insert( {
   // this document will expire and be deleted at 01:00 at night
   "expireAt": new Date('July 22, 2019 01:00:00'),
   "logEvent": 2,
   "logMessage": "Success!"
} )
```

By randomly hashing expireAt across arbitrary points in the early-morning hours three days later, we avoid the flood of deletes that the TTL index would otherwise trigger during the daytime peak, lowering peak cluster load and ultimately reducing average business latency and jitter.
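The randomized scattering of expireAt can be sketched as follows. This is a hypothetical helper, not the article's production code; the 01:00–05:00 deletion window is an assumption for illustration:

```javascript
// Scatter each document's expiry to a random moment between 01:00 and
// 05:00 (local time) on the night three days from now, so that TTL
// deletes run off-peak instead of during daytime traffic.
function randomNightExpiry(now = new Date()) {
  const expiry = new Date(now.getTime());
  expiry.setDate(expiry.getDate() + 3); // three-day retention
  expiry.setHours(1, 0, 0, 0);          // deletion window opens at 01:00
  const windowMs = 4 * 60 * 60 * 1000;  // 01:00-05:00 window
  return new Date(expiry.getTime() + Math.floor(Math.random() * windowMs));
}

// In the mongo shell, documents would then be inserted with this field:
// db.collection.insert({ "expireAt": randomNightExpiry(), "logEvent": 2 })
```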
**TTL delete tip 1: what expireAfterSeconds means**

**1.** With expireAfterSeconds: 0, a document expires at the absolute time stored in expireAt — here, Dec 22 at 02:01:

```js
db.collection.createIndex( { "expireAt": 1 }, { expireAfterSeconds: 0 } )
db.log_events.insert( { "expireAt": new Date('Dec 22, 2019 02:01:00'), "logEvent": 2, "logMessage": "Success!" } )
```

**2.** With a non-zero expireAfterSeconds, a document expires expireAfterSeconds seconds after the time stored in the indexed date field — here, 60 seconds after the insertion time:

```js
db.log_events.insert( { "createdAt": new Date(), "logEvent": 2, "logMessage": "Success!" } )
db.log_events.createIndex( { "createdAt": 1 }, { expireAfterSeconds: 60 } )
```

**TTL delete tip 2: why does mongostat show delete operations only on secondaries, never on the primary?**

TTL expiry is triggered only on the primary, which deletes the documents by calling the wiredtiger storage engine interface directly rather than going through the normal client connection path, so no delete statistics appear on the primary.

After each TTL delete, the primary generates a corresponding delete entry in the oplog; secondaries pull the oplog and replay it as a simulated client, so data deleted on the primary is also deleted on the secondaries, guaranteeing eventual consistency. Because this replay goes through the normal client path, the delete count is recorded there.

Official reference: [https://docs.mongodb.com/manual/tutorial/expire-data/](https://docs.mongodb.com/manual/tutorial/expire-data/)
## 2.2 MongoDB configuration optimization (network IO multiplexing; separating network IO from disk IO)

Because the cluster's TPS is high and large push campaigns land on the hour, concurrency spikes even higher at those times. MongoDB's default one-thread-per-request model puts the system under severe load in that situation; the default configuration is not suited to highly concurrent read/write workloads. The official description:

![](https://static001.geekbang.org/infoq/3a/3a2ec126bc8dc468bdc3688e1321390d.png)

### 2.2.1 How MongoDB's internal network thread model works

In MongoDB's default network model, mongod creates one thread per client connection, and that thread handles all network reads/writes and all disk IO for the connection's fd.

This default model is unsuitable for highly concurrent reads and writes for the following reasons:

1. Under high concurrency, a large number of threads are created almost instantly. On this cluster the connection count jumps to around 10,000, meaning the operating system must create roughly 10,000 threads at once, which drives system load very high.
2. When requests finish and traffic enters a trough, client connection pools recycle connections and the mongod server must destroy the corresponding threads, further increasing system load and database jitter. This is especially pronounced with short-lived connections, as in PHP workloads, where constant thread creation and destruction keeps the system heavily loaded.
3. With one thread per connection, the same thread handles both network send/receive and writes into the storage engine — network IO and disk IO are coupled in a single thread, an architectural flaw in itself.

### 2.2.2 Optimizing the network thread model

To cope with highly concurrent read/write workloads, MongoDB 3.6 introduced the serviceExecutor: adaptive option, which sizes the network thread pool dynamically according to request volume and multiplexes network IO so as to avoid the system load caused by constant thread creation. With serviceExecutor: adaptive enabled, network IO is multiplexed through the boost::asio networking module, and network IO is separated from disk IO. Under high concurrency, the number of threads doing disk IO is then bounded by connection IO multiplexing together with MongoDB's locking, which removes the heavy system load caused by mass thread creation and destruction and thereby improves high-concurrency read/write performance.
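In mongod.conf the adaptive executor is enabled under the net section. A minimal sketch — the surrounding options are illustrative assumptions, not taken from the article's deployment:

```yaml
# mongod.conf fragment (illustrative) - only the relevant net section shown
net:
  port: 27017
  serviceExecutor: adaptive   # dynamic thread pool + network IO multiplexing
```

The same option can be passed on the command line as `mongod --serviceExecutor adaptive`.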
### 2.2.3 Performance before and after the network thread model optimization

After adding serviceExecutor: adaptive to this high-traffic cluster — enabling network IO multiplexing and separating network IO from disk IO — latency dropped substantially, and system load and slow-log counts also fell sharply. Details follow.

### 2.2.3.1 System load before vs after

**Verification method:**

The cluster has multiple shards. We compared, at the same moment, the load of one shard's primary running the optimized configuration against a primary still on the unoptimized configuration.

**Load with the unoptimized configuration:**

![](https://static001.geekbang.org/infoq/73/73d9d16efdaf7107efe775da3b861622.png)
**Load with the optimized configuration:**

![](https://static001.geekbang.org/infoq/7e/7e699c24ac2645ac0c80e61462a3f51c.png)

### 2.2.3.2 Slow logs before vs after

**Verification method:**

The cluster has multiple shards. We compared, at the same moment, the slow-log count of one shard's primary running the optimized configuration against a primary still on the unoptimized configuration.

**Slow-log counts over the same period:**

Slow-log count with the unoptimized configuration **(19621)**:

![](https://static001.geekbang.org/infoq/80/80426acdfc5e933ae123e0f87fffb7b0.png)
Slow-log count with the optimized configuration **(5222)**:

![](https://static001.geekbang.org/infoq/33/33686475f8228c6239d7c8c5587fbeeb.png)

### 2.2.3.3 Average latency before vs after

**Verification method:**

Average latency across all nodes of the cluster, with the network IO multiplexing configuration versus the default configuration:

![](https://static001.geekbang.org/infoq/97/97516bb3680372b3f56b15f37fc12b6a.png)

As the chart shows, network IO multiplexing cut latency by more than half.

## 2.3 wiredtiger storage engine optimization

The previous section shows average latency falling from about 200 ms to about 80 ms — clearly still high. How could we push performance further and cut latency more? Analyzing the cluster again, we found disk IO oscillating between 0 and a sustained 100%, with util repeatedly dropping to zero:

![](https://static001.geekbang.org/infoq/f6/f65eb65a729fa4ebe872445e8ba1865a.png)

The chart shows single IO bursts writing up to 2 GB, after which IO stalls for several seconds: read/write IO drops completely to 0, avgqu-sz and await are huge, and util sits at a sustained 100%. During these stalls, the business-side TPS drops to 0 as well.
Moreover, after a burst of heavy write IO, util then stays at 0% for a long stretch:

![](https://static001.geekbang.org/infoq/41/416c81f8c206b5a395952772cec7fac4.png)

The overall IO load curve:

![](https://static001.geekbang.org/infoq/e2/e2afae45b3d23239956376a256e7f757.png)

IO stays at 0% for long periods, then spikes to 100% and remains there. Whenever util reached 100%, the logs showed large numbers of slow queries, and mongostat traffic monitoring showed the following:

![](https://static001.geekbang.org/infoq/1c/1c7cb73282c657e83b43ab94623f755a.png)

![](https://static001.geekbang.org/infoq/a2/a250466b6f202a7424037b050499445c.png)

Our periodic mongostat polls of a node frequently timed out, and the timeouts coincided exactly with the moments when IO util was 100% — the disk could not keep up with the client write rate and everything blocked.

Given these observations, we could be confident the problem was disk IO failing to keep pace with client writes. Chapter 2 covered the mongodb service-layer optimizations; we now turned to the wiredtiger storage engine, in three steps:

1. cacheSize adjustment
2. Dirty-data eviction ratio adjustment
3. checkpoint optimization
### 2.3.1 cacheSize adjustment (why a larger cacheSize can mean worse performance)

The IO analysis above shows that the timeout points coincide with the IO-stall points (util dropping to 0), so eliminating the IO stalls became the key to solving the problem.

Checking the node during an off-peak period (total cluster TPS around 500,000/s), per-shard TPS was not especially high — only 30,000–40,000/s — yet there were massive flushes, instantaneously reaching 10 GB/s, which caused the sustained IO stalls (the disk simply could not keep up with the write rate). We dug into how wiredtiger flushes. wiredtiger is a B+ tree storage engine: MongoDB documents are first converted to KV pairs and written into wiredtiger, whose in-memory cache grows as writes accumulate; once the dirty-data ratio and the total cache usage reach certain thresholds, flushing to disk begins, and reaching the checkpoint limit also triggers a flush. Inspecting one of the mongod processes, we found it consuming far too much memory — 110 GB:

![](https://static001.geekbang.org/infoq/da/dae1adb121a267d17deb5fc14421cd4a.png)

The mongod.conf file indeed configured cacheSizeGB: 110, and the total KV volume in the storage engine had nearly reached 110 GB. With flushing starting at 5% dirty pages, the larger cacheSize is, the more dirty data accumulates at peak — while disk IO cannot keep up with the rate dirty data is produced. This is very likely what saturated disk IO and caused the stalls.

In addition, the machine has 190 GB of memory in total, of which about 110 GB was in use — almost all of it by mongod's storage engine. That shrinks the kernel's page cache, and during heavy writes an inadequate kernel cache triggers page faults and, in turn, large amounts of disk writing.

![](https://static001.geekbang.org/infoq/26/260ac5a09a912d7b97ab2b7ddde225bf.png)
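Back-of-envelope arithmetic shows why a bigger cache makes the bursts worse. This is a sketch: the 5% dirty trigger is wiredtiger's default discussed in the next section, and the 0.5 GB/s figure is this cluster's observed disk write ceiling:

```javascript
// Worst-case dirty data flushed in one burst = cacheSize * dirty trigger.
function dirtyGB(cacheSizeGB, dirtyRatio = 0.05) {
  return cacheSizeGB * dirtyRatio;
}

// Seconds of saturated disk IO per burst, given the disk's write ceiling.
function burstSeconds(cacheSizeGB, diskGBps = 0.5) {
  return dirtyGB(cacheSizeGB) / diskGBps;
}

// cacheSizeGB 110 -> ~5.5 GB dirty -> ~11 s of 100% util per burst
// cacheSizeGB 50  -> ~2.5 GB dirty -> ~5 s of 100% util per burst
```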
**Solution:** the analysis above points to a heavy-write scenario in which too much dirty data accumulates and is then written out in one massive IO burst. We therefore reduced the storage engine's cacheSize to 50 GB, shrinking the amount of IO issued at any one moment and avoiding the peak-time bursts that saturated and blocked the disk.

### 2.3.2 Storage engine dirty-data eviction optimization

Adjusting cacheSize eliminated the 5-second request timeouts, and the corresponding alerts disappeared — but the problem was not fully solved: 1-second timeouts still occurred occasionally.

The key question became how, with cacheSize already adjusted, to further avoid the heavy IO bursts — in other words, how to balance cache memory against disk IO. Digging further into the storage engine, wiredtiger's cache eviction strategy in MongoDB is controlled by the following default configuration parameters:

![](https://static001.geekbang.org/infoq/96/96da23500537f76fb7e71cfb6b25e518.png)

Even after lowering cacheSize from 110 GB to 50 GB, with the dirty-data trigger still at 5%, an extreme case where eviction cannot keep up with the client write rate can still bottleneck IO and cause blocking.

**Solution:** further reduce the sustained IO writes — that is, balance cache memory and disk IO. As the table shows, once dirty data or total memory usage reaches the target ratios, background threads begin selecting pages to evict and write out; if the ratios climb further, user threads themselves start doing page eviction — a very dangerous blocking path that stalls user requests. To balance cache and IO we adjusted the eviction strategy so that background threads evict dirty data as early as possible, avoiding mass flushes, and tuned the trigger thresholds so that user threads are kept out of page eviction and the blocking it causes. The adjusted storage engine settings:
```
eviction_target: 75%
eviction_trigger: 97%
eviction_dirty_target: 3%
eviction_dirty_trigger: 25%
evict.threads_min: 8
evict.threads_max: 12
```

The overall idea is to have background eviction write dirty pages to disk as early as possible, while raising the number of eviction threads to speed dirty-data eviction. After this change, the mongostat and client timeouts eased further.

### 2.3.3 Storage engine checkpoint optimization

A storage engine checkpoint is essentially a snapshot: it persists all of the engine's current dirty data to disk. By default there are two conditions that trigger a checkpoint:

1. A fixed-period snapshot, every 60 s by default
2. The incremental redo log (i.e. the journal) reaching 2 GB
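These engine-level settings are typically passed to wiredtiger through mongod's configString. A sketch — the option spelling follows the values above; verify against your MongoDB version before applying:

```yaml
storage:
  wiredTiger:
    engineConfig:
      configString: "eviction_target=75,eviction_trigger=97,eviction_dirty_target=3,eviction_dirty_trigger=25,eviction=(threads_min=8,threads_max=12)"
```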
When the journal reaches 2 GB, or when it has not yet reached 2 GB but 60 s have elapsed since the last checkpoint, wiredtiger triggers a checkpoint. The fewer dirty pages the eviction threads manage to evict between two checkpoints, the more dirty data piles up — and the more must be flushed at checkpoint time, producing a huge burst of disk writes. Shortening the checkpoint period reduces the dirty data accumulated between checkpoints and therefore shortens the stretches of sustained 100% disk IO.

The adjusted checkpoint settings:

```
checkpoint=(wait=25,log_size=1GB)
```
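The payoff of the shorter period can again be estimated with simple arithmetic (a sketch; the un-evicted write rate used below is an assumed figure, not a measurement from the article):

```javascript
// Dirty data a checkpoint must flush ~= (write rate that background
// eviction failed to absorb) * checkpoint interval, so halving the
// interval roughly halves each flush burst.
function checkpointBurstGB(unevictedGBps, waitSeconds) {
  return unevictedGBps * waitSeconds;
}

// Assuming 0.05 GB/s of un-evicted dirty data:
//   wait=60 -> 3 GB flushed per checkpoint
//   wait=25 -> 1.25 GB flushed per checkpoint
```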
### 2.3.4 IO before vs after the storage engine optimization

With the three storage engine optimizations above, disk IO is now spread evenly across different points in time; iostat shows the optimized IO load as follows:

![](https://static001.geekbang.org/infoq/52/52e463f2a9291e05817c74f50229ef00.png)

The IO load chart shows that the earlier pattern of IO alternating between 0% and 100% has eased, summarized below:

![](https://static001.geekbang.org/infoq/b9/b972da45daaa9d2a62fbfad1f0912119.png)

### 2.3.5 Latency before vs after the storage engine optimization

Latency before and after the optimization (note: several businesses share this cluster; their respective comparisons follow):

![](https://static001.geekbang.org/infoq/20/20fd0aa02a664ddd4cbde1cccd360e3c.png)

![](https://static001.geekbang.org/infoq/65/651f48fac05562bb334d12b0aadfc8dc.png)

![](https://static001.geekbang.org/infoq/7a/7ae13f86c104faafd45b0b8cb6e4a59d.png)

![](https://static001.geekbang.org/infoq/28/28dbc63a154cae6ec37fe10ef4e7c41f.png)

![](https://static001.geekbang.org/infoq/75/755b574b1323b963cceac0f978d08422.png)

The charts show that after the storage engine optimization, latency fell further and stabilized, from an average of about 80 ms to about 20 ms — better, but still not perfect: jitter remains.

# 3. Solving the server's disk IO problem

## 3.1 Server IO hardware problem: background
As described in section 2.3, once wiredtiger evicts a large batch of data, any sustained disk write rate above 500 MB/s is followed by several seconds of 100% util with w/s dropping to nearly 0 — which led us to suspect a defect in the disk hardware.

![](https://static001.geekbang.org/infoq/77/774ad7cc404ef9d8bbc803cba7670476.png)

![](https://static001.geekbang.org/infoq/a5/a50e575009426392df6bfe2a3440a6de.png)

The disks are NVMe SSDs whose specifications show excellent IO performance — up to 2 GB/s of writes and about 25,000 IOPS — yet our production disks could sustain at most 500 MB/s of writes.

## 3.2 Performance after fixing the server IO hardware problem

We therefore migrated all primaries of the sharded cluster to a different server model, also SSD-backed but with verified 2 GB/s write throughput (note: only the primaries were migrated; the secondaries stayed on the older 500 MB/s servers). After the migration, performance improved further, with latency dropping to 2–4 ms. Latency as seen by three different businesses:

![](https://static001.geekbang.org/infoq/1b/1b47c09a588fd9d6c7a43e6f19915e06.png)

![](https://static001.geekbang.org/infoq/30/30fe6b07afdd788a26c7aecd56171576.png)

![](https://static001.geekbang.org/infoq/30/30ff71a928069255447bd051a62edc1e.png)
The latency charts show that after migrating the primaries to machines with stronger IO, latency dropped further, to an average of 2–4 ms.

Although average latency is now 2–4 ms, spikes of several tens of ms remain. For reasons of length, their cause will be covered in the next installment, where all latency is finally brought under 5 ms and the tens-of-ms spikes are eliminated.

As for the root cause of the NVMe SSD IO bottleneck: after analysis with the vendor, it was traced to a mismatched Linux kernel version. If your NVMe SSDs show the same problem, upgrade the kernel to 3.10.0-957.27.2.el7.x86_64; after the upgrade, our NVMe SSDs sustained writes above 2 GB/s.

# 4. Summary and open issues

After the three rounds of optimization — mongodb service-layer configuration, storage engine, and hardware IO — this write-heavy cluster's average latency fell from several hundred ms to 2–4 ms, a performance improvement of several tens of times.

**However,** as the post-optimization latency in section 3.2 shows, the cluster still jitters occasionally. For reasons of length, the next installment will cover how to eliminate that jitter, keeping latency fully within 2–4 ms with no spikes above 10 ms. Stay tuned — part two will be even better.

We also stepped into a few pitfalls while optimizing this high-traffic cluster; the next installment will walk through them as well.
**Note: some of the optimizations in this article are not necessarily applicable to every mongodb scenario. Tune according to your actual workload and hardware capacity rather than applying them mechanically.**