Optimizing MySQL Pagination over 100-Million-Row Tables

## Background

I was sitting happily on the subway home after work, thinking about how to spend the weekend.

Then my phone rang. It was one of our developers, and I immediately tensed up: this week's release had already shipped, and a call at this hour usually means a production incident.

Sure enough, he told me a data-query endpoint was being hammered with calls, and the traffic was dragging down the production MySQL cluster.

This was serious. I hurried home from the subway, opened my laptop, and pulled the slow-query logs out of Pinpoint with a colleague. One very strange request stood out:

```
POST  domain/v1.0/module/method?order=condition&orderType=desc&offset=1800000&limit=500
```

`domain`, `module`, and `method` are placeholders for the interface's domain, module, and method name; `offset` and `limit` are the pagination offset and page size. In other words, the caller was on page 1,800,000 / 500 + 1 = 3,601. A first pass through the logs turned up more than 8,000 calls like this.

That was bizarre. Our UI pages at 25 rows per page, not 500, so this was definitely not a human clicking through pages: the data was being scraped (for context, our production tables hold 100 million+ rows). Comparing the logs in detail, many of the page requests overlapped in time, so the caller was almost certainly multi-threaded.

By analyzing the auth token, we traced the requests to a client program called ApiAutotest, and the account that generated the token belonged to a QA engineer. I called him right away and we sorted it out.

## Analysis

Our MySQL queries themselves were actually in decent shape: the joins were optimized, the selected columns were trimmed down, and the key filter and sort columns were all indexed. The problem was paging through the data page by page: the deeper the page, the more rows MySQL has to scan, and the slower the query gets.

The first few pages come back very fast; `limit 200,25`, for example, returns instantly. But the further you page, the slower it gets, and past a million rows it grinds to a halt. Why? First look at the SQL we run when paging deep:

```sql
select * from t_name where c_name1='xxx' order by c_name2 limit 2000000,25;
```

This query is slow because of the huge offset after `limit`. `limit 2000000,25` means the database has to scan 2,000,025 rows, throw away the first 2,000,000, and return the remaining 25 to the user — clearly a wasteful way to fetch data.
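The scan-and-discard cost is easy to see in miniature. Below is a minimal Python sketch — purely illustrative, not our service code — of what `LIMIT offset, n` forces the server to do: walk through `offset` rows in sort order and discard them before the useful rows can be returned. The function name and the synthetic row source are invented for the demo.

```python
def paginate_with_offset(rows, offset, limit):
    """Mimic LIMIT offset, limit: rows before the offset are
    scanned and discarded, not skipped for free."""
    scanned = 0
    page = []
    for row in rows:           # rows arrive in index (sort) order
        scanned += 1
        if scanned <= offset:  # fetched, then thrown away
            continue
        page.append(row)
        if len(page) == limit:
            break
    return page, scanned

# A page of 25 rows at offset 2,000,000 touches 2,000,025 rows --
# the same arithmetic as the MySQL query above.
page, scanned = paginate_with_offset(range(1, 3_000_001), 2_000_000, 25)
```

The cost is proportional to `offset + limit`, which is why page 1 is instant and page 80,001 is not.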
![figure](https://static001.geekbang.org/infoq/05/05ea18c6acead9c7476977aef641a6c5.png)

*High Performance MySQL*, Chapter 6 ("Query Performance Optimization") covers exactly this problem:

> Pagination is usually implemented with `limit` plus an offset, together with an appropriate `order by` clause. This leads to a common problem: when the offset is very large, MySQL scans a huge number of rows it does not need, only to throw them away.

## Simulating the data

Now that we understand the cause, let's try to fix it. Because the real data is sensitive, we reproduce the situation with generated test data.

### 1. Create two tables: employees and departments

```sql
/* Department table; drop it if it already exists */
drop table if EXISTS dep;
create table dep(
    id int unsigned primary key auto_increment,
    depno mediumint unsigned not null default 0,
    depname varchar(20) not null default "",
    memo varchar(200) not null default ""
);

/* Employee table; drop it if it already exists */
drop table if EXISTS emp;
create table emp(
    id int unsigned primary key auto_increment,
    empno mediumint unsigned not null default 0,
    empname varchar(20) not null default "",
    job varchar(9) not null default "",
    mgr mediumint unsigned not null default 0,
    hiredate datetime not null,
    sal decimal(7,2) not null,
    comn decimal(7,2) not null,
    depno mediumint unsigned not null default 0
);
```

### 2. Create two functions: a random-string generator and a random-number generator

```sql
/* Function that generates a random string of length n */
DELIMITER $
drop FUNCTION if EXISTS rand_string;
CREATE FUNCTION rand_string(n INT) RETURNS VARCHAR(255)
BEGIN
    DECLARE chars_str VARCHAR(100) DEFAULT 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
    DECLARE return_str VARCHAR(255) DEFAULT '';
    DECLARE i INT DEFAULT 0;
    WHILE i < n DO
        SET return_str = CONCAT(return_str, SUBSTRING(chars_str, FLOOR(1 + RAND() * 52), 1));
        SET i = i + 1;
    END WHILE;
    RETURN return_str;
END $
DELIMITER ;

/* Function that generates a random integer in [from_num, to_num] */
DELIMITER $
drop FUNCTION if EXISTS rand_num;
CREATE FUNCTION rand_num(from_num INT, to_num INT) RETURNS INT
BEGIN
    RETURN FLOOR(from_num + RAND() * (to_num - from_num + 1));
END $
DELIMITER ;
```

With both tables filled with a few million rows of generated data, we can try out the optimizations.

## Optimization

### 1. Subquery optimization

Use a subquery over the primary key to locate the id sitting at the target offset first, then read the next 25 rows starting from that id:

```sql
/* Subquery finds the id at offset 100; take the next 25 rows from that position */
SELECT a.empno,a.empname,a.job,a.sal,b.depno,b.depname
from emp a left join dep b on a.depno = b.depno
where a.id >= (select id from emp order by id limit 100,1)
order by a.id limit 25;

/* Subquery finds the id at offset 4,800,000; take the next 25 rows from that position */
SELECT a.empno,a.empname,a.job,a.sal,b.depno,b.depname
from emp a left join dep b on a.depno = b.depno
where a.id >= (select id from emp order by id limit 4800000,1)
order by a.id limit 25;
```

Execution results — a large improvement over before:

```
[SQL]
SELECT a.empno,a.empname,a.job,a.sal,b.depno,b.depname
from emp a left join dep b on a.depno = b.depno
where a.id >= (select id from emp order by id limit 100,1)
order by a.id limit 25;
Affected rows: 0
Time: 0.106s

[SQL]
SELECT a.empno,a.empname,a.job,a.sal,b.depno,b.depname
from emp a left join dep b on a.depno = b.depno
where a.id >= (select id from emp order by id limit 4800000,1)
order by a.id limit 25;
Affected rows: 0
Time: 1.541s
```

### 2. Redefine the starting point

Remember the primary-key position of the previous page's last row, and avoid the offset entirely.
```sql
/* The last row of the previous page had id 100, so skip straight past it and start scanning at 101 */
SELECT a.id,a.empno,a.empname,a.job,a.sal,b.depno,b.depname
from emp a left join dep b on a.depno = b.depno
where a.id > 100 order by a.id limit 25;

/* The last row of the previous page had id 4800000, so skip straight past it and start scanning at 4800001 */
SELECT a.id,a.empno,a.empname,a.job,a.sal,b.depno,b.depname
from emp a left join dep b on a.depno = b.depno
where a.id > 4800000
order by a.id limit 25;
```

Execution results:

```
[SQL]
SELECT a.id,a.empno,a.empname,a.job,a.sal,b.depno,b.depname
from emp a left join dep b on a.depno = b.depno
where a.id > 100 order by a.id limit 25;
Affected rows: 0
Time: 0.001s

[SQL]
SELECT a.id,a.empno,a.empname,a.job,a.sal,b.depno,b.depname
from emp a left join dep b on a.depno = b.depno
where a.id > 4800000
order by a.id limit 25;
Affected rows: 0
Time: 0.000s
```

This is the most efficient variant: no matter how deep the page, the time stays essentially constant, because after evaluating the condition MySQL scans only 25 rows.

There is a catch, though. It only works when paging sequentially, one page at a time, so that the last id of the previous page is known. If the user jumps around — say, from page 25 straight to page 35 — the results will be wrong.

The natural fit is infinite scrolling, as in Baidu search results or the Tencent News feed, where more content loads as you scroll down. That kind of lazy loading guarantees pages are fetched in order, never skipped.

### 3. Degradation strategy

An Alibaba DBA shared this approach online: configure a maximum for the limit offset and page size, and return an empty result for anything beyond it.

His reasoning: past that point you are no longer paginating, you are scraping. Anyone genuinely looking for data should narrow the search with proper filter conditions rather than paging through everything.

A colleague of mine had roughly the same idea: at request time, if the offset exceeds some threshold, return a 4xx error up front.
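The cap described above takes only a few lines at the application layer. This is a hedged sketch: `MAX_PAGE_WINDOW`, `PageTooDeepError`, and `check_page_window` are invented names, and the threshold is a placeholder to be tuned to what legitimate UI paging actually needs.

```python
# Assumption: 100,000 rows is far deeper than any real user ever pages.
MAX_PAGE_WINDOW = 100_000

class PageTooDeepError(Exception):
    """Maps to an HTTP 4xx response in the web layer."""

def check_page_window(offset: int, limit: int) -> None:
    """Reject requests that page past the configured window
    before they ever reach the database."""
    if offset + limit > MAX_PAGE_WINDOW:
        raise PageTooDeepError(
            f"offset+limit={offset + limit} exceeds {MAX_PAGE_WINDOW}; "
            "narrow the query with filters instead of deep paging"
        )

check_page_window(offset=1_000, limit=25)        # a normal page: passes
try:
    check_page_window(offset=1_800_000, limit=500)
except PageTooDeepError:
    pass                                         # the web layer would return 4xx here
```

The key property is that the guard runs before the query, so a scraper can no longer translate cheap HTTP calls into expensive table scans.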
## Takeaways

That night we applied the third option and rate-limited the offset: beyond the threshold, the endpoint returns an empty result. The next day we further optimized the code and the database scripts using a combination of the first and second approaches.

Really, any feature should be designed with extreme cases in mind, and capacity planning should include tests at the extreme boundaries.

Rate limiting and degradation deserve consideration too. For instance, when a tool makes 8,000 multi-threaded calls in a short window, a counter service can detect the excessive frequency, tell the user they are calling too often, and simply cut them off.

Ah well, I let my guard down. It cost me half a night, and the QA guy didn't exactly play fair. Still, it made for a memorable experience.
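The counter service mentioned above could be sketched as a per-caller sliding-window limiter. Everything here is illustrative: the class name, the 100-calls-per-60-seconds policy, and the in-memory store are assumptions — a production version would more likely keep its counters in Redis or a dedicated service.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class SlidingWindowLimiter:
    """Allow at most `max_calls` per caller within the trailing
    `window` seconds; deny everything beyond that."""

    def __init__(self, max_calls: int, window: float):
        self.max_calls = max_calls
        self.window = window
        self.calls = defaultdict(deque)  # caller -> recent call timestamps

    def allow(self, caller: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.calls[caller]
        while q and now - q[0] >= self.window:
            q.popleft()                  # drop calls that left the window
        if len(q) >= self.max_calls:
            return False                 # too frequent: cut the caller off
        q.append(now)
        return True

# 8,000 calls arriving within ~8 seconds, like the incident traffic:
limiter = SlidingWindowLimiter(max_calls=100, window=60.0)
results = [limiter.allow("ApiAutotest", now=i * 0.001) for i in range(8000)]
# only the first 100 are allowed; the remaining 7,900 are rejected
```

A fixed quota like this would have turned the incident's 8,000 database hits into 100, with the rest rejected at the edge.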