MySQL QPS Statistics and Analysis
1. Preparation
1.1 MySQL configuration
#Enable the general query log
SET GLOBAL general_log = 1;
SET GLOBAL general_log_file = '/data/mysqldata/data/localhost1.log';
Note: log_output must be set to FILE; if it is TABLE, queries will not be written to the file.
#Enable the slow query log (on by default here)
SET GLOBAL slow_query_log = 1;
#Lower the slow query threshold (from the default 1s to 0.5s)
SET GLOBAL long_query_time = 0.5;
#Log queries that do not use an index
SET GLOBAL log_queries_not_using_indexes = 1;
The settings above enable the relevant logging: slow queries and queries that do not use an index.
2. Collecting QPS statistics
2.1 QPS = select + update + insert + delete
Here we count only the number of SELECT, UPDATE, INSERT, and DELETE statements the database executes per second (incremental values).
The status counters below are cumulative totals since server startup, not per-second increments, so we need our own script to collect them:
Select ------> Com_select
Update ------> Com_update
Insert ------> Com_insert
Delete ------> Com_delete
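Since these Com_* counters are cumulative, the per-second rate is the delta between two snapshots divided by the sampling interval. A minimal sketch of that calculation (the two snapshot dicts are illustrative, not real server values):

```python
def qps_delta(prev, curr, interval=1.0):
    """Per-second rate for each cumulative Com_* counter."""
    return {name: (curr[name] - prev[name]) / interval for name in prev}

# Two illustrative SHOW GLOBAL STATUS snapshots taken one second apart
t0 = {"Com_select": 1000, "Com_insert": 200, "Com_update": 150, "Com_delete": 10}
t1 = {"Com_select": 1450, "Com_insert": 230, "Com_update": 170, "Com_delete": 12}
rates = qps_delta(t0, t1)
print(rates)                # per-counter rates
print(sum(rates.values()))  # total QPS: 502.0
```

This is exactly what mysqladmin's -r flag does for us in the next step.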
2.2 Save the incremental QPS values to a file
Here we use mysqladmin -u7roaddba -p -r -i 1 extended-status to fetch the incremental values of the counters above.
-r reports relative (incremental) values; -i sets the refresh interval, here one second.
Run the following at the OS level on the MySQL server:
mysqladmin -u7roaddba -p -r -i 1 ext | awk -F"|" "BEGIN{ count=0; }"'{ if($2 ~ /Variable_name/ && ++count == 1){\
print "---------- MySQL Command Status ----------";\
print "---Time---|select insert update delete";\
}\
else if ($2 ~ /Com_select /){com_select=$3;}\
else if ($2 ~ /Com_insert /){com_insert=$3;}\
else if ($2 ~ /Com_update /){com_update=$3;}\
else if ($2 ~ /Com_delete /){com_delete=$3;}\
else if ($2 ~ /Uptime / && count >= 2){\
printf(" %s ",strftime("%Y-%m-%d %H:%M:%S"));\
printf(",%6d ,%6d ,%6d ,%6d\n",com_select,com_insert,com_update,com_delete);\
}}' | tee -a a.csv
This writes the metric values above to a.csv, separated by ','. Note that the first sample mysqladmin prints under -r is the absolute counter value rather than an increment, and should be skipped.
Contents of a.csv:
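Each a.csv line produced by the awk pipeline holds a timestamp followed by the four comma-separated counts. A small parser sketch (the sample line imitates the awk printf layout above; the values are illustrative):

```python
import datetime

def parse_qps_line(line):
    """Split a ' timestamp ,select ,insert ,update ,delete' line from a.csv."""
    fields = [f.strip() for f in line.split(",")]
    ts = datetime.datetime.strptime(fields[0], "%Y-%m-%d %H:%M:%S")
    return (ts,) + tuple(int(f) for f in fields[1:5])

sample = " 2014-08-15 10:00:01 ,   450 ,    30 ,    20 ,     2"
print(parse_qps_line(sample))
```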
3. Collecting the slow query and general log files
3.1 Download the server's general log and slow query log for local analysis
For example:
Here the slow query log file is slow-query.log
and the general log file is localhost.log.
In roughly five hours the general log file can grow beyond 1 GB.
4. Processing and analysis
4.1 Load the captured QPS data into a spare local MySQL server
Here we load it into the test database.
Create the table:
Create table qps(ctime datetime,
svalue int(11),
ivalue int(11),
uvalue int(11),
dvalue int(11));
Note that the column order must follow the field order in a.csv: select, insert, update, delete.
Copy the collected a.csv file into the test directory under MySQL's data directory, then:
mysql> load data infile 'a.csv' into table qps fields terminated by ',';
4.2 Processing the qps table
Using a SELECT, we compute the per-minute totals and the average per-second values within each minute, then export the result to an external CSV file:
select date_format(ctime,'%Y-%m-%d %H:%i:00') as 'time',
sum(svalue) as 'select/min',
sum(svalue) div 60 as 'select/sec',
sum(ivalue) as 'insert/min',
sum(ivalue) div 60 as 'insert/sec',
sum(uvalue) as 'update/min',
sum(uvalue) div 60 as 'update/sec',
sum(dvalue) as 'delete/min',
sum(dvalue) div 60 as 'delete/sec',
(sum(svalue)+sum(ivalue)+sum(uvalue)+sum(dvalue)) as 'total/min',
(sum(svalue)+sum(ivalue)+sum(uvalue)+sum(dvalue)) div 60 as 'total/sec'
from qps group by date_format(ctime,'%Y-%m-%d %H:%i:00')
into outfile 'db_log_hgtest_0001_20140815.csv' fields terminated by ',';
Finally, open the exported CSV file in Excel on Windows, as shown below.
This gives a clear view of each metric over the sampled time range.
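The per-minute aggregation that the SQL above performs can also be sketched in Python, e.g. to sanity-check results without a spare MySQL server (the row tuples follow the table's column order, and integer division by 60 mirrors SQL's DIV):

```python
import datetime
from collections import defaultdict

def per_minute(rows):
    """rows: (datetime, select, insert, update, delete) samples.
    Returns per-minute totals and integer per-second averages, like the SQL GROUP BY."""
    buckets = defaultdict(lambda: [0, 0, 0, 0])
    for ts, s, i, u, d in rows:
        key = ts.strftime("%Y-%m-%d %H:%M:00")  # truncate to the minute
        for idx, v in enumerate((s, i, u, d)):
            buckets[key][idx] += v
    result = {}
    for key, (s, i, u, d) in sorted(buckets.items()):
        total = s + i + u + d
        result[key] = {"select/min": s, "select/sec": s // 60,
                       "total/min": total, "total/sec": total // 60}
    return result

rows = [
    (datetime.datetime(2014, 8, 15, 10, 0, 1), 450, 30, 20, 2),
    (datetime.datetime(2014, 8, 15, 10, 0, 2), 430, 28, 25, 1),
]
print(per_minute(rows))
```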
4.3 Analyzing the general log
Here we use mysqlsla to analyze the general log file.
a. Install mysqlsla
Download mysqlsla:
[root@localhost tmp]# wget http://hackmysql.com/scripts/mysqlsla-2.03.tar.gz
--19:45:45-- http://hackmysql.com/scripts/mysqlsla-2.03.tar.gz
Resolving hackmysql.com... 64.13.232.157
Connecting to hackmysql.com|64.13.232.157|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 33674 (33K) [application/x-tar]
Saving to: `mysqlsla-2.03.tar.gz.2'
100%[===========================================================================================>] 33,674 50.2K/s in 0.7s
19:45:47 (50.2 KB/s) - `mysqlsla-2.03.tar.gz.2' saved [33674/33674]
b. Extract:
[root@localhost tmp]# tar -zxvf mysqlsla-2.03.tar.gz
mysqlsla-2.03/
mysqlsla-2.03/Changes
mysqlsla-2.03/INSTALL
mysqlsla-2.03/README
mysqlsla-2.03/Makefile.PL
mysqlsla-2.03/bin/
mysqlsla-2.03/bin/mysqlsla
mysqlsla-2.03/META.yml
mysqlsla-2.03/lib/
mysqlsla-2.03/lib/mysqlsla.pm
mysqlsla-2.03/MANIFEST
[root@localhost tmp]# cd mysqlsla-2.03
[root@localhost mysqlsla-2.03]# ls
bin Changes INSTALL lib Makefile.PL MANIFEST META.yml README
c. Run the Perl script to check package dependencies:
[root@localhost mysqlsla-2.03]# perl Makefile.PL
Checking if your kit is complete...
Looks good
Writing Makefile for mysqlsla
d. Install:
[root@localhost mysqlsla-2.03]# make && make install;
cp lib/mysqlsla.pm blib/lib/mysqlsla.pm
cp bin/mysqlsla blib/script/mysqlsla
/usr/bin/perl "-MExtUtils::MY" -e "MY->fixin(shift)" blib/script/mysqlsla
Manifying blib/man3/mysqlsla.3pm
Installing /usr/lib/perl5/site_perl/5.8.8/mysqlsla.pm
Installing /usr/share/man/man3/mysqlsla.3pm
Installing /usr/bin/mysqlsla
Writing /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/auto/mysqlsla/.packlist
Appending installation info to /usr/lib/perl5/5.8.8/i386-linux-thread-multi/perllocal.pod
[root@localhost mysqlsla-2.03]#
Use mysqlsla to analyze the general log:
mysqlsla -lt general --top 200 general_log.txt > top_200.txt
Here -lt specifies the type of log to analyze; for a general log, use -lt general.
A sample of the resulting top_200.txt:
Report for general logs: game_gerernal.log
1.56M queries total, 4.17k unique
Sorted by 'c_sum'
____________________________001 ___
Count : 735.58k (47.22%)
Connection ID : 2528823
Database :
Users :
[email protected] : 87.93% (646784) of query, 86.83% (1352745) of all users
@ : 12.07% (88798) of query, 13.07% (203592) of all users
Query abstract:
SET autocommit=N
Query sample:
SET autocommit=1
___________________________________________________ 002 ___
Count : 49.78k (3.20%) # this statement was executed 49.78k times
Connection ID : 2528823 # the connection id; it can be matched against show processlist
[email protected] : 84.11% (41868) of query, 86.83% (1352745) of all users
@ : 15.89% (7910) of query, 13.07% (203592) of all users
Query abstract:
SELECT * FROM t_u_newbie WHERE username='S' AND site='S' # the normalized statement (literals abstracted)
Query sample:
select * from t_u_newbie where `username`='449' and `site`='hgtest_0001' # a concrete sample of the SQL
_____________________________________________ 003 ___
Count : 43.84k (2.81%)
Connection ID : 2528826
[email protected] : 99.46% (43602) of query, 86.83% (1352745) of all users
@ : 0.54% (238) of query, 13.07% (203592) of all users
Query abstract:
SELECT * FROM t_u_item WHERE userid = N AND (bagtype = N OR bagtype = N) AND isexist = N AND place != -N
Query sample:
select * from t_u_item where userId = 1002011 and (bagType = 2 or bagType = 18) and IsExist = 1 and place != -1
The output above shows the three statements executed most often according to the general log.
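The "Query abstract" lines group statements by replacing literal values with placeholders (strings become 'S', numbers become N). A simplified sketch of that normalization (mysqlsla itself does more, e.g. collapsing IN-lists):

```python
import re

def abstract_query(sql):
    """Collapse literals so structurally identical queries group together."""
    q = sql.replace("`", "")            # drop identifier quoting
    q = re.sub(r"'[^']*'", "'S'", q)    # string literals -> 'S'
    q = re.sub(r"\b\d+\b", "N", q)      # numeric literals -> N
    return re.sub(r"\s+", " ", q).strip()

a = abstract_query("select * from t_u_newbie where `username`='449' and `site`='hgtest_0001'")
b = abstract_query("select * from t_u_newbie where `username`='7' and `site`='hgtest_0002'")
print(a)       # select * from t_u_newbie where username='S' and site='S'
print(a == b)  # True: both samples share one abstract
```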
4.4 Slow query log analysis
mysqlsla can likewise report the statements that appear most often in the slow query log.
Also keep an eye on queries that do not use an index (some columns clearly are indexed, yet the queries still end up in the slow log).
root]# mysqlsla -lt slow --top 200 query_slow.log > slow.log
1.04k queries total, 7 unique
Sorted by 't_sum'
Grand Totals: Time 0 s, Lock 0 s, Rows sent 18, Rows Examined 38.52
_____________________________________ 001 ___
Count : 865 (82.93%)
Time : 213.588 ms total, 247 avg, 188 to 388 max (57.17%)
95% of Time : 197.697 ms total, 241 avg, 188 to 351 max
Lock Time (s) : 79.913 ms total, 92 avg, 59 to 162 max (57.97%)
95% of Lock : 73.55 ms total, 90 avg, 59 to 134 max
Rows sent : 0 avg, 0 to 0 max (0.00%)
Rows examined : 33 avg, 33 to 33 max (74.10%)
Database : db_game_hgtest_0002
Users :
g_hgtest_0002@ 10.34.148.18 : 100.00% (865) of query, 99.81% (1041) of all users
Query abstract:
SET timestamp=N; UPDATE t_p_macrodrop SET currentcount = maxcount, resetdate = now() WHERE timestampdiff(hour,resetdate,now()) >= time;
Query sample:
SET timestamp=1408031756;
update t_p_macrodrop set `currentCount` = `maxCount`, `resetDate` = now() where TIMESTAMPDIFF(HOUR,resetDate,now()) >= time;
_______________________________ 002 ___
Count : 173 (16.59%)
Time : 153.938 ms total, 890 avg, 826 to 989 max (41.20%)
95% of Time : 145.608 ms total, 888 avg, 826 to 906 max # total, average, and slowest execution time
Lock Time (s) : 57.457 ms total, 332 avg, 310 to 365 max (41.68%) # lock time
95% of Lock : 54.246 ms total, 331 avg, 310 to 349 max
Rows sent : 0 avg, 0 to 0 max (0.00%)
Rows examined : 41 avg, 41 to 41 max (18.41%)
Database :
Users :
g_hgtest_0002@ 10.34.148.18 : 100.00% (173) of query, 99.81% (1041) of all users
Query abstract:
SET timestamp=N; SELECT *, t_mv.membercount membercount_ FROM t_u_guild g, t_u_combine_load_guild c, (SELECT COUNT( mv.guildid) membercount,mv.guildid FROM v_guild_member_list mv GROUP BY mv.guildid ) t_mv WHERE g.guildid = t_mv.guildid AND g.guildid=c.guildid AND g.state !=N AND c.states = N LIMIT N;
Query sample:
SET timestamp=1408031999;
SELECT *, t_mv.memberCount memberCount_ FROM t_u_guild g, t_u_combine_load_guild c, (SELECT COUNT( mv.guildID) memberCount,mv.guildID FROM v_guild_member_list mv GROUP BY mv.guildID ) t_mv WHERE g.guildID = t_mv.guildID AND g.guildID=c.guildID AND g.state !=3 AND c.states = 1 limit 100;
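The "95% of Time" rows in the report drop the slowest 5% of executions before aggregating, so a few outliers do not distort the average. A sketch of that calculation (the timing list is illustrative, not taken from this log):

```python
def pct95_stats(times_ms):
    """Total, average, and max over the fastest 95% of samples."""
    ordered = sorted(times_ms)
    keep = ordered[: max(1, int(len(ordered) * 0.95))]
    return sum(keep), sum(keep) / len(keep), keep[-1]

times = [188, 200, 210, 220, 230, 240, 250, 260, 270, 388]  # ms, illustrative
total, avg, worst = pct95_stats(times)
print(total, round(avg, 1), worst)  # the 388 ms outlier is dropped
```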
5. Finding the largest tables in the database
5.1 Check whether the row counts of the largest tables line up with the insert counts from the general log analysis (heavy inserting makes a table grow, though the insert count is not equal to the row count).
Scan for large tables:
Select TABLE_SCHEMA,TABLE_NAME,TABLE_ROWS from information_schema.tables where TABLE_SCHEMA='your_db_name' order by TABLE_ROWS desc limit 10;
Summary of the QPS analysis:
1. Use mysqladmin to fetch com_select, com_update, and the other counters from the database every second.
2. Enable the general log, save it, and aggregate it with mysqlsla.
3. Enable the slow query log, set the slow query threshold to 1 second, and analyze the slow queries.
4. Count the rows in every table of the database.
5. Trace all SQL issued by a single player.
6. During load testing, have the developers note the operations performed that day.