Ceph Stress Testing

The block device mounted from Ceph:

[root@node1 ~]# df -h /mnt/rbd/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0        15G  241M   14G    2% /mnt/rbd

Install the stress-testing tool fio and the disk I/O monitoring tool (sysstat provides iostat):

[root@node1 ~]# yum install fio sysstat -y

Open a second shell terminal to watch block-device I/O:

[root@node1 ~]# iostat -x 1

Check RBD latency (per-OSD response times):

[root@node1 ~]# ceph osd perf

4K random write (IOPS)

[root@node1 ~]# fio -filename=/mnt/rbd/fio.img -direct=1 -iodepth 32 -thread -rw=randwrite -ioengine=libaio -bs=4k -size=200m -numjobs=8 -runtime=60 -group_reporting -name=mytest
mytest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.7
Starting 8 threads
mytest: Laying out IO file (1 file / 200MiB)
Jobs: 8 (f=8): [w(8)][100.0%][r=0KiB/s,w=6468KiB/s][r=0,w=1617 IOPS][eta 00m:00s]
mytest: (groupid=0, jobs=8): err= 0: pid=1476994: Mon Dec 28 14:06:20 2020
  write: IOPS=1856, BW=7426KiB/s (7604kB/s)(436MiB/60166msec)
    slat (usec): min=2, max=711779, avg=936.55, stdev=9746.97
    clat (msec): min=5, max=2748, avg=136.87, stdev=102.90
     lat (msec): min=5, max=2748, avg=137.80, stdev=103.57
    clat percentiles (msec):
     |  1.00th=[   36],  5.00th=[   51], 10.00th=[   63], 20.00th=[   78],
     | 30.00th=[   91], 40.00th=[  104], 50.00th=[  117], 60.00th=[  133],
     | 70.00th=[  153], 80.00th=[  178], 90.00th=[  222], 95.00th=[  268],
     | 99.00th=[  477], 99.50th=[  634], 99.90th=[ 1552], 99.95th=[ 1620],
     | 99.99th=[ 2232]
   bw (  KiB/s): min=    7, max= 1852, per=12.54%, avg=930.80, stdev=311.63, samples=953
   iops        : min=    1, max=  463, avg=232.46, stdev=77.94, samples=953
  lat (msec)   : 10=0.01%, 20=0.09%, 50=4.55%, 100=32.60%, 250=56.45%
  lat (msec)   : 500=5.35%, 750=0.69%, 1000=0.03%
  cpu          : usr=0.09%, sys=0.47%, ctx=40077, majf=11, minf=453
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.8%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,111691,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=7426KiB/s (7604kB/s), 7426KiB/s-7426KiB/s (7604kB/s-7604kB/s), io=436MiB (457MB), run=60166-60166msec

Disk stats (read/write):
  rbd0: ios=452/111282, merge=0/2303, ticks=7520/13687202, in_queue=7354129, util=99.88%
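As a sanity check on the fio summary above, the reported bandwidth should be roughly IOPS times the block size: 1856 IOPS × 4 KiB ≈ 7424 KiB/s, which matches the reported 7426 KiB/s after rounding.

```shell
# Cross-check fio's summary line: bandwidth (KiB/s) ~= IOPS x block size (KiB)
iops=1856
bs_kib=4
echo "$((iops * bs_kib)) KiB/s"   # ~7424 KiB/s, vs fio's reported 7426 KiB/s
```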

4K random read (IOPS)

[root@node1 ~]# fio -filename=/mnt/rbd/fio.img -direct=1 -iodepth 32 -thread -rw=randread -ioengine=libaio -bs=4k -size=200m -numjobs=8 -runtime=60 -group_reporting -name=mytest

4K random mixed read/write (IOPS, 70% read)

[root@node1 ~]# fio -filename=/mnt/rbd/fio.img -direct=1 -iodepth 32 -thread -rw=randrw -rwmixread=70 -ioengine=libaio -bs=4k -size=200m -numjobs=8 -runtime=60 -group_reporting -name=mytest

1M sequential write (throughput)

[root@node1 ~]# fio -filename=/mnt/rbd/fio.img -direct=1 -iodepth 32 -thread -rw=write -ioengine=libaio -bs=1M -size=200m -numjobs=8 -runtime=60 -group_reporting -name=mytest
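The same flags can also be kept in a fio job file instead of one long command line. A sketch, with an arbitrary file name (seq-write.fio) and job name (mytest); run it with `fio seq-write.fio`:

```ini
; seq-write.fio -- equivalent to the 1M sequential-write command above
[global]
filename=/mnt/rbd/fio.img
direct=1
iodepth=32
thread
ioengine=libaio
size=200m
numjobs=8
runtime=60
group_reporting

[mytest]
rw=write
bs=1M
```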

RBD bench tests

  1. 4K random write

[root@node1 ~]# rbd bench --io-size 4K --io-threads 16 --io-total 200M  --io-pattern rand --io-type write pool_demo/demo.img
bench  type write io_size 4096 io_threads 16 bytes 209715200 pattern random
  SEC       OPS   OPS/SEC   BYTES/SEC
    1      6688   6650.80  27241665.77
    2      7392   3629.60  14866822.47
    3      8656   2889.71  11836231.76
    4     10032   2510.75  10284015.65
    5     11152   2228.70  9128747.38
    6     12720   1206.64  4942405.60
    7     13840   1298.17  5317298.67
    8     15360   1336.79  5475493.34
    9     17072   1405.19  5755659.85
   10     17696   1290.73  5286832.10
   11     21008   1657.93  6790891.51
   12     22704   1775.64  7273029.64
   13     25376   2004.80  8211681.05
   14     25776   1618.15  6627924.07
   15     30128   2523.24  10335193.83
   17     31440   1503.82  6159650.49
   18     31712   1500.83  6147415.57
   19     36928   1928.23  7898016.69
   20     39216   2379.18  9745135.47
   21     39376   1352.84  5541227.15
   22     39744   2046.33  8381765.08
   23     45808   2817.51  11540525.23
   24     48016   2214.94  9072407.70
   25     50944   2361.66  9673361.74
elapsed:    26  ops:    51200  ops/sec:  1917.46  bytes/sec: 7853917.88
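The final line can be cross-checked: rbd bench issues exactly --io-total / --io-size operations, i.e. 200 MiB / 4 KiB = 51200, matching the `ops: 51200` reported above.

```shell
# Total operations issued by rbd bench = io-total / io-size
io_total=$((200 * 1024 * 1024))   # 200 MiB in bytes (209715200)
io_size=4096                      # 4 KiB
echo $((io_total / io_size))      # prints 51200
```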
  2. 4K random read

[root@node1 ~]# rbd bench --io-size 4K --io-threads 16 --io-total 200M  --io-pattern rand --io-type read pool_demo/demo.img
  3. 4K random mixed read/write

[root@node1 ~]# rbd bench --io-size 4K --io-threads 16 --io-total 200M  --io-pattern rand --io-type readwrite --rw-mix-read 70 pool_demo/demo.img
  4. 1M sequential write

[root@node1 ~]# rbd bench --io-size 1M --io-threads 16 --io-total 200M  --io-pattern seq --io-type write pool_demo/demo.img