Spring Cloud Gateway Load Testing (wrk, k8s, nginx)

Load Test Environment

wrk is installed in a K8S container, and SCG (Spring Cloud Gateway) is also deployed in a k8s container; wrk sends requests to SCG, which forwards them to a static HTML page served by nginx.

k8s container spec: 4 CPU cores, 8 GB memory
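
For reference, a minimal sketch of how this 4-core/8 GB spec might be declared on the SCG pod; the deployment name and image here are placeholders, not taken from the original setup:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: scg-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: scg-test
  template:
    metadata:
      labels:
        app: scg-test
    spec:
      containers:
        - name: scg-test
          image: scg-test:latest # placeholder image
          resources:
            requests: # reserve the full 4 cores / 8 GB up front
              cpu: "4"
              memory: 8Gi
            limits: # and cap at the same values
              cpu: "4"
              memory: 8Gi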

wrk: https://github.com/wg/wrk

JVM settings: the gateway uses very little memory; a 1 GB heap is sufficient.
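
A fixed 1 GB heap can be set without touching the image's entrypoint via JAVA_TOOL_OPTIONS, which the JVM picks up automatically; a sketch of the container env entry, assuming the Deployment above:

env:
  - name: JAVA_TOOL_OPTIONS # read by the JVM at startup
    value: "-Xms1g -Xmx1g"  # pin the heap at 1 GB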

SCG configuration:

spring:
  application:
    name: scg-test
  cloud:
    gateway:
      httpclient:
        connect-timeout: 3000 # upstream connect timeout, in ms
        response-timeout: 3s
      routes:
        - id: r_maxtest
          uri: http://192.168.0.184 # forward to the local nginx page
          predicates:
            - Path=/gwmanager/**
          filters:
            - StripPrefix=1
server:
  servlet:
    context-path: /
  tomcat:
    accept-count: 200 # accept queue length (backlog)
    connection-timeout: 3s # connection timeout
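
With this route, a request such as /gwmanager/index.html matches Path=/gwmanager/**, StripPrefix=1 removes the leading /gwmanager segment, and the gateway forwards /index.html to the nginx at 192.168.0.184. Note that Spring Cloud Gateway runs on the reactive Netty stack, so the server.servlet and server.tomcat properties above only take effect on a servlet stack; the spring.cloud.gateway.httpclient timeouts are what actually govern the gateway's upstream connections.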

wrk Usage

./wrk --latency -t8 -c100 -d10s http://192.168.0.184:30001/gwmanager
  • --latency: print the latency distribution
  • -t: number of threads to start; a common default is 2× the CPU core count, tuned for IO-bound vs. CPU-bound workloads
  • -c: number of connections (total concurrency), divided evenly across the threads; it cannot exceed the number of usable TCP ports
  • -d: duration of the test
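
Sample output: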
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.41ms   28.46ms  89.75ms   75.26%
    Req/Sec   626.35     70.19     1.22k    86.62%
  Latency Distribution
     50%    6.71ms
     75%   54.47ms
     90%   79.40ms
     99%   83.89ms
  49908 requests in 10.01s, 7.19MB read
Requests/sec:   4985.86
Transfer/sec:    735.26KB
  • Latency: response time
  • Req/Sec: requests handled per second by a single thread
  • Avg: average value
  • Stdev: standard deviation; a larger value means the samples are more widely scattered, which may indicate unstable machine or service performance
  • Max: maximum value
  • +/- Stdev: the percentage of samples that fall within one standard deviation of the average
  • Latency Distribution: the percentage of requests completed within the given latency
  • Requests/sec: average requests handled per second across all threads
  • Transfer/sec: average amount of data transferred per second
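
As a sanity check, overall Requests/sec should be roughly threads × per-thread Req/Sec: 8 × 626.35 ≈ 5,011, which lines up with the reported 4,985.86 once thread start-up and ramp-down are accounted for.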

Test Results and Analysis

Duration Test

[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c100 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.41ms   28.46ms  89.75ms   75.26%
    Req/Sec   626.35     70.19     1.22k    86.62%
  Latency Distribution
     50%    6.71ms
     75%   54.47ms
     90%   79.40ms
     99%   83.89ms
  49908 requests in 10.01s, 7.19MB read
Requests/sec:   4985.86
Transfer/sec:    735.26KB
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c100 -d1m http://192.168.0.184:30001/gwmanager
Running 1m test @ http://192.168.0.184:30001/gwmanager
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.98ms   29.04ms  95.57ms   74.34%
    Req/Sec   613.72     46.70   787.00     70.65%
  Latency Distribution
     50%    6.52ms
     75%   57.97ms
     90%   79.76ms
     99%   84.23ms
  293262 requests in 1.00m, 42.23MB read
Requests/sec:   4886.49
Transfer/sec:    720.58KB

Conclusion: at the same concurrency, test duration does not affect RPS (4,985.86 req/s over 10s vs. 4,886.49 req/s over 1m, a difference of about 2%).

Concurrency Test

[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c100 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.41ms   28.46ms  89.75ms   75.26%
    Req/Sec   626.35     70.19     1.22k    86.62%
  Latency Distribution
     50%    6.71ms
     75%   54.47ms
     90%   79.40ms
     99%   83.89ms
  49908 requests in 10.01s, 7.19MB read
Requests/sec:   4985.86
Transfer/sec:    735.26KB
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c200 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    41.65ms   32.40ms 103.63ms   47.76%
    Req/Sec   630.29     72.27     1.11k    75.25%
  Latency Distribution
     50%   44.15ms
     75%   82.33ms
     90%   87.58ms
     99%   93.23ms
  50215 requests in 10.01s, 7.23MB read
Requests/sec:   5016.90
Transfer/sec:    739.90KB
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c500 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   100.10ms   43.26ms 435.69ms   76.63%
    Req/Sec   611.35    105.46     2.14k    83.62%
  Latency Distribution
     50%   99.54ms
     75%  105.39ms
     90%  180.69ms
     99%  198.63ms
  48697 requests in 10.02s, 7.02MB read
Requests/sec:   4861.25
Transfer/sec:    717.47KB
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c1000 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   202.90ms   65.88ms 669.54ms   63.63%
    Req/Sec   597.58    130.27     1.10k    70.12%
  Latency Distribution
     50%  199.57ms
     75%  209.78ms
     90%  297.11ms
     99%  393.08ms
  47606 requests in 10.02s, 6.87MB read
Requests/sec:   4749.34
Transfer/sec:    701.98KB
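
Summarizing the four runs:

  Connections   Requests/sec   Avg Latency   99% Latency
          100        4985.86       27.41ms       83.89ms
          200        5016.90       41.65ms       93.23ms
          500        4861.25      100.10ms      198.63ms
         1000        4749.34      202.90ms      393.08ms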

Conclusion: as concurrency increases, RPS trends slightly downward while response time climbs steadily: average latency grows from 27.41ms at 100 connections to 202.90ms at 1,000, roughly in proportion to the connection count, while RPS drops only from about 5,000 to 4,750 req/s.
