Spring Cloud Gateway Load Testing (wrk, k8s, nginx)

Test Environment

wrk is installed in a K8s container; SCG (Spring Cloud Gateway) is also deployed in K8s and accessed through nginx, and SCG forwards requests to a static HTML page served by nginx.

K8s container resources: 4 CPU cores, 8 GB memory

wrk:https://github.com/wg/wrk

JVM settings: the gateway itself uses little memory; a 1 GB heap is sufficient.
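For reference, a hypothetical launch command matching the 1 GB heap (the artifact name scg-test.jar and the exact flags are assumptions, not taken from the original setup):

```shell
# Assumed JVM options for a 1 GB heap; pinning min = max avoids heap-resize pauses.
JAVA_OPTS="-Xms1g -Xmx1g"
# Echo the command instead of launching, since the jar name is illustrative.
echo java $JAVA_OPTS -jar scg-test.jar
```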

SCG configuration:

spring:
  application:
    name: scg-test
  cloud:
    gateway:
      httpclient:
        connect-timeout: 3000
        response-timeout: 3s
      routes:
        - id: r_maxtest
          uri: http://192.168.0.184 # forward to the local nginx page
          predicates:
            - Path=/gwmanager/**
          filters:
            - StripPrefix=1
server:
  servlet:
    context-path: /
  tomcat:
    accept-count: 200 # max queued connections
    connection-timeout: 3s # connection timeout
    # note: Spring Cloud Gateway runs on Netty by default, so these
    # Tomcat settings may have no effect
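The StripPrefix=1 filter in the route above removes the first path segment before the request is forwarded to the uri. A minimal shell sketch of the path rewrite (illustrative only; the real filter runs inside the gateway):

```shell
# Mimic StripPrefix=1: drop the first path segment.
# Only paths with at least two segments are handled here.
strip_prefix() {
  # /gwmanager/index.html -> /index.html
  echo "/${1#/*/}"
}

strip_prefix /gwmanager/index.html   # -> /index.html
```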

wrk Usage

./wrk --latency -t8 -c100 -d10s http://192.168.0.184:30001/gwmanager
  • --latency: print the latency distribution
  • -t: number of threads; a common starting point is 2x CPU cores, tuned up for IO-bound and down for CPU-bound workloads
  • -c: total number of connections, split evenly across the threads; it should not exceed the number of usable TCP ports
  • -d: test duration
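The 2x-cores rule of thumb for -t can be computed on the load-generator box itself:

```shell
# Rule-of-thumb wrk thread count: 2x the number of online CPU cores
# (adjust upward for IO-bound targets, downward for CPU-bound ones).
cores=$(getconf _NPROCESSORS_ONLN)
threads=$((cores * 2))
echo "suggested: ./wrk --latency -t${threads} -c100 -d10s <url>"
```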
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.41ms   28.46ms  89.75ms   75.26%
    Req/Sec   626.35     70.19     1.22k    86.62%
  Latency Distribution
     50%    6.71ms
     75%   54.47ms
     90%   79.40ms
     99%   83.89ms
  49908 requests in 10.01s, 7.19MB read
Requests/sec:   4985.86
Transfer/sec:    735.26KB
  • Latency: response time

  • Req/Sec: requests handled per second by a single thread

  • Avg: average value

  • Stdev: standard deviation; a large value means the samples are widely spread, which usually points to unstable machine or service performance.

  • Max: maximum value

  • +/- Stdev: the percentage of samples falling within one standard deviation of the mean

  • Latency Distribution: the percentage of requests completed within each latency threshold

  • Requests/sec: average number of requests handled per second

  • Transfer/sec: average amount of data transferred per second
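As a quick sanity check, the summary numbers are self-consistent: Requests/sec is simply the total request count divided by the duration (values copied from the run above):

```shell
# Recompute Requests/sec from the totals wrk reported above.
requests=49908
duration=10.01
awk -v r="$requests" -v d="$duration" 'BEGIN { printf "%.2f\n", r / d }'   # -> 4985.81
```

The small gap from the reported 4985.86 comes from wrk printing a rounded duration while computing the rate from the exact one.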

Test Results and Analysis

Duration Test

[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c100 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.41ms   28.46ms  89.75ms   75.26%
    Req/Sec   626.35     70.19     1.22k    86.62%
  Latency Distribution
     50%    6.71ms
     75%   54.47ms
     90%   79.40ms
     99%   83.89ms
  49908 requests in 10.01s, 7.19MB read
Requests/sec:   4985.86
Transfer/sec:    735.26KB
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c100 -d1m http://192.168.0.184:30001/gwmanager
Running 1m test @ http://192.168.0.184:30001/gwmanager
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.98ms   29.04ms  95.57ms   74.34%
    Req/Sec   613.72     46.70   787.00     70.65%
  Latency Distribution
     50%    6.52ms
     75%   57.97ms
     90%   79.76ms
     99%   84.23ms
  293262 requests in 1.00m, 42.23MB read
Requests/sec:   4886.49
Transfer/sec:    720.58KB

Conclusion: with concurrency held constant, test duration has no effect on RPS.

Concurrency Test

[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c100 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.41ms   28.46ms  89.75ms   75.26%
    Req/Sec   626.35     70.19     1.22k    86.62%
  Latency Distribution
     50%    6.71ms
     75%   54.47ms
     90%   79.40ms
     99%   83.89ms
  49908 requests in 10.01s, 7.19MB read
Requests/sec:   4985.86
Transfer/sec:    735.26KB
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c200 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    41.65ms   32.40ms 103.63ms   47.76%
    Req/Sec   630.29     72.27     1.11k    75.25%
  Latency Distribution
     50%   44.15ms
     75%   82.33ms
     90%   87.58ms
     99%   93.23ms
  50215 requests in 10.01s, 7.23MB read
Requests/sec:   5016.90
Transfer/sec:    739.90KB
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c500 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   100.10ms   43.26ms 435.69ms   76.63%
    Req/Sec   611.35    105.46     2.14k    83.62%
  Latency Distribution
     50%   99.54ms
     75%  105.39ms
     90%  180.69ms
     99%  198.63ms
  48697 requests in 10.02s, 7.02MB read
Requests/sec:   4861.25
Transfer/sec:    717.47KB
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c1000 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   202.90ms   65.88ms 669.54ms   63.63%
    Req/Sec   597.58    130.27     1.10k    70.12%
  Latency Distribution
     50%  199.57ms
     75%  209.78ms
     90%  297.11ms
     99%  393.08ms
  47606 requests in 10.02s, 6.87MB read
Requests/sec:   4749.34
Transfer/sec:    701.98KB

Conclusion: as concurrency rises, RPS trends slightly downward while response time climbs steadily.
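The two trends are consistent with Little's law: once throughput is saturated, average latency grows roughly as connections / RPS. A quick check against the runs above (RPS values copied from the wrk output):

```shell
# Little's law sanity check: avg latency (ms) should be close to c / rps * 1000
# once the gateway is saturated. RPS values are from the runs above.
for pair in "200 5016.90" "500 4861.25" "1000 4749.34"; do
  set -- $pair
  awk -v c="$1" -v r="$2" \
    'BEGIN { printf "c=%-4d predicted avg latency = %.1f ms\n", c, c / r * 1000 }'
done
```

The predicted 39.9 ms, 102.9 ms and 210.6 ms line up well with the measured averages of 41.65 ms, 100.10 ms and 202.90 ms; the c=100 run sits below saturation, so the law fits it less tightly.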
