Test Conclusions
(1) A NIC's real large-packet throughput is about 80%-90% of its rated bandwidth; for example, a 10G NIC sustains at most roughly 8-9 Gb/s (note the unit is bits).
(2) A 1G NIC handles small packets at roughly 560 Kpck/s.
(3) A NIC's bandwidth rating refers to unidirectional (per-direction) bandwidth.
1 TCP Performance
1.1 Network Setup
1.2 Test
Test method: server2 acts as the server and binds port 9999; server1, as the client, establishes N connections to server2. On server1, each connection has its own thread that sends data continuously, 10 KB per send. On server2, each connection has its own thread that measures the receive rate; the per-second packet count is observed with sar -n DEV 20 1000.
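The report does not include the test program; a minimal Python sketch of the method above could look as follows. It is a loopback demo with a single connection, a short hypothetical duration, and an ephemeral port instead of 9999; the real test ran N connections across two hosts.

```python
import socket
import threading
import time

PAYLOAD = b"x" * (10 * 1024)   # 10 KB per send, as in the test
DURATION = 0.5                 # seconds to run (short demo value)
received = 0                   # total bytes seen by the receiver

def receiver(srv):
    """server2 side: accept one connection and count received bytes."""
    global received
    conn, _ = srv.accept()
    conn.settimeout(2.0)
    try:
        while True:
            data = conn.recv(65536)
            if not data:           # client closed the write side
                break
            received += len(data)
    except socket.timeout:
        pass
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))         # ephemeral port for the demo
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=receiver, args=(srv,))
t.start()

# server1 side: one thread per connection, looping on 10 KB sends.
cli = socket.create_connection(("127.0.0.1", port))
end = time.time() + DURATION
while time.time() < end:
    cli.sendall(PAYLOAD)
cli.shutdown(socket.SHUT_WR)
cli.close()
t.join()
srv.close()
print("throughput: %.1f MB/s" % (received / DURATION / 1e6))
```

The real test additionally read the interface packet counters with sar while the transfer ran.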
Connections | Total throughput | Packet rate
1  | 550 MB/s  |
2  | 620 MB/s  |
4  | 900 MB/s  |
8  | 1000 MB/s | 750,000 pck/s
10 | 900 MB/s  |
15 | 800 MB/s  |
20 | 740 MB/s  |
2 UDP Performance
2.1 Network Setup
2.2 Multiple Ports
Test method:
server2 binds N ports, one thread per port. server1 starts N threads, each sending UDP packets to a different port; the packet size is 1400 bytes. On server2 the receive buffer is set to 255 KB.
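A sketch of the receiver side under these assumptions (N ports, one thread per port, 255 KB receive buffer) is shown below as a self-contained loopback demo; the port numbers are ephemeral and the sender loop stands in for server1's threads.

```python
import socket
import threading

RCVBUF = 255 * 1024          # 255 KB receive buffer, as in the test
N = 2                        # number of ports (the real test varied this)
counters = [0] * N
socks = []

def port_thread(sock, idx):
    """One thread per port: tally received bytes until the socket times out."""
    while True:
        try:
            data, _ = sock.recvfrom(2048)
        except socket.timeout:
            return
        counters[idx] += len(data)

for i in range(N):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RCVBUF)
    s.bind(("127.0.0.1", 0))         # ephemeral ports for the demo
    s.settimeout(0.5)
    socks.append(s)

threads = [threading.Thread(target=port_thread, args=(s, i))
           for i, s in enumerate(socks)]
for t in threads:
    t.start()

# Stand-in for server1's sender threads: 1400-byte datagrams to each port.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for s in socks:
    for _ in range(10):
        tx.sendto(b"x" * 1400, s.getsockname())
for t in threads:
    t.join()
for s in socks:
    s.close()
tx.close()
print(counters)
```

Note that the kernel may adjust the effective buffer size requested via SO_RCVBUF; the value set here mirrors the report's 255 KB setting.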
Connections | Throughput | Packet rate
1  | 560 MB/s  |
2  | 1000 MB/s | 750,000 pck/s
4  | 1000 MB/s |
8  | 830 MB/s  |
10 | 760 MB/s  |
15 | 700 MB/s  |
20 | 700 MB/s  |
2.3 Single Port
Test method:
server2 binds a single port. server1 starts N threads, each sending UDP packets to that same port; the packet size is 1400 bytes. On server2 the receive buffer is set to 255 KB.
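The sender side of this variant (N threads all targeting one port) can be sketched as below; this is a loopback demo with hypothetical small packet counts, where the single receiving socket stands in for server2's one bound port.

```python
import socket
import threading

PKT = b"x" * 1400    # 1400-byte UDP payload, as in the test
N_THREADS = 4        # the real test varied this from 2 to 20
PER_THREAD = 40      # demo packet count; the real test sends continuously

# Demo receiver standing in for server2's single bound port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 255 * 1024)
rx.bind(("127.0.0.1", 0))
addr = rx.getsockname()

def sender():
    """One of server1's N threads: all aim at the same port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(PER_THREAD):
        s.sendto(PKT, addr)
    s.close()

threads = [threading.Thread(target=sender) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Drain whatever the receive buffer holds.
rx.settimeout(0.5)
got = 0
try:
    while True:
        got += len(rx.recv(2048))
except socket.timeout:
    pass
rx.close()
print("received %d bytes" % got)
```

If the senders outrun the receiver, datagrams beyond the 255 KB buffer are silently dropped, which is exactly the effect the receive-buffer setting probes.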
Connections | Throughput | Packet rate (pck/s)
2  | 850 MB/s  |
4  | 1000 MB/s |
8  | 840 MB/s  |
10 | 770 MB/s  |
15 | 700 MB/s  |
20 | 700 MB/s  |
3 Test Three
3.1 Network Setup
server-1
root@ubuntu:home# dmesg |grep -i em1
[ 17.417793] ixgbe 0000:03:00.0 em1: detected SFP+: 6
[ 17.549678] ixgbe 0000:03:00.0 em1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
root@ubuntu:home# ethtool em1
Settings for em1:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes: 10000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 10000Mb/s # NIC bandwidth
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: off
Supports Wake-on: umbg
Wake-on: g
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes
root@ubuntu:home# lspci |grep -i eth
03:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) # NIC model
03:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
[root@nqadb home]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1 # 1 hardware thread per core, i.e. no hyper-threading
Core(s) per socket: 4 # 4 cores per CPU
CPU socket(s): 2 # 2 physical CPUs
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 44
Stepping: 2
CPU MHz: 1200.000
BogoMIPS: 4265.06
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0,2,4,6
NUMA node1 CPU(s): 1,3,5,7
server-2
[root@nqadb home]# dmesg |grep -i eth0
bnx2 0000:03:00.0: eth0: Broadcom NetXtreme II BCM5709 1000Base-T (C0) PCI Express found at mem f4000000, IRQ 16, node addr e8:39:35:23:4f:84
[root@nqadb home]# ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: Unknown
Supports Wake-on: g
Wake-on: g
Link detected: yes
[root@nqadb home]# lspci |grep -i eth
03:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
03:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
04:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
04:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
[root@nqadb home]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
CPU socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 44
Stepping: 2
CPU MHz: 1200.000
BogoMIPS: 4265.06
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0,2,4,6
NUMA node1 CPU(s): 1,3,5,7
3.2 Small-Packet Test
Test objective: measure the small-packet handling capacity of server-2's NIC.
Test method: the sending process runs on server-1 and the receiving process on server-2; the payload is 64 bytes over UDP, so each IP packet is actually 64 + 8 (UDP header) + 20 (IP header) = 92 bytes.
Test result: the 1G NIC's peak small-packet rate is 560 Kpck/s.
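For context, the theoretical packet-rate ceiling of a 1 Gb/s link for this packet size can be worked out from the on-wire framing, adding the Ethernet header, FCS, preamble, and inter-frame gap that the 92-byte IP figure excludes:

```python
# Theoretical max packet rate on a 1 Gb/s link for 64-byte UDP payloads.
payload = 64
ip_packet = payload + 8 + 20          # UDP header + IP header = 92 bytes
# Per-frame Ethernet overhead: 14 header + 4 FCS + 8 preamble + 12 IFG.
wire_bytes = ip_packet + 14 + 4 + 8 + 12
line_rate = 1_000_000_000 / 8         # 1 Gb/s in bytes/s
max_pps = line_rate / wire_bytes
print("wire bytes per packet: %d" % wire_bytes)   # 130
print("theoretical max: %.0f Kpck/s" % (max_pps / 1000))
```

The measured 560 Kpck/s is well below this ~960 Kpck/s wire-rate ceiling, so the limit here is per-packet processing rather than link bandwidth.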
Sender threads | rxKpck/s | rxMB/s
1 | 154 | 9
2 | 300 | 18
3 | 440 | 26
4 | 560 | 35
5 | 500 | 30
6 | 365 | 22
3.3 Large-Packet Test
Test objective: measure the large-packet handling capacity of server-2's NIC.
Test method: the sending process runs on server-1 and the receiving process on server-2; the payload is 1460 bytes over UDP, so each IP packet is actually 1460 + 8 (UDP header) + 20 (IP header) = 1488 bytes.
Test result: the 1G NIC's peak large-packet throughput is about 900 Mb/s (112 MB/s).
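The 80 Kpck/s plateau in the table below matches the wire-rate ceiling of a 1 Gb/s link for this packet size, which can be worked out from the on-wire framing:

```python
# Wire-rate ceiling of a 1 Gb/s link for 1460-byte UDP payloads.
payload = 1460
ip_packet = payload + 8 + 20          # UDP + IP headers = 1488 bytes
# Per-frame Ethernet overhead: 14 header + 4 FCS + 8 preamble + 12 IFG.
wire_bytes = ip_packet + 14 + 4 + 8 + 12    # 1526 bytes on the wire
line_rate = 1_000_000_000 / 8               # 1 Gb/s in bytes/s
max_pps = line_rate / wire_bytes            # ~81.9 Kpck/s
goodput = max_pps * payload                 # ~119.6 MB/s of payload
print("max rate: %.1f Kpck/s, payload: %.1f MB/s" % (max_pps / 1e3, goodput / 1e6))
```

The measured 80 Kpck/s and 112 MB/s are within a few percent of this ceiling, consistent with conclusion (1): large packets drive the link close to its rated bandwidth.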
Sender threads | rxKpck/s | rxMB/s
1 | 80 | 112
2 | 80 | 112
3 | 80 | 112
3.4 NIC Bidirectional Test
Test objective: is the NIC's 1G bandwidth per direction, or the sum of upstream and downstream?
Test method:
Sending process A runs on server-1 and sends packets to port 8000 on server-2.
Sending process B runs on server-2 and sends packets to port 9000 on server-1.
Receiving process C runs on server-2 and receives on port 8000.
Receiving process D runs on server-1 and receives on port 9000.
Each process has a single thread; the payload is 1460 bytes over UDP, so each IP packet is actually 1460 + 8 + 20 = 1488 bytes.
Run the sar command on server-2:
[root@nqadb ~]# sar -n DEV 10 50
04:04:01 PM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
04:04:11 PM eth0 83772.62 84014.56 123204.50 123560.38 0.00 0.00 0.00
Test result: the NIC's bandwidth rating refers to unidirectional bandwidth. The sar output shows that upstream and downstream can each reach about 960 Mb/s at the same time.
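As a check, the sar sample can be converted to per-direction bit rates (assuming sar's kB means 1000 bytes here):

```python
# Convert the sar sample above to bit rates per direction.
rx_kBps = 123204.50   # rxkB/s column from the sar output
tx_kBps = 123560.38   # txkB/s column from the sar output

rx_mbps = rx_kBps * 1000 * 8 / 1e6   # kB/s -> Mb/s
tx_mbps = tx_kBps * 1000 * 8 / 1e6
print("rx: %.0f Mb/s, tx: %.0f Mb/s" % (rx_mbps, tx_mbps))
```

Both directions exceed 900 Mb/s simultaneously, so the 1G rating clearly applies per direction, not to the sum of the two.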