Merge pull request #846 from ywc689/v1.9.2
Release v1.9.2
Showing 10 changed files with 15,325 additions and 6 deletions.
```
@@ -1,11 +1,53 @@
-#!/bin/sh -
+#!/bin/sh
 # program: dpvs
 # Jan 4, 2022
 #
+# Rebase v1.8.12 to v1.9.0
+# Jul 19, 2022
+#

 export VERSION=1.9
-export RELEASE=1.alpha
+export RELEASE=2

 echo $VERSION-$RELEASE
+
+## Features
+#* Dpvs: Add ipset framework and 12 set types.
+#* Dpvs: Add an ipset based tc classifier -- tc_cls_ipset.
+#* Dpvs: Add l2/l3/l4 header parse apis for mbuf.
+#* Dpvs: Add config option "dedicated_queues" for bonding mode 4 (802.3ad).
+#* Dpvs: Isolate kni ingress traffic using kni address flow.
+#* Dpvs: Update rss reta table according to configured workers after device bootup.
+#* Dpvs: Expire quiescent connections after realserver was removed.
+#* Dpvs: Make async log mempool size and log timestamp configurable.
+#* Dpvs: Enable dpvs log only when macro CONFIG_DPVS_LOG is defined.
+#* Dpvs: Make debug fields in dp_vs_conn configurable for memory optimization.
+#* Toa: Support linux kernel version v5.7.0+.
+#* Keepalived: Add UDP_CHECK health checker.
+#* Test: Add flame graph scripts for performance tests.
+#* Test: Add performance benchmark tests of DPVS v1.9.2.
+#* Docs: Update some docs.
+#
+## Bugfix
+#* Dpvs: Fix a crash problem when timer is scheduled from within another timer's callback.
+#* Dpvs: Fix a crash problem caused by incorrect mbuf pointer in IPv4 fragmentation.
+#* Dpvs: Fix a crash problem caused by using unsafe list macro in conhash.
+#* Dpvs: Fix the fullnat tcp forwarding failure problem when defer_rs_syn enabled.
+#* Dpvs: Fix the ipvs rr/wrr/wlc problem of uneven load distribution across dests.
+#* Dpvs: Fix the weight ratio update problem in conhash schedule algorithm.
+#* Dpvs: Send tcp rst to both ends when snat connection expired.
+#* Dpvs: Use unified dest validation in mh scheduling algorithm.
+#* Dpvs: Fix the icmp sending failure problem when no route cached in mbuf.
+#* Dpvs: Fix the compiling failure problem when icmp debug is enabled.
+#* Dpvs: Fix the icmpv6 sending failure problem caused by incorrect mtu.
+#* Dpvs: Fix icmpv6 checksum error caused by incorrect payload length endian in ipv6 header.
+#* Dpvs: Fix the checksum problem caused by incorrect netif interface.
+#* Dpvs: Fix the bonding mode 4 problem caused by LACP failure.
+#* Dpvs: Fix the ipv6 neighbour ring full problem to kni isolated lcore.
+#* Dpvs: Fix the list/edit problem for MATCH type service (snat service).
+#* Dpvs: Fix incorrect oifname typo in MATCH type.
+#* Dpvs: Fix the dpvs worker blocking problem when async log is enabled.
+#* Dpvs: Fix some memory overflow problems when log messages are truncated.
+#* Dpvs: Fix the msg sequence duplicated problem in ipvs allow list.
+#* Dpvs: Fix the incorrect uoa client source port problem in fnat64.
+#* Uoa: Fix uoa data parse problem of ipv4 opp, and add a module parameter to parse uoa data in netfilter forward chain.
+#* Keepalived: Fix an exit problem when reloading.
+#* Keepalived: Fix some compile problems found on Ubuntu.
+#* Ipvsadm: Use correct flag in listing ipvs connections.
```

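Running the updated script simply prints the new release tag; the file's actual path is not shown in this view, so the name below is hypothetical:

```
$ sh dpvs-version.sh
1.9-2
```
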
```
* TCP CPS/CC Tests
workers,cps;ipackets/pps,opackets/pps,ibytes/Bps,obytes/Bps;connections;pktRx,pktTx,bitsRx,bitsTx,dropTx
1,200000;1211533,1211847,99143458,102396220;1472000;600020,599988,393618488,382378808,0
2,360000;2166961,2166955,177320954,183100299;2701000;1072119,1076034,703360424,685830112,0
4,660000;3960726,3960788,324114391,334680450;4941000;1980045,1980054,1298916032,1261958232,0
8,1060000;6360626,6360628,520511025,537472046;7949000;3180092,3180068,2086137680,2026768232,0
10,1240000;7440784,7440727,608903706,628741279;9299000;3718514,3719316,2439334056,2370499504,0
16,1070000;6420639,6420548,525422150,542537169;8019000;3210000,3209989,2105751088,2045839664,0 (cross-numa-node)

* UDP PPS Tests
workers,connections;ipackets/pps,opackets/pps,ibytes/Bps,obytes/Bps;pktRx,pktTx,bitsRx,bitsTx,dropTx
1,2900;2900244,2900221,174014668,174013684;1449993,1450000,695996816,498800000,0
2,5000;5000418,5000370,300024968,300022497;2499954,2500000,1199978096,860000000,0
4,9200;9201066,9201048,552063906,552062986;4486101,4600001,2153329128,1582400344,0
8,9450;9451027,9451004,567061568,567060365;4723923,4724932,2267483216,1625376608,0

* Throughput Tests
workers,connections;ipackets/pps,opackets/pps,ibytes/Bps,obytes/Bps;pktRx,pktTx,bitsRx,bitsTx,dropTx
1,1000;1424608,1424599,1215824068,1215816616;712263,712285,4866168760,4860632840,0
2,1000;1424748,1424738,1215947746,1215939706;712247,712263,4866065328,4860482712,0
4,1000;1424876,1424870,1216052235,1216047912;712258,712238,4866134600,4860312112,0
8,1000;1424788,1424787,1215971428,1215970249;712261,712260,4866160976,4860462240,0
```

DPVS v1.9.2 Performance Tests
===

* [Test Platform](#platform)
* [TCP CPS/CC Tests](#cps/cc)
* [UDP PPS Tests](#pps)
* [Throughput Tests](#throughput)

<a id='platform'/>

# Test Platform

The performance of DPVS v1.9.2 is examined on two physical servers: one serves as the DPVS server, and the other acts as both the backend server (RS) and the client (Client). RS and Client take advantage of [dperf](https://github.com/baidu/dperf), a high-performance benchmark tool based on DPDK and developed by Baidu. The dperf server process and the dperf client process use isolated NIC interfaces, CPU cores, and hugepage memory so that both can run on a single node.

### DPVS Server

+ CPU: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, 2 Sockets, 12 Cores per Socket, 2 Threads per Core
+ Memory: 188 GB
+ NIC: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2
+ OS: CentOS 7.6
+ DPVS: v1.9.2

### Dperf Server/Client

+ CPU: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz, 2 Sockets, 10 Cores per Socket, 2 Threads per Core
+ Memory: 62 GB
+ NIC: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2
+ OS: CentOS 7.6
+ Dperf: v1.2.0

<a id='cps/cc'/>

# TCP CPS/CC Tests

CPS (Connections per Second) and CC (Concurrent Connections) tests are performed using extremely small packets (payload_size=1) while varying the `cps` setting of the dperf clients. We gradually increase the dperf clients' `cps` until packet loss is seen in DPVS; the corresponding CPS and CC are then the performance data we are after.

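Such a sweep is easy to script. A minimal sketch is shown below, assuming the dperf client configuration lives in a file named `client.conf` (a hypothetical name):

```
# Hypothetical sweep: rerun the dperf client with increasing cps, then
# check DPVS-side drop counters between runs (e.g. with `dpip link show`).
for cps in 900k 1000k 1100k 1200k; do
    sed -i "s/^cps .*/cps $cps/" client.conf
    ./dperf -c client.conf > result_$cps.log
done
```
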
### Dperf Client

```
mode client
cpu 8-15
slow_start 60
tx_burst 128
launch_num 10
payload_size 1
duration 90s
protocol tcp
cps [refer to performance data]
port 0000:04:00.0 192.168.0.30 192.168.7.254
client 192.168.3.0 50
server 192.168.5.1 8
listen 80 1
```

### Dperf Server

```
mode server
cpu 0-7
tx_burst 128
payload_size 1
duration 100d
port 0000:04:00.1 192.168.1.30 192.168.7.254
client 192.168.0.28 1
client 192.168.1.28 1
client 192.168.1.30 1
client 192.168.3.0 200
server 192.168.6.100 8
listen 80 1
```

### DPVS

+ Service: 192.168.5.[1-8]:80, TCP, FullNAT, rr, syn-proxy off
+ Local IP: 192.168.3.[100-149]

```
TCP 192.168.5.1:80 rr
  -> 192.168.6.100:80 FullNat 100 0 4
  -> 192.168.6.101:80 FullNat 100 0 4
  -> 192.168.6.102:80 FullNat 100 0 2
  -> 192.168.6.103:80 FullNat 100 0 1
  -> 192.168.6.104:80 FullNat 100 0 0
  -> 192.168.6.105:80 FullNat 100 0 0
  -> 192.168.6.106:80 FullNat 100 0 1
  -> 192.168.6.107:80 FullNat 100 0 2
TCP 192.168.5.2:80 rr
  -> 192.168.6.100:80 FullNat 100 0 1
  -> 192.168.6.101:80 FullNat 100 0 2
  ...
  ...
```

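For reference, a hedged sketch of how the first of these services might be created with DPVS's `dpip` tool and its patched `ipvsadm`; the interface name `dpdk0` and the /24 prefix are assumptions, and only one VIP, one realserver, and one local IP are shown:

```
# Add the VIP to the DPDK interface (dpdk0 is an assumed name).
dpip addr add 192.168.5.1/24 dev dpdk0
# Create the TCP service with round-robin scheduling.
ipvsadm -A -t 192.168.5.1:80 -s rr
# Add a realserver; -b selects FullNAT forwarding in DPVS's ipvsadm.
ipvsadm -a -t 192.168.5.1:80 -r 192.168.6.100:80 -b
# Add a local IP (LIP) used for FullNAT source address translation.
ipvsadm --add-laddr -z 192.168.3.100 -t 192.168.5.1:80 -F dpdk0
```
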
### Performance Data

| workers | cps | ipackets/pps | opackets/pps | ibytes/Bps | obytes/Bps | connections | dperf:pktRx | dperf:pktTx | dperf:bitsRx | dperf:bitsTx | dperf:dropTx |
| ------- | --------- | ------------ | ------------ | ----------- | ----------- | ----------- | ----------- | ----------- | ------------- | ------------- | ------------ |
| 1 | 200,000 | 1,211,533 | 1,211,847 | 99,143,458 | 102,396,220 | 1,472,000 | 600,020 | 599,988 | 393,618,488 | 382,378,808 | 0 |
| 2 | 360,000 | 2,166,961 | 2,166,955 | 177,320,954 | 183,100,299 | 2,701,000 | 1,072,119 | 1,076,034 | 703,360,424 | 685,830,112 | 0 |
| 4 | 660,000 | 3,960,726 | 3,960,788 | 324,114,391 | 334,680,450 | 4,941,000 | 1,980,045 | 1,980,054 | 1,298,916,032 | 1,261,958,232 | 0 |
| 8 | 1,060,000 | 6,360,626 | 6,360,628 | 520,511,025 | 537,472,046 | 7,949,000 | 3,180,092 | 3,180,068 | 2,086,137,680 | 2,026,768,232 | 0 |
| 10 | 1,240,000 | 7,440,784 | 7,440,727 | 608,903,706 | 628,741,279 | 9,299,000 | 3,718,514 | 3,719,316 | 2,439,334,056 | 2,370,499,504 | 0 |
| 16 | 1,070,000 | 6,420,639 | 6,420,548 | 525,422,150 | 542,537,169 | 8,019,000 | 3,210,000 | 3,209,989 | 2,105,751,088 | 2,045,839,664 | 0 |

![CPS/CC](./pics/tcp_cps.png)

In the 8-worker case, DPVS v1.9.2 can establish **1,000,000 new connections per second** while holding **8,000,000 concurrent connections**. Performance scales approximately linearly while the number of workers is below 10, but an obvious performance loss is seen with 16 workers. One reason is that DPVS does not eliminate all race conditions in the datapath, and the problem worsens as the number of workers grows. Besides, with 16 workers some DPVS workers are assigned to CPU cores on a NUMA socket different from the NIC's, since our DPVS server has only 12 CPU cores available per socket.

Let's take a closer look at DPVS's `cpu-clock` events with the Linux performance analysis tool `perf`. We build DPVS with debug info and then run the CPS/CC tests with 1 worker and 8 workers, with the dperf `cps` configured to 100,000 and 600,000 respectively. The resulting flame graphs are shown below.

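For reference, a minimal sketch of how such flame graphs are typically produced with `perf` and Brendan Gregg's [FlameGraph](https://github.com/brendangregg/FlameGraph) scripts; the sampled core, duration, and file names here are assumptions, not the exact commands used:

```
# Sample cpu-clock events with call graphs on an assumed DPVS worker core (CPU 1).
perf record -e cpu-clock -g -C 1 -- sleep 60
perf script > out.perf
# Fold the stacks and render the SVG with the FlameGraph scripts.
./stackcollapse-perf.pl out.perf > out.folded
./flamegraph.pl out.folded > worker1.svg
```
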
![perf-flame-worker-1](./pics/worker1.svg)

![perf-flame-worker-8](./pics/worker8.svg)

<a id='pps'/>

# UDP PPS Tests

In the PPS tests, dperf clients keep a fixed `cps` of 3k and a `keepalive` of 2ms, and adjust the concurrent connections `cc` to generate different `pps` traffic. As with the CPS/CC tests, an extremely small payload of 1 byte is used, and the protocol is UDP. Besides, `launch_num` in the dperf client is set to 1 to reduce traffic surges.

### Dperf Client

```
mode client
cpu 8-15
slow_start 60
tx_burst 128
launch_num 1
payload_size 1
duration 90s
protocol udp
cps 3k
cc [refer to performance data]
keepalive 2ms
port 0000:04:00.0 192.168.0.30 192.168.7.254
client 192.168.3.0 50
server 192.168.5.1 8
listen 80 1
```

### Dperf Server

```
mode server
cpu 0-7
tx_burst 128
payload_size 1
duration 100d
protocol udp
keepalive 10s
port 0000:04:00.1 192.168.1.30 192.168.7.254
client 192.168.0.28 1
client 192.168.1.28 1
client 192.168.1.30 1
client 192.168.3.0 200
server 192.168.6.100 8
listen 80 1
```

### DPVS

+ Service: 192.168.5.[1-8]:80, UDP, FullNAT, rr, uoa off
+ Local IP: 192.168.3.[100-149]

```
UDP 192.168.5.1:80 rr
  -> 192.168.6.100:80 FullNat 100 0 0
  -> 192.168.6.101:80 FullNat 100 0 0
  -> 192.168.6.102:80 FullNat 100 0 0
  -> 192.168.6.103:80 FullNat 100 0 0
  -> 192.168.6.104:80 FullNat 100 0 0
  -> 192.168.6.105:80 FullNat 100 0 0
  -> 192.168.6.106:80 FullNat 100 0 0
  -> 192.168.6.107:80 FullNat 100 0 0
UDP 192.168.5.2:80 rr
  -> 192.168.6.100:80 FullNat 100 0 0
  -> 192.168.6.101:80 FullNat 100 0 0
  ...
  ...
```

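The UDP services can be created analogously to the TCP sketch above; in ipvsadm, `-u` declares a UDP service (again a sketch, with `dpdk0` assumed):

```
# UDP service with round-robin scheduling and a FullNAT realserver.
ipvsadm -A -u 192.168.5.1:80 -s rr
ipvsadm -a -u 192.168.5.1:80 -r 192.168.6.100:80 -b
```
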
### Performance Data

| workers | connections | ipackets/pps | opackets/pps | ibytes/Bps | obytes/Bps | dperf:pktRx | dperf:pktTx | dperf:bitsRx | dperf:bitsTx | dperf:dropTx |
| ------- | ----------- | ------------ | ------------ | ----------- | ----------- | ----------- | ----------- | ------------- | ------------- | ------------ |
| 1 | 2,900 | 2,900,244 | 2,900,221 | 174,014,668 | 174,013,684 | 1,449,993 | 1,450,000 | 695,996,816 | 498,800,000 | 0 |
| 2 | 5,000 | 5,000,418 | 5,000,370 | 300,024,968 | 300,022,497 | 2,499,954 | 2,500,000 | 1,199,978,096 | 860,000,000 | 0 |
| 4 | 9,200 | 9,201,066 | 9,201,048 | 552,063,906 | 552,062,986 | 4,486,101 | 4,600,001 | 2,153,329,128 | 1,582,400,344 | 0 |
| 8 | 9,450 | 9,451,027 | 9,451,004 | 567,061,568 | 567,060,365 | 4,723,923 | 4,724,932 | 2,267,483,216 | 1,625,376,608 | 0 |

![PPS](./pics/udp_pps.png)

As shown above, DPVS v1.9.2 reaches its PPS peak (about 9,000,000 PPS) with 4 workers in these tests; a 25G/100G NIC may be needed for a higher PPS test. A rough line-rate calculation follows.

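The calculation below suggests the bottleneck here is packet rate rather than bandwidth; it assumes standard Ethernet framing (minimum 64-byte frame, 8-byte preamble, 12-byte inter-frame gap):

```
frame: 14 B Eth + 20 B IP + 8 B UDP + 1 B payload = 43 B, padded to 60 B + 4 B FCS = 64 B
per-frame wire cost incl. 8 B preamble + 12 B inter-frame gap: 84 B = 672 bits
10GE theoretical maximum: 10^10 / 672 ≈ 14.88 Mpps per direction
observed: ~9.45 Mpps forwarded each way, i.e. only ~4.5 Gb/s of the 10G link
```
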
<a id='throughput'/>

# Throughput Tests

In the throughput tests, dperf clients keep a fixed `cps` of 400 and a `keepalive` of 1ms, and adjust the concurrent connections `cc` to generate different `pps` traffic. The `payload_size` of both the dperf server and the dperf client is set to 800 bytes, and the TCP protocol is used.

### Dperf Client

```
mode client
cpu 8-15
slow_start 60
tx_burst 128
launch_num 10
payload_size 800
duration 90s
protocol tcp
cps 400
cc [refer to performance data]
keepalive 1ms
port 0000:04:00.0 192.168.0.30 192.168.7.254
client 192.168.3.0 50
server 192.168.5.1 8
listen 80 1
```

### Dperf Server

```
mode server
cpu 0-7
tx_burst 128
payload_size 800
duration 100d
protocol tcp
keepalive 10s
port 0000:04:00.1 192.168.1.30 192.168.7.254
client 192.168.0.28 1
client 192.168.1.28 1
client 192.168.1.30 1
client 192.168.3.0 200
server 192.168.6.100 8
listen 80 1
```

### DPVS

The DPVS configuration is the same as in the `TCP CPS/CC Tests`.

### Performance Data

| workers | connections | ipackets/pps | opackets/pps | ibytes/Bps | obytes/Bps | dperf:pktRx | dperf:pktTx | dperf:bitsRx | dperf:bitsTx | dperf:dropTx |
| ------- | ----------- | ------------ | ------------ | ------------- | ------------- | ----------- | ----------- | ------------- | ------------- | ------------ |
| 1 | 1,000 | 1,424,608 | 1,424,599 | 1,215,824,068 | 1,215,816,616 | 712,263 | 712,285 | 4,866,168,760 | 4,860,632,840 | 0 |
| 2 | 1,000 | 1,424,748 | 1,424,738 | 1,215,947,746 | 1,215,939,706 | 712,247 | 712,263 | 4,866,065,328 | 4,860,482,712 | 0 |
| 4 | 1,000 | 1,424,876 | 1,424,870 | 1,216,052,235 | 1,216,047,912 | 712,258 | 712,238 | 4,866,134,600 | 4,860,312,112 | 0 |
| 8 | 1,000 | 1,424,788 | 1,424,787 | 1,215,971,428 | 1,215,970,249 | 712,261 | 712,260 | 4,866,160,976 | 4,860,462,240 | 0 |

![Throughput](./pics/tcp_throughput.png)

As shown above, DPVS v1.9.2 easily saturates the full bandwidth of the 10G NIC with only one worker.

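A back-of-the-envelope check of this claim from the 1-worker row, assuming the byte counters exclude layer-1 framing overhead (8 B preamble + 4 B FCS + 12 B inter-frame gap = 24 B per frame):

```
payload + headers: 1,215,824,068 B/s x 8 bits  ≈ 9.73 Gb/s
L1 overhead:       1,424,608 pkt/s x 24 B x 8  ≈ 0.27 Gb/s
total on the wire:                             ≈ 10.0 Gb/s, i.e. 10GE line rate
```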