[R&D] AF_XDP performance #860

Open · glazychev-art opened this issue Apr 28, 2023 · 3 comments

@glazychev-art (Contributor)

Description

During the AF_XDP integration, we noticed several performance issues.

Measurements on Kind

iperf3 TCP
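
The exact iperf3 invocation is not recorded in this issue, so the commands below are only an assumed minimal reproduction (server address taken from the logs, default 10-second single-stream TCP test):

# on the server side (172.16.1.100 in the logs below)
iperf3 -s

# on the client side: default TCP test, 10 seconds
iperf3 -c 172.16.1.100 -t 10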

Ethernet remote mechanism (VxLAN)

AF_PACKET:

Connecting to host 172.16.1.100, port 5201
[  5] local 172.16.1.101 port 43488 connected to 172.16.1.100 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  46.6 MBytes   391 Mbits/sec  174    969 KBytes       
[  5]   1.00-2.00   sec  48.8 MBytes   409 Mbits/sec    0   1.02 MBytes       
[  5]   2.00-3.00   sec  58.8 MBytes   493 Mbits/sec    0   1.07 MBytes       
[  5]   3.00-4.00   sec  53.8 MBytes   451 Mbits/sec    0   1.10 MBytes       
[  5]   4.00-5.00   sec  46.2 MBytes   388 Mbits/sec    0   1.12 MBytes       
[  5]   5.00-6.00   sec  62.5 MBytes   524 Mbits/sec    0   1.13 MBytes       
[  5]   6.00-7.00   sec  45.0 MBytes   377 Mbits/sec    0   1.14 MBytes       
[  5]   7.00-8.00   sec  65.0 MBytes   545 Mbits/sec    0   1.18 MBytes       
[  5]   8.00-9.00   sec  56.2 MBytes   472 Mbits/sec    0   1.22 MBytes       
[  5]   9.00-10.00  sec  45.0 MBytes   377 Mbits/sec    0   1.24 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   528 MBytes   443 Mbits/sec  174             sender
[  5]   0.00-10.08  sec   526 MBytes   438 Mbits/sec                  receiver

AF_XDP:

Connecting to host 172.16.1.100, port 5201
[  5] local 172.16.1.101 port 36586 connected to 172.16.1.100 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  46.9 MBytes   393 Mbits/sec  1326    113 KBytes       
[  5]   1.00-2.00   sec  41.3 MBytes   346 Mbits/sec  1114   42.2 KBytes       
[  5]   2.00-3.00   sec  36.2 MBytes   304 Mbits/sec  1058   34.0 KBytes       
[  5]   3.00-4.00   sec  54.2 MBytes   455 Mbits/sec  1560   20.4 KBytes       
[  5]   4.00-5.00   sec  36.3 MBytes   304 Mbits/sec  1149   44.9 KBytes       
[  5]   5.00-6.00   sec  27.9 MBytes   234 Mbits/sec  953   20.4 KBytes       
[  5]   6.00-7.00   sec  37.9 MBytes   318 Mbits/sec  1106   25.9 KBytes       
[  5]   7.00-8.00   sec  33.1 MBytes   278 Mbits/sec  964   25.9 KBytes       
[  5]   8.00-9.00   sec  39.2 MBytes   329 Mbits/sec  1448   32.7 KBytes       
[  5]   9.00-10.00  sec  51.1 MBytes   429 Mbits/sec  1445   23.1 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   404 MBytes   339 Mbits/sec  12123             sender
[  5]   0.00-10.00  sec   403 MBytes   338 Mbits/sec                  receiver

Note the large number of retransmissions (Retr).

IP remote mechanism (Wireguard)

AF_PACKET:

Connecting to host 172.16.1.100, port 5201
[  5] local 172.16.1.101 port 49978 connected to 172.16.1.100 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  88.3 MBytes   740 Mbits/sec    2    487 KBytes       
[  5]   1.00-2.00   sec  87.4 MBytes   733 Mbits/sec    0    606 KBytes       
[  5]   2.00-3.00   sec  76.5 MBytes   642 Mbits/sec    6    495 KBytes       
[  5]   3.00-4.00   sec  74.6 MBytes   626 Mbits/sec    0    596 KBytes       
[  5]   4.00-5.00   sec  42.3 MBytes   355 Mbits/sec    0    649 KBytes       
[  5]   5.00-6.00   sec  21.7 MBytes   182 Mbits/sec    8    473 KBytes       
[  5]   6.00-7.00   sec  36.9 MBytes   310 Mbits/sec    0    545 KBytes       
[  5]   7.00-8.00   sec  88.9 MBytes   746 Mbits/sec    0    636 KBytes       
[  5]   8.00-9.00   sec  82.4 MBytes   691 Mbits/sec    8    539 KBytes       
[  5]   9.00-10.00  sec  92.0 MBytes   772 Mbits/sec    0    664 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   691 MBytes   580 Mbits/sec   24             sender
[  5]   0.00-10.03  sec   690 MBytes   577 Mbits/sec                  receiver

AF_XDP:

Connecting to host 172.16.1.100, port 5201
[  5] local 172.16.1.101 port 46608 connected to 172.16.1.100 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   104 MBytes   873 Mbits/sec   47    645 KBytes       
[  5]   1.00-2.00   sec  98.7 MBytes   828 Mbits/sec   39    538 KBytes       
[  5]   2.00-3.00   sec  90.9 MBytes   763 Mbits/sec    0    655 KBytes       
[  5]   3.00-4.00   sec  65.2 MBytes   547 Mbits/sec   14    533 KBytes       
[  5]   4.00-5.00   sec  53.3 MBytes   447 Mbits/sec    7    603 KBytes       
[  5]   5.00-6.00   sec  52.4 MBytes   440 Mbits/sec    0    660 KBytes       
[  5]   6.00-7.00   sec  39.1 MBytes   328 Mbits/sec    8    526 KBytes       
[  5]   7.00-8.00   sec  38.7 MBytes   325 Mbits/sec    0    587 KBytes       
[  5]   8.00-9.00   sec  94.8 MBytes   796 Mbits/sec    0    675 KBytes       
[  5]   9.00-10.00  sec  96.0 MBytes   805 Mbits/sec    7    618 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   733 MBytes   615 Mbits/sec  122             sender
[  5]   0.00-10.05  sec   732 MBytes   611 Mbits/sec                  receiver

Conclusions

Average of 10 runs
Ethernet:
AF_PACKET is faster than AF_XDP by ~13% (460.3 Mbits/sec vs 407.2 Mbits/sec)
IP:
AF_XDP is roughly equal to AF_PACKET (372.1 Mbits/sec vs 370.2 Mbits/sec)

Ideas why this is happening

  1. If we look at the iperf3 logs, we see a huge number of retransmissions:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  46.9 MBytes   393 Mbits/sec  1326    113 KBytes       
[  5]   1.00-2.00   sec  41.3 MBytes   346 Mbits/sec  1114   42.2 KBytes
...

(we don't see them with AF_PACKET; a way to double-check this outside of iperf3 is sketched after this list)
2. We were able to reproduce something similar on bare VPP instances:
https://lists.fd.io/g/vpp-dev/topic/af_xdp_performance/98105671
3. If we look at the VPP Gerrit, there are several open af_xdp patches:
https://gerrit.fd.io/r/c/vpp/+/37653
https://gerrit.fd.io/r/c/vpp/+/38135
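
A quick way to double-check item 1 outside of iperf3 (only a suggested check, not something recorded in this issue) is to read the kernel TCP counters and the per-socket state on the client while the test is running:

# system-wide retransmission counter; it should grow rapidly during the AF_XDP run
nstat -az TcpRetransSegs

# per-connection view of retransmissions and congestion window for the iperf3 socket
ss -ti dst 172.16.1.100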

@glazychev-art (Contributor Author)

What we need to check:

  1. Various ethtool options as well as af_xdp creation options (a starting sketch follows this list) - 4h
  2. All unmerged/WIP PRs for the af_xdp plugin - https://gerrit.fd.io/r/q/af_xdp - 7h
  3. Find information and practically check whether veth works correctly with AF_XDP in native mode. Is the ZERO_COPY flag supported? - 4h
  4. Check the interfaces of public clusters. Perhaps we are seeing poor performance due to the underlying interface types - 4h
  5. Find the cause of the many TCP retransmissions:
  • check whether there are retransmissions without using vpp - see xdp-tutorial - 2h
  • dive deep into the vpp plugin code to find the reason - af-xdp plugin - ?
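
For items 1 and 3, the kind of knobs to sweep looks roughly like the sketch below. The interface name eth0 and the option values are placeholders, and the vppctl line assumes the stock VPP af_xdp plugin syntax:

# host-side offload and queue settings to vary (values are examples only)
ethtool -k eth0                    # list the current offload settings
ethtool -K eth0 tso off gro off    # toggle offloads one by one
ethtool -L eth0 combined 1         # align the queue count with the AF_XDP socket setup

# AF_XDP interface creation in VPP; other creation options (e.g. zero-copy behavior, see item 3) would be varied here as well
vppctl create interface af_xdp host-if eth0 num-rx-queues all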

@denis-tingaikin denis-tingaikin changed the title AF_XDP performance [RND] AF_XDP performance May 29, 2023
@denis-tingaikin denis-tingaikin changed the title [RND] AF_XDP performance [R&D] AF_XDP performance May 29, 2023
@glazychev-art (Contributor Author)

glazychev-art commented May 30, 2023

Current state:

  • Managed to reproduce and fix this on VPP instances. One of the causes of the problem is TSO (TCP Segmentation Offload) being enabled on the interfaces (workaround: ethtool -K {interface_name} tx off; see the sketch after the iperf3 output below).
  • A few unmerged PRs from https://gerrit.fd.io/r/q/af_xdp help increase the performance. With them, AF_XDP is faster than AF_PACKET by about 10% (iperf3 was used).
  • There may be a problem with UDP transmission - the iperf3 server may not stop, but instead keeps waiting for new packets from the client:
...
[  5]  10.00-11.00  sec  1.40 MBytes  11.7 Mbits/sec  5.516 ms  936/1368 (68%)  
[  5]  11.00-12.00  sec   159 KBytes  1.30 Mbits/sec  7.473 ms  0/48 (0%)  
[  5]  12.00-13.00  sec  0.00 Bytes  0.00 bits/sec  7.473 ms  0/0 (0%)  
[  5]  13.00-14.00  sec  0.00 Bytes  0.00 bits/sec  7.473 ms  0/0 (0%)  
[  5]  14.00-15.00  sec  0.00 Bytes  0.00 bits/sec  7.473 ms  0/0 (0%)  
...
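
For the first bullet, the check and the workaround look roughly like this (eth0 stands for {interface_name}; disabling TX checksum offload also forces TSO off, and tso off can be used to disable TSO alone):

# see which offloads are currently enabled
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|tx-checksumming'

# workaround from the first bullet: disable TX offloads on the interface
ethtool -K eth0 tx off

# alternative: disable only TCP segmentation offload
ethtool -K eth0 tso off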

Next steps:

  1. Check the fix on a kind cluster with NSM. If it does not help:
    • look for the cause using packet dumps and Wireshark - 8h
    • modify xdp-program - 2h
    • vpp fix - 4h
  2. Check the fix on public clusters:
    • GKE - 2h
    • AKS - 2h
    • AWS (a Wireguard problem was found) - 4h
  3. Measure the performance with memif clients
  4. Check UDP behavior (an example run is sketched below)
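
For item 4, the UDP behavior can be reproduced with a plain iperf3 UDP run; the rate and duration below are assumed, not taken from the output above:

# server side
iperf3 -s

# client side: UDP test at a fixed offered rate
iperf3 -c 172.16.1.100 -u -b 500M -t 10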

@glazychev-art (Contributor Author)

glazychev-art commented Jun 6, 2023

Current state:

  1. This patch fixes the UDP problem (the server waited a long time for the remaining packets) - https://gerrit.fd.io/r/c/vpp/+/34668
  2. This patch fixes the TCP problem (a huge number of TCP retransmissions) - https://gerrit.fd.io/r/c/vpp/+/38963
  3. A fix for bpf program loading - https://gerrit.fd.io/r/c/vpp/+/38971
  4. Performance measurement - https://docs.google.com/spreadsheets/d/1cTE5eSEXabFHUf_IYpxoXlqeldwpCKN7h6aBmEzicx8 or https://docs.google.com/spreadsheets/d/1ZmVTcBnjZ_RHg7NN2BIfLgjb5dPXsVlc3Jt5U4Zdk9o/edit?usp=sharing

In short: a significant performance increase was achieved only locally on the kind cluster, where AF_XDP is about 30% faster than AF_PACKET.
On public clusters AF_XDP is generally about 10% faster, but not always.

According to this guide, AF_XDP should be at least about 2.5 times faster - https://builders.intel.com/docs/networkbuilders/af-xdp-sockets-high-performance-networking-for-cloud-native-networking-technology-guide.pdf

Possible reasons:

  1. VPP plugin
  2. Much depends on the underlying driver. For example, when veth interfaces are used, the packets reportedly go through the kernel stack anyway, so we will not see the benefits (a quick check is sketched below).
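
For reason 2, a quick way to see what the underlying interface actually provides is to check its driver and the XDP attach mode; with veth or the generic (skb) path the packets still traverse the kernel stack, so little AF_XDP gain should be expected. eth0 is a placeholder:

# which driver backs the interface (veth, virtio_net, ena, ...)
ethtool -i eth0

# with an XDP program attached, ip link shows the mode:
# xdpgeneric = generic/skb mode, xdp = native driver mode
ip link show dev eth0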

Should we continue our performance research?
