
Low End-to-End Bitrate with iperf3 Despite High Downlink Throughput (CU/DU Split Option 2 Deployment) #977

Open
Moussa-Guemdani opened this issue Dec 10, 2024 · 3 comments


@Moussa-Guemdani

Moussa-Guemdani commented Dec 10, 2024

Issue Description

When measuring the end-to-end bitrate using iperf3, the reported value is significantly lower than expected. The downlink throughput reaches 564 Mbps, but the end-to-end bitrate fluctuates around 130 Mbps.

Setup Details

  • SDR: USRP X310
  • Channel Bandwidth: 80 MHz
  • Sampling Rate (srate): 92.16 Msps
  • Modulation: QAM256
  • MIMO Layers: 2

Core Network Server (5GC)

  • Architecture: 64-bit
  • Memory: 16 GiB
  • CPU: Intel® Core™ i7-8700 @ 3.20 GHz (6 Cores)
  • NIC: 10-Gigabit SFI/SFP+ (between core and CU)

CU/DU Server

  • Architecture: 64-bit
  • Memory: 256 GiB
  • CPU: AMD Ryzen Threadripper PRO 7955WX (16 Cores)
  • NIC: 10-Gigabit SFI/SFP+ (between DU and USRP)

Expected Behavior

The end-to-end bitrate should align closely with the downlink throughput of 564 Mbps.

Steps to Reproduce

  1. Measure downlink throughput using iperf3 (an example invocation is sketched after these steps).
  2. Observe the end-to-end bitrate fluctuating around 130 Mbps.
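
For reference, a minimal sketch of the kind of iperf3 invocation used for the measurement, assuming the UE is assigned the (hypothetical) address 10.45.0.2 by the core network and the iperf3 server runs on the UE:

    # on the UE: start the receiver
    iperf3 -s
    # on the data-network host behind the 5GC: 60 s TCP test towards the UE (downlink direction)
    iperf3 -c 10.45.0.2 -t 60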

Additional Information

  • Interfaces and CPU utilization have been checked, and no bottlenecks were identified.
  • TCP window sizes have been increased (a quick read-back check is sketched after this list):
    sudo sysctl -w net.ipv4.tcp_rmem="4096 12582912 33554432"
    sudo sysctl -w net.ipv4.tcp_wmem="4096 12582912 33554432"
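
To confirm the values are actually in effect (and, if desired, persist them across reboots), something along these lines should work:

    sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
    # optional: copy the two settings into a file such as /etc/sysctl.d/99-tcp-buffers.conf, then reload with
    sudo sysctl --system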

If any further details are needed, please let me know!

@Moussa-Guemdani
Author

[screenshot attached]

I noticed that the F1 interface is the bottleneck in the system. Does anyone have any insights or explanations on why this might be happening?

Any help or suggestions would be appreciated!

@robertfalkenberg
Contributor

To start troubleshooting this issue, you could begin with a UDP test (iperf3 -u) and gradually increase the bitrate (-b 100M, -b 200M, ...) until you notice an increased loss rate at the iperf3 receiver.
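
For illustration, such a sweep could look like this (the UE address is a placeholder; substitute the one assigned by your core network, and run iperf3 -s on the UE first):

    # on the data-network host: step the offered UDP rate upward and watch the receiver-side loss
    iperf3 -c 10.45.0.2 -u -b 100M -t 30
    iperf3 -c 10.45.0.2 -u -b 200M -t 30
    iperf3 -c 10.45.0.2 -u -b 300M -t 30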

Are you able to achieve a higher UDP end-to-end throughput over the air interface, with a low "nok" rate, above 200–230 Mbps?

@Moussa-Guemdani
Author

Moussa-Guemdani commented Jan 5, 2025

Thank you for your assistance!
Here are some of the results I've observed so far (note: I have changed the config and am now using a single channel instead of MIMO):

[screenshot attached]

However, I occasionally experience a drop in air interface throughput, which seems to correlate with a drop in CQI:
[screenshot attached]

Do you have any insights on what might be causing this behavior?
