
fix: Mark all packets TX'ed before PTO as lost #2129

Open
wants to merge 2 commits into main

Conversation

larseggert (Collaborator)

We previously only marked one or two packets as lost when a PTO fired. That meant we potentially didn't retransmit (RTX) all the data we could have, i.e., data carried in packets that were actually lost but that we never marked as lost.

This also changes the probing code to suppress redundant keep-alives: PINGs that we send for other reasons now double as keep-alives, which they previously did not, so no separate keep-alive PING needs to be sent.

Broken out of #1998
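
As an illustration, here is a minimal, self-contained sketch of the behavior change (hypothetical types and names, not neqo's actual recovery API):

    // Minimal model of a packet-number space; all names are hypothetical.
    struct SentPacket {
        pn: u64,
    }

    struct Space {
        sent: Vec<SentPacket>,
    }

    impl Space {
        // Roughly the old behavior: only a small, fixed number of the
        // outstanding packets were marked lost when the PTO fired.
        fn pto_packets_limited(&self, count: usize) -> impl Iterator<Item = &SentPacket> + '_ {
            self.sent.iter().take(count)
        }

        // New behavior: every packet sent before the PTO fired is marked
        // lost, so all data it carried becomes eligible for retransmission.
        fn pto_packets(&self) -> impl Iterator<Item = &SentPacket> + '_ {
            self.sent.iter()
        }
    }

    fn main() {
        let space = Space {
            sent: (0..5).map(|pn| SentPacket { pn }).collect(),
        };
        let old: Vec<u64> = space.pto_packets_limited(2).map(|p| p.pn).collect();
        let all: Vec<u64> = space.pto_packets().map(|p| p.pn).collect();
        assert_eq!(old, vec![0, 1]);          // before: two packets marked lost
        assert_eq!(all, vec![0, 1, 2, 3, 4]); // after: everything outstanding
    }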


github-actions bot commented Sep 19, 2024

Failed Interop Tests

QUIC Interop Runner, client vs. server

neqo-latest as client

neqo-latest as server

All results

Succeeded Interop Tests

QUIC Interop Runner, client vs. server

neqo-latest as client

neqo-latest as server

Unsupported Interop Tests

QUIC Interop Runner, client vs. server

neqo-latest as client

neqo-latest as server


codecov bot commented Sep 19, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.35%. Comparing base (55e3a93) to head (69bb7f8).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2129   +/-   ##
=======================================
  Coverage   95.35%   95.35%           
=======================================
  Files         112      112           
  Lines       36336    36332    -4     
=======================================
- Hits        34648    34646    -2     
+ Misses       1688     1686    -2     



github-actions bot commented Sep 19, 2024

Benchmark results

Performance differences relative to 55e3a93.

coalesce_acked_from_zero 1+1 entries: 💚 Performance has improved.
       time:   [99.198 ns 99.479 ns 99.765 ns]
       change: [-12.409% -12.000% -11.589%] (p = 0.00 < 0.05)

Found 12 outliers among 100 measurements (12.00%)
8 (8.00%) high mild
4 (4.00%) high severe

coalesce_acked_from_zero 3+1 entries: 💚 Performance has improved.
       time:   [117.52 ns 117.86 ns 118.22 ns]
       change: [-33.120% -32.712% -32.230%] (p = 0.00 < 0.05)

Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) low mild
3 (3.00%) high mild
12 (12.00%) high severe

coalesce_acked_from_zero 10+1 entries: 💚 Performance has improved.
       time:   [116.99 ns 117.71 ns 118.92 ns]
       change: [-39.495% -35.200% -32.567%] (p = 0.00 < 0.05)

Found 13 outliers among 100 measurements (13.00%)
4 (4.00%) low severe
4 (4.00%) low mild
1 (1.00%) high mild
4 (4.00%) high severe

coalesce_acked_from_zero 1000+1 entries: 💚 Performance has improved.
       time:   [98.619 ns 98.768 ns 98.937 ns]
       change: [-31.113% -30.475% -29.761%] (p = 0.00 < 0.05)

Found 9 outliers among 100 measurements (9.00%)
4 (4.00%) high mild
5 (5.00%) high severe

RxStreamOrderer::inbound_frame(): No change in performance detected.
       time:   [111.35 ms 111.48 ms 111.71 ms]
       change: [-0.0512% +0.0846% +0.3009%] (p = 0.44 > 0.05)

Found 15 outliers among 100 measurements (15.00%)
2 (2.00%) low severe
4 (4.00%) low mild
3 (3.00%) high mild
6 (6.00%) high severe

transfer/pacing-false/varying-seeds: Change within noise threshold.
       time:   [25.833 ms 26.894 ms 27.980 ms]
       change: [-10.698% -5.9022% -0.4362%] (p = 0.03 < 0.05)
transfer/pacing-true/varying-seeds: No change in performance detected.
       time:   [35.031 ms 36.861 ms 38.698 ms]
       change: [-10.008% -3.7693% +2.1590%] (p = 0.26 > 0.05)
transfer/pacing-false/same-seed: No change in performance detected.
       time:   [25.469 ms 26.355 ms 27.219 ms]
       change: [-7.7928% -3.6344% +0.9172%] (p = 0.11 > 0.05)

Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) low mild
1 (1.00%) high mild

transfer/pacing-true/same-seed: No change in performance detected.
       time:   [39.520 ms 41.484 ms 43.481 ms]
       change: [-11.328% -5.1375% +1.1522%] (p = 0.11 > 0.05)

Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild

1-conn/1-100mb-resp (aka. Download)/client: No change in performance detected.
       time:   [112.63 ms 115.70 ms 121.44 ms]
       thrpt:  [823.44 MiB/s 864.28 MiB/s 887.87 MiB/s]
change:
       time:   [-2.9182% -0.1897% +6.7346%] (p = 0.92 > 0.05)
       thrpt:  [-6.3097% +0.1901% +3.0059%]

Found 5 outliers among 100 measurements (5.00%)
2 (2.00%) low severe
2 (2.00%) low mild
1 (1.00%) high severe

1-conn/10_000-parallel-1b-resp (aka. RPS)/client: No change in performance detected.
       time:   [311.80 ms 315.21 ms 318.54 ms]
       thrpt:  [31.393 Kelem/s 31.725 Kelem/s 32.072 Kelem/s]
change:
       time:   [-3.1737% -1.6055% +0.0018%] (p = 0.05 > 0.05)
       thrpt:  [-0.0018% +1.6317% +3.2777%]

Found 2 outliers among 100 measurements (2.00%)
2 (2.00%) low mild

1-conn/1-1b-resp (aka. HPS)/client: Change within noise threshold.
       time:   [33.981 ms 34.150 ms 34.336 ms]
       thrpt:  [29.124  elem/s 29.283  elem/s 29.428  elem/s]
change:
       time:   [+0.4621% +1.2135% +1.9498%] (p = 0.00 < 0.05)
       thrpt:  [-1.9125% -1.1989% -0.4600%]

Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe

Client/server transfer results

Transfer of 33554432 bytes over loopback.

Client  Server  CC     Pacing  Mean [ms]       Min [ms]  Max [ms]  Relative
msquic  msquic                 212.7 ±  88.6     105.9     331.3     1.00
neqo    msquic  reno   on      219.7 ±  11.4     210.3     244.1     1.00
neqo    msquic  reno           230.2 ±  11.0     209.1     244.1     1.00
neqo    msquic  cubic  on      220.8 ±  13.1     206.4     246.4     1.00
neqo    msquic  cubic          246.4 ±  56.2     208.5     411.6     1.00
msquic  neqo    reno   on      143.3 ±  78.5      83.6     339.4     1.00
msquic  neqo    reno           156.8 ± 106.0      81.8     395.5     1.00
msquic  neqo    cubic  on      106.5 ±  26.1      84.5     193.3     1.00
msquic  neqo    cubic          103.7 ±  20.2      82.3     161.9     1.00
neqo    neqo    reno   on      194.4 ±  61.9     149.7     358.5     1.00
neqo    neqo    reno           198.5 ± 108.1     124.7     497.2     1.00
neqo    neqo    cubic  on      199.5 ±  86.0     125.5     440.5     1.00
neqo    neqo    cubic          214.1 ± 111.4     125.0     448.5     1.00

⬇️ Download logs


Firefox builds for this PR

The following builds are available for testing. Crossed-out builds did not succeed.

@larseggert (Collaborator, Author)

@martinthomson I'd appreciate a review, since the code I am touching is pretty complex.

@mxinden (Collaborator) left a comment

This makes sense to me. Thanks for extracting it into a smaller pull request.

I am in favor of waiting for Martin's review.

@martinthomson (Member) left a comment

Do we not have tests for this? Should we?

-        .pto_packets(PtoState::pto_packet_count(*pn_space))
-        .cloned(),
-);
+lost.extend(space.pto_packets().cloned());
Member

Do we still need pto_packet_count if this is the decision?

The other question I have is whether this is necessary. We're cloning all of the information so that we can process the loss, which means more work on a PTO. Maybe PTO is rare enough that this doesn't matter, but one of the reasons for the limit on number was to avoid the extra work.

@larseggert (Collaborator, Author)

Do we still need pto_packet_count if this is the decision?

We do still need it to limit the number of packets we send on PTO.
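
(Illustrative aside: a self-contained sketch, not neqo's actual code, of why the count is still needed. `MAX_PTO_PROBES` and `on_pto` are hypothetical names standing in for `PtoState::pto_packet_count()` and the real recovery logic.)

    // Loss marking is now unbounded, but the number of probes sent is not.
    const MAX_PTO_PROBES: usize = 2; // stand-in for PtoState::pto_packet_count()

    fn on_pto(outstanding: &[u64]) -> (Vec<u64>, usize) {
        // With this PR, every packet outstanding when the PTO fired is
        // declared lost, so all of its data can be retransmitted.
        let lost: Vec<u64> = outstanding.to_vec();
        // The number of probe packets actually transmitted stays capped;
        // that is the remaining job of the packet count.
        let probes = MAX_PTO_PROBES;
        (lost, probes)
    }

    fn main() {
        let (lost, probes) = on_pto(&[1, 2, 3, 4, 5]);
        assert_eq!(lost.len(), 5); // all five are marked lost ...
        assert_eq!(probes, 2);     // ... but only two probes go out
    }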

The other question I have is whether this is necessary. We're cloning all of the information so that we can process the loss, which means more work on a PTO. Maybe PTO is rare enough that this doesn't matter, but one of the reasons for the limit on number was to avoid the extra work.

I've been wondering if it would be sufficient to mark n packets per space as lost, instead of all.
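
(Sketch of that alternative under the same kind of hypothetical model: cap per-space loss marking at `n` rather than walking every outstanding packet, which would also bound the extra cloning work mentioned above. `pto_lost_capped` is a made-up name.)

    // Hypothetical: declare at most `n` packets per packet-number space as
    // lost on PTO, instead of all of them.
    fn pto_lost_capped(outstanding: &[u64], n: usize) -> Vec<u64> {
        outstanding.iter().copied().take(n).collect()
    }

    fn main() {
        // With n = 2, only the two oldest outstanding packets are marked lost.
        assert_eq!(pto_lost_capped(&[10, 11, 12, 13], 2), vec![10, 11]);
    }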

@larseggert (Collaborator, Author)

Do we not have tests for this? Should we?

There are tests in #2128, but this PR alone doesn't make them succeed yet.
