fix: Do CC reaction before largest_acked
#2117
base: main
Conversation
Packets are only declared as lost relative to `largest_acked`. If we hit a PTO while we don't have a largest_acked yet, also do a congestion control reaction (because otherwise none would happen). Broken out of mozilla#1998
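The idea in the description can be sketched as follows. This is a minimal illustration of the logic, not the actual neqo code: the types, field names, and constants here are assumptions for the example.

```rust
// Sketch only (hypothetical types, not neqo's): on a PTO, if no ACK has ever
// arrived (`largest_acked` is None), no packet can be declared lost relative
// to it, so trigger the congestion control reaction directly.

#[derive(Debug, PartialEq)]
enum State {
    SlowStart,
    Recovery,
}

struct Cc {
    cwnd: usize,
    state: State,
}

impl Cc {
    const MIN_CWND: usize = 2 * 1500; // assumed minimum of two packets

    fn on_congestion_event(&mut self) {
        // Classic multiplicative decrease: halve cwnd, floor at the minimum.
        self.cwnd = (self.cwnd / 2).max(Self::MIN_CWND);
        self.state = State::Recovery;
    }

    fn on_pto(&mut self, largest_acked: Option<u64>) {
        if largest_acked.is_none() {
            // No ACK yet: loss detection can't react, so react here.
            self.on_congestion_event();
        }
        // (Retransmission logic would follow in either case.)
    }
}

fn main() {
    // PTO before any ACK: the CC reaction happens.
    let mut cc = Cc { cwnd: 10 * 1500, state: State::SlowStart };
    cc.on_pto(None);
    assert_eq!(cc.cwnd, 5 * 1500);
    assert_eq!(cc.state, State::Recovery);

    // PTO after an ACK: loss detection is responsible, no direct reaction.
    let mut cc2 = Cc { cwnd: 10 * 1500, state: State::SlowStart };
    cc2.on_pto(Some(3));
    assert_eq!(cc2.cwnd, 10 * 1500);
}
```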
Codecov Report

Additional details and impacted files:

@@            Coverage Diff            @@
##             main    #2117     +/-   ##
=========================================
  Coverage   95.38%   95.38%
=========================================
  Files         112      112
  Lines       36593    36615     +22
=========================================
+ Hits        34903    34924     +21
- Misses       1690     1691      +1

☔ View full report in Codecov by Sentry.
Failed Interop Tests: QUIC Interop Runner, client vs. server (neqo-latest as client; neqo-latest as server)
Succeeded Interop Tests: QUIC Interop Runner, client vs. server (neqo-latest as client; neqo-latest as server)
Unsupported Interop Tests: QUIC Interop Runner, client vs. server (neqo-latest as client; neqo-latest as server)
All results
Benchmark results

Performance differences relative to 5fe0c89.

coalesce_acked_from_zero 1+1 entries: No change in performance detected. time: [99.163 ns 99.503 ns 99.896 ns], change: [-0.3454% +0.0430% +0.4756%] (p = 0.84 > 0.05)
coalesce_acked_from_zero 3+1 entries: Change within noise threshold. time: [117.97 ns 118.32 ns 118.70 ns], change: [+0.0240% +0.5735% +1.1118%] (p = 0.03 < 0.05)
coalesce_acked_from_zero 10+1 entries: No change in performance detected. time: [117.62 ns 118.06 ns 118.59 ns], change: [-2.4932% -0.4443% +0.9141%] (p = 0.72 > 0.05)
coalesce_acked_from_zero 1000+1 entries: No change in performance detected. time: [98.536 ns 98.680 ns 98.843 ns], change: [-19.862% -6.7169% +2.0329%] (p = 0.64 > 0.05)
RxStreamOrderer::inbound_frame(): Change within noise threshold. time: [112.78 ms 112.83 ms 112.88 ms], change: [+0.7474% +0.8121% +0.8822%] (p = 0.00 < 0.05)
SentPackets::take_ranges: No change in performance detected. time: [5.6737 µs 5.8581 µs 6.0504 µs], change: [-13.447% -1.7477% +6.7591%] (p = 0.83 > 0.05)
transfer/pacing-false/varying-seeds: No change in performance detected. time: [26.591 ms 27.730 ms 28.900 ms], change: [-9.0860% -3.9509% +1.7664%] (p = 0.17 > 0.05)
transfer/pacing-true/varying-seeds: No change in performance detected. time: [33.638 ms 35.457 ms 37.296 ms], change: [-4.2984% +2.9796% +9.9402%] (p = 0.41 > 0.05)
transfer/pacing-false/same-seed: No change in performance detected. time: [25.394 ms 26.144 ms 26.908 ms], change: [-4.9375% -0.6823% +3.7173%] (p = 0.74 > 0.05)
transfer/pacing-true/same-seed: No change in performance detected. time: [39.502 ms 41.113 ms 42.776 ms], change: [-8.6814% -2.8935% +3.5444%] (p = 0.37 > 0.05)
1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: No change in performance detected. time: [925.84 ms 936.72 ms 947.74 ms], thrpt: [105.51 MiB/s 106.75 MiB/s 108.01 MiB/s], change: time [-0.3743% +1.1445% +2.5735%] (p = 0.15 > 0.05), thrpt [-2.5090% -1.1316% +0.3757%]
1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: No change in performance detected. time: [318.75 ms 321.71 ms 324.71 ms], thrpt: [30.797 Kelem/s 31.084 Kelem/s 31.372 Kelem/s], change: time [-1.9371% -0.6178% +0.7130%] (p = 0.36 > 0.05), thrpt [-0.7079% +0.6216% +1.9754%]
1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: No change in performance detected. time: [33.846 ms 34.024 ms 34.216 ms], thrpt: [29.226 elem/s 29.391 elem/s 29.546 elem/s], change: time [-0.1725% +0.7496% +1.6595%] (p = 0.11 > 0.05), thrpt [-1.6324% -0.7440% +0.1728%]
1-conn/1-100mb-resp/mtu-1504 (aka. Upload)/client: Change within noise threshold. time: [1.6835 s 1.7070 s 1.7314 s], thrpt: [57.758 MiB/s 58.582 MiB/s 59.401 MiB/s], change: time [+0.6548% +2.5513% +4.4730%] (p = 0.01 < 0.05), thrpt [-4.2815% -2.4878% -0.6505%]

Client/server transfer results: Transfer of 33554432 bytes over loopback.
@martinthomson I'd appreciate a review, since the code I am touching is pretty complex.
The structure looks fine, but this seems overly conservative to me.
@@ -337,6 +337,41 @@ impl<T: WindowAdjustment> CongestionControl for ClassicCongestionControl<T> {
        congestion || persistent_congestion
    }

    /// Handle a congestion event.
Is this just a move? It looks like it.
Yes, but from ClassicCongestionControl to CongestionControl.
// Packets are only declared as lost relative to `largest_acked`. If we hit a PTO while
// we don't have a largest_acked yet, also do a congestion control reaction (because
// otherwise none would happen).
Are we sure that we want to slow down? If we haven't received an ACK, we should be at a low send rate. Do we really want to slow down further?
One thing that concerns me here is that a fast PTO (if we ever enable that) will hit this condition more often and that might not be good for performance. The same goes for long RTT connections, which might be OK while we are still retransmitting within the RTT, but once we get an RTT estimate that is long, we'll slow our send rate by a huge amount for the false PTOs we've hit.
I think the argument is that the combination of initial cwnd and initial RTT (= PTO) were made with some sort of safety criteria in mind. A PTO is an indication that either the RTT is much longer than the default assumption, or there is quite a bit of loss. In either of these two cases, we probably want to slow down?
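The trade-off being discussed can be made concrete with some back-of-the-envelope arithmetic. The numbers below are illustrative assumptions, not values from the PR or from neqo: they only show that a fixed initial cwnd paired with an assumed initial RTT implies a send rate, and that on a genuinely long-RTT path the same cwnd already yields a much lower rate before any CC reaction is applied.

```rust
// Illustration with assumed numbers: the implied send rate of a window-limited
// sender is cwnd / RTT. A PTO before the first ACK means the real RTT exceeds
// the assumed one (or packets were lost), so the effective rate is already
// well below what the initial cwnd was sized for.

fn rate_bps(cwnd_bytes: u64, rtt_ms: u64) -> u64 {
    // bits per second = bytes * 8 / (rtt_ms / 1000)
    cwnd_bytes * 8 * 1000 / rtt_ms
}

fn main() {
    let cwnd = 10 * 1200; // assumed: 10 packets of 1200 bytes
    let assumed_rate = rate_bps(cwnd, 333); // assumed initial RTT of 333 ms
    let actual_rate = rate_bps(cwnd, 2000); // actual RTT of 2 s on a long path
    // Same cwnd, roughly 6x lower effective rate on the long-RTT path,
    // before any congestion reaction halves cwnd on the (false) PTO.
    assert!(actual_rate < assumed_rate / 5);
}
```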
I guess my main concern is that if we want to be more aggressive about PTO, such as sending another Initial, we create a situation where slowing down is not the result of a misunderstanding of the RTT, but a deliberate choice to send more to start with.
Signed-off-by: Lars Eggert <[email protected]>