add support for multipath (draft-05) #559

Open

wants to merge 84 commits into base: kazuho/path-migration

Changes from 1 commit (of 84 commits)
5551bf5
retain pn_space, loss detection, CC as a pointer member of st_quicly_…
kazuho Jun 14, 2023
486188e
use `num_sent.packets` to drive key updates rather than PN
kazuho Jun 15, 2023
9c5eeab
update `path_index` when promoted
kazuho Jun 15, 2023
551aef3
[multipath] TP, ACK_MP, per-path nonce and ack space
kazuho Jun 15, 2023
815920a
retain `send_ack_at` per ACK space rather than per 4-tuple
kazuho Jun 20, 2023
30be9db
fix leak
kazuho Jun 20, 2023
c1b0c0c
[editorial] send_ack_at being smaller than INT64_MAX implies non-zero…
kazuho Jun 20, 2023
45132c2
ack application data immediately until the handshake context is disca…
kazuho Jun 20, 2023
0f4ddc5
update expected completion times following changes to the loss recove…
kazuho Jun 20, 2023
3e3c7bb
this can be a feature (for the time being)
kazuho Jun 22, 2023
49f501b
Merge branch 'kazuho/path-migration' into kazuho/multipath
kazuho Jun 22, 2023
d088e4a
check CID requirements only if both sides opt-in to MPQUIC
kazuho Jun 22, 2023
dd29772
[cli] `--multipath` option to negotiate use of multipath
kazuho Jun 22, 2023
ef459d9
Merge branch 'kazuho/path-migration' into kazuho/multipath
kazuho Jun 26, 2023
494fe79
simplify
kazuho Jun 26, 2023
f9a9104
DCID passed to encryptor is always zero with v1
kazuho Jun 26, 2023
31ed6a1
zero-clear
kazuho Jun 26, 2023
b2e93da
UAF
kazuho Jun 26, 2023
121411c
check the correct cid_set
kazuho Jun 26, 2023
851f211
retain path_id for ack-ack handling
kazuho Jun 26, 2023
8320478
make sure the struct is of intended size
kazuho Jun 26, 2023
328748b
Merge branch 'kazuho/path-migration' into kazuho/multipath
kazuho Jun 27, 2023
d5ef3cb
[cli] test migration with multipath enabled
kazuho Jun 27, 2023
169a21a
[refactor] define iterator as a function rather than a macro
kazuho Jun 27, 2023
0edfebf
use the iterator
kazuho Jun 27, 2023
d94e6ec
reduce ifs through the use of vtable
kazuho Jun 27, 2023
346098d
simplify
kazuho Jun 27, 2023
59d582d
[cli] wip
kazuho Jun 27, 2023
8118692
refine comment
kazuho Jun 27, 2023
c7adaaa
add API to open new path
kazuho Jun 27, 2023
50a2304
Merge branch 'kazuho/path-migration' into kazuho/multipath
kazuho Jun 28, 2023
602700f
API to add path
kazuho Jun 29, 2023
83da103
immediate means now, not 1 second later
kazuho Jul 19, 2023
2bea1cb
emit path_index in log
kazuho Aug 16, 2023
048ebb5
fix assert
kazuho Aug 17, 2023
7fe0442
do not overwrite v6 address
kazuho Aug 17, 2023
0cd0b6b
doc-comment the requirement
kazuho Aug 21, 2023
52c7e49
log local address of a new path
kazuho Aug 21, 2023
c788aaa
[cli] when running as a client bind to local port so that paths can b…
kazuho Aug 21, 2023
0770543
distinguish local address using only the port number if the address …
kazuho Aug 21, 2023
c47e83b
add assert
kazuho Aug 21, 2023
6149d85
[cli] supply local address, otherwise the client is impossible to det…
kazuho Aug 21, 2023
2ee3e24
fix hot loop when trying to send PATH_RESPONSE but not PATH_CHALLENGE
kazuho Aug 21, 2023
2677218
split ack-queue, loss recovery, cc
kazuho Aug 21, 2023
3dc40e9
[cli] replace the local port number retained by quicly, this is the o…
kazuho Aug 21, 2023
624f3c6
fix typo
kazuho Aug 22, 2023
2c21108
log dcid_sequence_number
kazuho Aug 22, 2023
ceade3f
oops
kazuho Aug 22, 2023
19a24f7
mark inflight packets as lost when retiring a path without killing th…
kazuho Aug 23, 2023
f772bd4
oops, remove debug log
kazuho Aug 23, 2023
c18880f
draft-05
kazuho Aug 23, 2023
7e60832
remove paths that are source of rebinding in a way that works with mu…
kazuho Aug 24, 2023
8c022ba
When multipath is used, prune paths source of NAT rebinding at the mo…
kazuho Aug 24, 2023
88f6b65
skipping the per-path loss recovery / CC logic can cause a hot loop
kazuho Aug 24, 2023
cf8a788
unify the packet emission logic (now data is sent on all the paths)
kazuho Sep 19, 2023
4432722
implement codec for PATH_ABANDON and PATH_STATUS though they are not …
kazuho Sep 19, 2023
ed364ea
fix compile errors on linux
kazuho Sep 19, 2023
f0c7a75
fix compile error on platforms lacking dtrace support
kazuho Sep 20, 2023
ca3e797
remove unused probe from definition, otherwise h2olog cannot be built
kazuho Sep 20, 2023
5febfc6
promote quicly_build_multipath_iv to an inline function; doing so hel…
kazuho Sep 20, 2023
023e97c
for the purpose of reporting, consider any non-initial path as "promo…
kazuho Sep 26, 2023
11f698e
record receive of ECN, send ACK_ECN
kazuho Oct 12, 2023
da0e57b
[cli] recognize ECN
kazuho Oct 12, 2023
5018f20
enable ECN on the send side
kazuho Oct 12, 2023
fe1bc35
[cli] enable ECN sender side
kazuho Oct 12, 2023
37d540b
update tests
kazuho Oct 13, 2023
d522c24
adjust comment
kazuho Oct 13, 2023
5f280da
ECN accounting is reported per-epoch
kazuho Oct 13, 2023
86df55a
retain counts for ECT(0), ECT(1) too, report per-connection metrics
kazuho Oct 13, 2023
addba2d
fix length check
kazuho Oct 13, 2023
fd2063c
[cli] report ack-ecn numbers
kazuho Oct 13, 2023
6501ed8
oopses
kazuho Oct 13, 2023
4bff19f
count and report loss episodes due to ECN (i.e., ones did not involve…
kazuho Oct 13, 2023
089ce4f
record and report ECN stats of received packets too
kazuho Oct 16, 2023
1c35a90
fast conversion as suggested by https://twitter.com/kamedo2/status/17…
kazuho Oct 16, 2023
4186233
report number of paths that were ECN-(in)capable; `quicly_stats_t::nu…
kazuho Oct 17, 2023
9f9f068
Merge pull request #557 from h2o/kazuho/examples-echo-zero-timeout
kazuho Oct 17, 2023
c55d375
Merge pull request #551 from h2o/kazuho/send_ack_at-is-unreliable-tim…
kazuho Oct 17, 2023
c5b954b
Merge pull request #561 from h2o/kazuho/ecn
kazuho Oct 18, 2023
460c90b
Merge branch 'master' into kazuho/multipath-05
kazuho Oct 18, 2023
bfca5de
Fix entering hot loop if app opens a stream before the handshake is c…
kazuho Oct 19, 2023
aae8e57
Merge pull request #562 from h2o/kazuho/amend-551
kazuho Oct 19, 2023
33354cd
Merge branch 'master' into kazuho/multipath-05
kazuho Oct 19, 2023
e4a48bc
oops
kazuho Oct 19, 2023
retain send_ack_at per ACK space rather than per 4-tuple
kazuho committed Jun 20, 2023
commit 815920a25ba281c9c0239490ab548efc56f12c1e
146 changes: 82 additions & 64 deletions lib/quicly.c
@@ -114,6 +114,10 @@ struct st_quicly_pn_space_t {
* acks to be sent to remote peer
*/
quicly_ranges_t ack_queue;
/**
* when to send an ACK
*/
int64_t send_ack_at;
/**
* time at when the largest pn in the ack_queue has been received (or INT64_MAX if none)
*/
@@ -203,10 +207,6 @@ struct st_quicly_path_egress_t {
*
*/
int64_t last_retransmittable_sent_at;
/**
* when to send an ACK, connection close frames or to destroy the connection
*/
int64_t send_ack_at;
/**
* congestion control
*/
@@ -336,6 +336,7 @@ struct st_quicly_conn_t {
uint16_t error_code;
uint64_t frame_type; /* UINT64_MAX if application close */
const char *reason_phrase;
uint64_t send_at; /* when to send CONNECTION_CLOSE or free the connection */
unsigned long num_packets_received;
} connection_close;
/**
@@ -811,10 +812,56 @@ uint64_t quicly_determine_packet_number(uint32_t truncated, size_t num_bits, uin
return candidate;
}

#define FOREACH_APP_SPACE(conn, spvar, seq_opt, block) \
do { \
quicly_conn_t *_conn = (conn); \
uint64_t *_seq_opt = (seq_opt); \
if (quicly_is_multipath(_conn)) { \
for (size_t _i = 0, _size = quicly_local_cid_get_size(&_conn->super.local.cid_set); _i < _size; ++_i) { \
quicly_local_cid_t *_cid = &_conn->super.local.cid_set.cids[_i]; \
if (_cid->multipath.space == NULL) \
continue; \
assert(_cid->state != QUICLY_LOCAL_CID_STATE_IDLE); \
(spvar) = _cid->multipath.space; \
if (_seq_opt != NULL) \
*_seq_opt = _cid->sequence; \
do { \
block; \
} while (0); \
} \
} else { \
(spvar) = _conn->application->non_multipath.space; \
if (_seq_opt != NULL) \
*_seq_opt = UINT64_MAX; \
do { \
block; \
} while (0); \
} \
} while (0)

static int64_t calc_min_send_ack_at(quicly_conn_t *conn)
{
int64_t at = INT64_MAX;

if (conn->initial != NULL && at > conn->initial->super.send_ack_at)
at = conn->initial->super.send_ack_at;
if (conn->handshake != NULL && at > conn->handshake->super.send_ack_at)
at = conn->handshake->super.send_ack_at;
if (conn->application != NULL && conn->application->one_rtt_writable) {
struct st_quicly_pn_space_t *space;
FOREACH_APP_SPACE(conn, space, NULL, {
if (at > space->send_ack_at)
at = space->send_ack_at;
});
}

return at;
}

static void assert_consistency(quicly_conn_t *conn, size_t path_index, int timer_must_be_in_future)
{
if (conn->super.state >= QUICLY_STATE_CLOSING) {
assert(!timer_must_be_in_future || conn->stash.now < conn->paths[path_index]->egress->send_ack_at);
assert(!timer_must_be_in_future || conn->stash.now < calc_min_send_ack_at(conn));
return;
}

@@ -1455,6 +1502,7 @@ static struct st_quicly_pn_space_t *alloc_pn_space(size_t sz, uint32_t packet_to
return NULL;

quicly_ranges_init(&space->ack_queue);
space->send_ack_at = INT64_MAX;
space->largest_pn_received_at = INT64_MAX;
space->next_expected_packet_number = 0;
space->unacked_count = 0;
@@ -1497,7 +1545,7 @@ static int record_pn(quicly_ranges_t *ranges, uint64_t pn, int *is_out_of_order)
return 0;
}

static int record_receipt(struct st_quicly_pn_space_t *space, uint64_t pn, int is_ack_only, int64_t now, int64_t *send_ack_at,
static int record_receipt(struct st_quicly_pn_space_t *space, uint64_t pn, int is_ack_only, int64_t now,
uint64_t *received_out_of_order)
{
int ret, ack_now, is_out_of_order;
@@ -1521,9 +1569,9 @@ static int record_receipt(struct st_quicly_pn_space_t *space, uint64_t pn, int i
}

if (ack_now) {
*send_ack_at = now;
} else if (*send_ack_at == INT64_MAX && space->unacked_count != 0) {
*send_ack_at = now + QUICLY_DELAYED_ACK_TIMEOUT;
space->send_ack_at = now;
} else if (space->send_ack_at == INT64_MAX && space->unacked_count != 0) {
space->send_ack_at = now + QUICLY_DELAYED_ACK_TIMEOUT;
}

ret = 0;
@@ -1773,7 +1821,6 @@ static int new_path(quicly_conn_t *conn, size_t path_index, struct sockaddr *rem
&conn->super.remote.transport_params.ack_delay_exponent);
path->egress->next_pn_to_skip =
calc_next_pn_to_skip(conn->super.ctx->tls, 0, initcwnd, conn->super.ctx->initial_egress_max_udp_payload_size);
path->egress->send_ack_at = INT64_MAX;
conn->super.ctx->init_cc->cb(conn->super.ctx->init_cc, &path->egress->cc, initcwnd, conn->stash.now);
quicly_ratemeter_init(&path->egress->ratemeter);
} else {
@@ -2471,6 +2518,7 @@ static quicly_conn_t *create_connection(quicly_context_t *ctx, uint32_t protocol
quicly_maxsender_init(&conn->ingress.max_streams.uni, conn->super.ctx->transport_params.max_streams_uni);
quicly_maxsender_init(&conn->ingress.max_streams.bidi, conn->super.ctx->transport_params.max_streams_bidi);
conn->egress.max_udp_payload_size = conn->super.ctx->initial_egress_max_udp_payload_size;
conn->egress.connection_close.send_at = INT64_MAX;
init_max_streams(&conn->egress.max_streams.uni);
init_max_streams(&conn->egress.max_streams.bidi);
conn->egress.ack_frequency.update_at = INT64_MAX;
@@ -3325,7 +3373,7 @@ static int is_point5rtt_with_no_handshake_data_to_send(quicly_conn_t *conn)
int64_t quicly_get_first_timeout(quicly_conn_t *conn)
{
if (conn->super.state >= QUICLY_STATE_CLOSING)
return conn->paths[0]->egress->send_ack_at;
return conn->egress.connection_close.send_at;

if (should_send_datagram_frame(conn))
return 0;
@@ -3359,8 +3407,9 @@ int64_t quicly_get_first_timeout(quicly_conn_t *conn)
if (amp_window > 0) {
if (path->egress->loss.alarm_at < at && !is_point5rtt_with_no_handshake_data_to_send(conn))
at = path->egress->loss.alarm_at;
if (path->egress->send_ack_at < at)
at = path->egress->send_ack_at;
int64_t send_ack_at = calc_min_send_ack_at(conn);
if (send_ack_at < at)
at = send_ack_at;
}
if (at > path->path_challenge.send_at)
at = path->path_challenge.send_at;
@@ -4660,9 +4709,11 @@ static int send_handshake_flow(quicly_conn_t *conn, size_t epoch, quicly_send_co
return 0;

/* send ACK */
if (space != NULL && (space->unacked_count != 0 || send_probe))
if (space != NULL && (space->unacked_count != 0 || send_probe)) {
if ((ret = send_ack(conn, UINT64_MAX, space, s)) != 0)
goto Exit;
space->send_ack_at = INT64_MAX;
}

if (!ack_only) {
/* send data */
@@ -4982,31 +5033,6 @@ static int send_other_control_frames(quicly_conn_t *conn, quicly_send_context_t
return 0;
}

static int has_pending_acks(quicly_conn_t *conn)
{
if (conn->initial != NULL && conn->initial->super.unacked_count != 0)
return 1;
if (conn->handshake != NULL && conn->handshake->super.unacked_count != 0)
return 1;
if (conn->application != NULL && conn->application->one_rtt_writable) {
if (quicly_is_multipath(conn)) {
for (size_t i = 0; i < quicly_local_cid_get_size(&conn->super.local.cid_set); ++i) {
struct st_quicly_pn_space_t *space;
if (conn->super.local.cid_set.cids[i].state == QUICLY_LOCAL_CID_STATE_IDLE)
continue;
if ((space = conn->super.local.cid_set.cids[i].multipath.space) == NULL)
continue;
if (space->unacked_count != 0)
return 1;
}
} else {
if (conn->application->non_multipath.space->unacked_count != 0)
return 1;
}
}
return 0;
}

static int do_send(quicly_conn_t *conn, quicly_send_context_t *s)
{
struct st_quicly_conn_path_t *path = conn->paths[s->path_index];
@@ -5112,22 +5138,16 @@ static int do_send(quicly_conn_t *conn, quicly_send_context_t *s)
/* non probing frames are sent only on path zero */
if (s->path_index == 0) {
/* acks (TODO send on the correct path rather than on path 0 */
if (conn->application->one_rtt_writable && path->egress->send_ack_at <= conn->stash.now) {
if (quicly_is_multipath(conn)) {
for (size_t i = 0; i < quicly_local_cid_get_size(&conn->super.local.cid_set); ++i) {
struct st_quicly_pn_space_t *space;
if (conn->super.local.cid_set.cids[i].state == QUICLY_LOCAL_CID_STATE_IDLE)
continue;
if ((space = conn->super.local.cid_set.cids[i].multipath.space) == NULL || space->unacked_count == 0)
continue;
if ((ret = send_ack(conn, conn->super.local.cid_set.cids[i].sequence, space, s)) != 0)
if (conn->application->one_rtt_writable) {
struct st_quicly_pn_space_t *space;
uint64_t cid;
FOREACH_APP_SPACE(conn, space, &cid, {
if (space->unacked_count != 0 && space->send_ack_at <= conn->stash.now) {
if ((ret = send_ack(conn, cid, space, s)) != 0)
goto Exit;
space->send_ack_at = INT64_MAX;
}
} else {
if (conn->application->non_multipath.space->unacked_count != 0)
if ((ret = send_ack(conn, UINT64_MAX, conn->application->non_multipath.space, s)) != 0)
goto Exit;
}
});
}
/* DATAGRAM frame. Notes regarding current implementation:
* * Not limited by CC, nor the bytes counted by CC.
@@ -5216,8 +5236,6 @@ static int do_send(quicly_conn_t *conn, quicly_send_context_t *s)
}
if (ret == 0) {
/* update timers, start / stop delivery rate estimator */
if (conn->application == NULL || !has_pending_acks(conn))
path->egress->send_ack_at = INT64_MAX; /* we have sent ACKs for every epoch (or before address validation) */
int can_send_stream_data = scheduler_can_send(conn);
update_send_alarm(conn, s->path_index, can_send_stream_data, 1);
if (can_send_stream_data &&
@@ -5305,7 +5323,7 @@ int quicly_send(quicly_conn_t *conn, quicly_address_t *dest, quicly_address_t *s
goto Exit;
}
}
if (conn->super.state == QUICLY_STATE_CLOSING && conn->paths[0]->egress->send_ack_at <= conn->stash.now) {
if (conn->super.state == QUICLY_STATE_CLOSING && conn->egress.connection_close.send_at <= conn->stash.now) {
/* destroy all streams; doing so is delayed until the emission of CONNECTION_CLOSE frame to allow quicly_close to be
* called from a stream handler */
destroy_all_streams(conn, 0, 0);
@@ -5319,9 +5337,9 @@ int quicly_send(quicly_conn_t *conn, quicly_address_t *dest, quicly_address_t *s
goto Exit;
}
/* wait at least 1ms */
if ((conn->paths[0]->egress->send_ack_at = quicly_sentmap_get(&iter)->sent_at + get_sentmap_expiration_time(conn, 0)) <=
if ((conn->egress.connection_close.send_at = quicly_sentmap_get(&iter)->sent_at + get_sentmap_expiration_time(conn, 0)) <=
conn->stash.now)
conn->paths[0]->egress->send_ack_at = conn->stash.now + 1;
conn->egress.connection_close.send_at = conn->stash.now + 1;
ret = 0;
goto Exit;
}
@@ -5468,10 +5486,10 @@ static int enter_close(quicly_conn_t *conn, int local_is_initiating, int wait_dr

if (local_is_initiating) {
conn->super.state = QUICLY_STATE_CLOSING;
conn->paths[0]->egress->send_ack_at = 0;
conn->egress.connection_close.send_at = 0;
} else {
conn->super.state = QUICLY_STATE_DRAINING;
conn->paths[0]->egress->send_ack_at = wait_draining ? conn->stash.now + get_sentmap_expiration_time(conn, 0) : 0;
conn->egress.connection_close.send_at = wait_draining ? conn->stash.now + get_sentmap_expiration_time(conn, 0) : 0;
}

setup_next_send(conn, 0);
@@ -6694,7 +6712,7 @@ int quicly_accept(quicly_conn_t **conn, quicly_context_t *ctx, struct sockaddr *
if ((ret = handle_payload(*conn, QUICLY_EPOCH_INITIAL, 0, &(*conn)->initial->super, payload.base, payload.len,
&offending_frame_type, &is_ack_only, &is_probe_only)) != 0)
goto Exit;
if ((ret = record_receipt(&(*conn)->initial->super, pn, 0, (*conn)->stash.now, &(*conn)->paths[0]->egress->send_ack_at,
if ((ret = record_receipt(&(*conn)->initial->super, pn, 0, (*conn)->stash.now,
&(*conn)->super.stats.num_packets.received_out_of_order)) != 0)
goto Exit;

@@ -6768,7 +6786,7 @@ int quicly_receive(quicly_conn_t *conn, struct sockaddr *dest_addr, struct socka
++conn->egress.connection_close.num_packets_received;
/* respond with a CONNECTION_CLOSE frame using exponential back-off */
if (__builtin_popcountl(conn->egress.connection_close.num_packets_received) == 1)
conn->paths[0]->egress->send_ack_at = 0;
conn->egress.connection_close.send_at = 0;
ret = 0;
goto Exit;
case QUICLY_STATE_DRAINING:
@@ -6971,8 +6989,8 @@ int quicly_receive(quicly_conn_t *conn, struct sockaddr *dest_addr, struct socka
QUICLY_LOG_CONN(elicit_path_migration, conn, { PTLS_LOG_ELEMENT_UNSIGNED(path_index, path_index); });
}
if (conn->super.state < QUICLY_STATE_CLOSING && space != NULL) {
if ((ret = record_receipt(space, pn, is_ack_only, conn->stash.now, &conn->paths[path_index]->egress->send_ack_at,
&conn->super.stats.num_packets.received_out_of_order)) != 0)
if ((ret = record_receipt(space, pn, is_ack_only, conn->stash.now, &conn->super.stats.num_packets.received_out_of_order)) !=
0)
goto Exit;
}
