Conversation
The interesting thing is that we spawn handle_stream for each connection, and on shutdown we only wait for the server task to finish, not for all the handle_stream tasks. So the Provider::shutdown method will return before all clients are done, and we have no way of knowing when the clients are finished. Could we introduce some synchronisation between all the spawned tasks and the server, so we can wait until everything is done?
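A minimal sketch of one way to do that synchronisation, using tokio::task::JoinSet so that shutdown can await every spawned connection task; Provider, Connection and handle_stream here are stand-ins, not the project's actual types:

```rust
use tokio::task::JoinSet;

/// Hypothetical stand-ins for the real connection type and stream handler.
struct Connection;
async fn handle_stream(_conn: Connection) { /* transfer bytes ... */ }

struct Provider {
    /// One entry per spawned per-connection task.
    tasks: JoinSet<()>,
}

impl Provider {
    fn new() -> Self {
        Self { tasks: JoinSet::new() }
    }

    fn spawn_connection(&mut self, conn: Connection) {
        // Register the task in the JoinSet instead of using a bare tokio::spawn,
        // so shutdown can wait for it later.
        self.tasks.spawn(handle_stream(conn));
    }

    async fn shutdown(mut self) {
        // ...signal the accept loop to stop here...
        // Then wait until every connection task has finished.
        while self.tasks.join_next().await.is_some() {}
    }
}
```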
src/main.rs (outdated):

    provider.join().await?;

    let (s, mut r) = mpsc::channel(1);
    ctrlc::set_handler(move || s.try_send(()).expect("failed to send shutdown signal"))?;
IIRC the ctrlc crate creates a new thread for the signal handler. Tokio has tokio::signal::ctrl_c, which I think avoids this. Could be worth using that instead?
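For reference, a minimal sketch of waiting for Ctrl-C with tokio::signal::ctrl_c (requires tokio's signal feature):

```rust
#[tokio::main]
async fn main() -> std::io::Result<()> {
    println!("running, press Ctrl-C to stop");
    // Resolves once the process receives Ctrl-C (SIGINT).
    tokio::signal::ctrl_c().await?;
    println!("shutting down");
    Ok(())
}
```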
ohh this is nice, I didn't know about it
        connection_id: connection.id(),
    });
    loop {
        tokio::select! {
It's almost always desirable to use the biased; argument to tokio::select!: it avoids the random-number-generator overhead on every loop iteration and makes the order in which the branches are polled deterministic and easier to reason about. I don't think it makes a serious difference here, but I would use biased and put the shutdown branch first.
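A sketch of what that could look like; the two channel receivers are placeholders for the real shutdown and stream futures in this loop:

```rust
use tokio::sync::mpsc;

async fn accept_loop(mut shutdown: mpsc::Receiver<()>, mut work: mpsc::Receiver<u32>) {
    loop {
        tokio::select! {
            // Poll the branches in the order written instead of randomly.
            biased;
            // Shutdown first, so it wins when several branches are ready.
            _ = shutdown.recv() => break,
            item = work.recv() => match item {
                Some(item) => println!("handling item {item}"),
                None => break,
            },
        }
    }
}
```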
yeah, makes sense
Force-pushed from 52bbf3b to 7e94c9f.
@flub added a version that now fully takes care of the spawned tasks. Not the cleanest, but it should do the trick.
I have to admit I'm having a hard time wrapping my head around the tokio-context stuff. It took me forever to review this and I'm still not really sure I understood things correctly. I think I can see how it's useful to tell the connection to stop accepting new streams though. But I don't understand why it builds this hierarchy.
    loop {
        tokio::select! {
            _ = current_ctx.done() => {
Why is this a new child context rather than calling ctx.done() directly?
Because RefContext's done() isn't Send, which makes it not work here :/
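For illustration of the constraint (this is not tokio-context's actual API): tokio::spawn requires the whole future, including everything held across an .await, to be Send, so a done() future that keeps a non-Send value alive cannot be awaited inside a spawned task. A minimal reproduction:

```rust
use std::rc::Rc;

// A future that keeps a non-Send value (an Rc) alive across an await point,
// similar in spirit to a done() future that is not Send.
async fn non_send_done() {
    let marker = Rc::new(()); // Rc is !Send
    tokio::task::yield_now().await; // the Rc is held across this await
    drop(marker);
}

fn spawn_it() {
    // tokio::spawn requires the future to be Send, so this does not compile:
    //     error[E0277]: `Rc<()>` cannot be sent between threads safely
    // tokio::spawn(non_send_done());

    // A Send future is fine:
    tokio::spawn(async {
        tokio::task::yield_now().await;
    });
}
```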
        }
        debug!("disconnected");
    });
    let (current_ctx, _handle) = RefContext::with_parent(&ctx, None);
Doesn't the handle get dropped at the next loop iteration? And doesn't dropping the handle cancel the context?
I'm still trying to wrap my head around this context thing...
maybe..
        events: broadcast::Sender<Event>,
    ) {
        debug!("connection accepted from {:?}", connection.remote_addr());
        let (mut current_ctx, _handle) = Context::with_parent(&ctx, None);
I'm also unsure why this needs a new child context
        _ = current_ctx.done() => {
            return;
        }
        res = handle_stream(db, token, stream, events) => {
This is still not hooked up to anything; is it not this that needs to signal to the Provider instance it originates from when it is done? Otherwise we're still interrupting mid-transfer, I think.
Currently this will be aborted in the middle of the transfer, yes, which I thought was the goal.
Let's not merge this yet, I am not happy with the context setup atm.
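A hedged sketch of one way the stream tasks could signal completion back to the provider, so shutdown can wait for in-flight transfers: give every spawned task a clone of an mpsc sender and wait until all clones have been dropped. The names here are placeholders, not the project's actual API:

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // The sender is never used to send anything; a clone of it simply
    // marks "this stream task is still running".
    let (alive_tx, mut alive_rx) = mpsc::channel::<()>(1);

    for id in 0..3 {
        let guard = alive_tx.clone();
        tokio::spawn(async move {
            // ... handle the stream / finish the transfer ...
            println!("stream {id} done");
            // The guard is dropped once the task is done.
            drop(guard);
        });
    }

    // Drop the original sender so only the task clones keep the channel open.
    drop(alive_tx);

    // recv() returns None once every sender clone has been dropped,
    // i.e. once all stream tasks have finished.
    while alive_rx.recv().await.is_some() {}
    println!("all streams finished, safe to shut down");
}
```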
This stops accepting new connections, but does not currently abort transfers. Closes #77
Force-pushed from 2832c99 to e7be7ec.
This stops accepting new connections, but does not currently abort transfers.
Closes n0-computer/iroh#736