Weird unidi streams behavior #209
The test process is both the connection owner (the process that created the connection) and the stream owner of the first stream. For the 2nd and 3rd streams there is no acceptor (quicer:async_accept_stream is called only once), so quicer picks the connection owner as the 'fallback' owner and delivers the new_stream msg with is_orphan => true.
Since the 2nd and 3rd streams have is_orphan => true, their active mode is set to false, which means delivery to the message box is suspended.
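The fallback path described above can be sketched roughly as follows. This is a sketch only: the message shapes ({quic, new_stream, Stream, Props} with an is_orphan key) and the quicer:setopt/3 call for re-enabling active mode are assumptions about the quicer API and may differ between versions.

```erlang
%% Sketch of the fallback behavior described above. Message shapes
%% and quicer:setopt/3 arguments are assumed, not verified against
%% a specific quicer version.
connection_loop(Conn) ->
    receive
        {quic, new_stream, Stream, #{is_orphan := true}} ->
            %% No acceptor was waiting, so the connection owner got
            %% the stream. It arrives with active mode set to false:
            %% nothing is delivered until we opt back in (or hand
            %% the stream off to another process).
            ok = quicer:setopt(Stream, active, true),
            connection_loop(Conn);
        {quic, Data, Stream, _Props} when is_binary(Data) ->
            %% Stream data only reaches the mailbox once active
            %% mode has been re-enabled for that stream.
            handle_data(Stream, Data),
            connection_loop(Conn)
    end.
```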
This could be a timing issue: the new streams arrive before you call async_accept_stream.
I think that's what I'm seeing, because when I try to call async_accept_stream for the second stream, the new_stream message is already in the mailbox. Right now Cowboy opens the 3 streams immediately when connecting, so if they're automatically picked up by something in quicer I don't have enough time to accept them properly.
The behavior I would expect is for quicer to not accept the stream at all instead of falling back, and to just wait for me to call async_accept_stream again.
Quicer has to accept the three streams. By design, in the fallback case, if the process gets a new_stream message with is_orphan => true, it could …
Thanks for the explanations. So my confusion was around the accept_stream function. If there has to be an accept_stream function, I would expect it to not reject the stream, to buffer the information it receives for that stream (new_stream and data), and to not increase the flow control for that stream (limiting the amount of data that has to be queued). Then when the application does accept the stream, all the buffered messages can be delivered directly. Otherwise, don't have/don't use the accept_stream function and let the connection process handle incoming streams. I think I will change the code to do that for now.

I suppose it could be nice in some cases to be able to have multiple processes accept streams and deal with them, but if the connection owner also receives new streams (and must spawn/handoff) that kind of defeats the purpose of having separate processes doing that.

On that note, I have not seen a way to configure stream flow control; I only see connection flow control. Perhaps I missed something. For HTTP/3 I am required by the spec to allow at least 1024 bytes for these unidi streams.
Thanks for the feedback. I will add a feature switch to disable fallback and queue streams in the connection context, and let's try that approach.
For stream flow control, look for …
Oh, I forgot to say: another reason to prefer the fallback feature is the case where you have 3 processes spawned for accepting new streams but the peer only creates one. In that case, two processes are just idling there. Imagine you have many connections; we could not just ignore the fact that 2x numOfConnections processes are idling. So the on-demand approach (create a new process when the new_stream msg is received) wins in this case.
Makes sense to do it like this when it's possible, although having a separate process per stream is not always a good idea when you have a lot of connections. For HTTP it only makes sense to create a new process after the request has been validated; immediately creating a new process is an upfront cost that we can delay until after parsing the request. On top of that, QPACK would make things more difficult, because parsing of the request headers can be blocked until we receive the appropriate data on the QPACK unidi streams, and the encoding of the response may change depending on the QPACK unidi streams as well, so using a separate process immediately would require exchanging a lot of Erlang messages to account for peer QPACK state changes. Similarly, the streams may update the local QPACK state. It's simply much easier to keep the QPACK states within a single process (the connection process) and let that process deal with decoding/encoding the headers. Using a separate process for unidi streams would make little sense as well; for HTTP/3 only the bidi streams do any real work. As a result I only have one process handling everything QUIC related: one process per connection, with all streams handled by that same process.

Note that in my comments above I'm not saying it has to be a separate process that calls accept_stream, just that if there is an accept_stream function I wouldn't expect to receive the new_stream message anywhere that hasn't called the function. I am fine with either: the connection process receives all new_stream events and we don't need an accept_stream; or the callers of accept_stream receive at most one new_stream per call (the connection process would have to call it for each stream it wants to accept; it doesn't have to be a separate process).

The issue with the fallback is that if I create a separate process that calls accept_stream, but it doesn't call accept_stream quickly enough, the connection process will receive the new_stream. It is this race condition that I think quicer shouldn't allow. That said, I think the connection process receiving everything makes more sense, and that's what I'll be doing now.
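Under the "one new_stream per accept call" semantics discussed above, the connection process could simply post one accept per stream it is willing to take. A minimal sketch, assuming quicer:async_accept_stream/2 exists with this shape and returns {ok, _} (an assumption about the quicer API):

```erlang
%% Hypothetical helper: the connection process posts one async accept
%% per unidirectional stream it expects (e.g. 3 for HTTP/3), so each
%% accepted stream matches exactly one accept call and no stream is
%% ever delivered via the fallback path.
accept_unidi_streams(_Conn, 0) ->
    ok;
accept_unidi_streams(Conn, N) when N > 0 ->
    {ok, _} = quicer:async_accept_stream(Conn, []),
    accept_unidi_streams(Conn, N - 1).
```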
Thanks for clarifying it! I agree. async accept stream is confusing, and it should not be required, since quicer sends the new_stream msg to the connection owner anyway when there is no available acceptor.
For HTTP/3 I am expected to receive from at least 3 unidi streams.
I wrote a CT test that looks like this:
This prints the following:
The first thing I notice is that the first ct:pal somehow doesn't print all the messages; I'm not sure why that is, but it's not important for this issue. If I do a receive, I receive all these messages in the order they are printed with no issue.
The first problem is that the single async_accept_stream call resulted in 3 unidi streams getting accepted. I did not expect that at all.

The second problem is that I am not receiving the data from the two other unidi streams, even if I put more async_accept_stream calls.