I forgot that in the current http-backend we require a Mutex around the http client, e.g.:
```rust
#[derive(Debug)]
// TODO: once we have hyper as `rama_core` we can drop this mutex,
// as there is no inherent reason for `sender` to be mutable...
pub(super) enum SendRequest<Body> {
    Http1(Mutex<hyper::client::conn::http1::SendRequest<Body>>),
    Http2(Mutex<hyper::client::conn::http2::SendRequest<Body>>),
}
```
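For context: hyper's `send_request` takes `&mut self` on both sender types, so a handle shared behind `&self` has to go through a lock first. A minimal sketch of the resulting send path, assuming a `tokio::sync::Mutex` since the guard is held across an `.await` (an illustration, not the literal rama code):

```rust
use hyper::body::Incoming;
use hyper::{Request, Response};
use tokio::sync::Mutex;

pub(super) enum SendRequest<Body> {
    Http1(Mutex<hyper::client::conn::http1::SendRequest<Body>>),
    Http2(Mutex<hyper::client::conn::http2::SendRequest<Body>>),
}

impl<Body> SendRequest<Body>
where
    Body: hyper::body::Body + 'static,
{
    pub(super) async fn send(
        &self,
        req: Request<Body>,
    ) -> hyper::Result<Response<Incoming>> {
        match self {
            // The guard lives until the response head arrives, so all
            // requests on this connection are serialized; for HTTP/2 that
            // also defeats the multiplexing the protocol is built around.
            Self::Http1(tx) => tx.lock().await.send_request(req).await,
            Self::Http2(tx) => tx.lock().await.send_request(req).await,
        }
    }
}
```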
I bet this already explains a big part of why it is so slow. For sure there is other stuff that can be improved; I haven't profiled yet. But let's first get the fork+embed work of hyper done, so that we can start from a benchmark without this mutex still in place, as it will no longer be required at that point.
To test the theory I ran against a rama-based http server. And yeah, it is a lot faster... Still not as fast as I would hope, but it is better. We can circle back to this issue after the hyper migration has happened.
Started doing some profiling. It seems to have not that much to do with the Mutex (which we no longer use for h2, only for h1).
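Presumably the relevant difference is that `http2::SendRequest` is `Clone` (a cheap handle onto the multiplexed connection task), so it no longer needs a lock, while `http1::SendRequest` is not `Clone` and still does. A sketch of that split, not the literal code:

```rust
use std::sync::Mutex;

use hyper::client::conn::{http1, http2};

#[derive(Debug)]
pub(super) enum SendRequest<Body> {
    // The http1 sender is not Clone: exclusive access still goes through a lock.
    Http1(Mutex<http1::SendRequest<Body>>),
    // The http2 sender is Clone: each caller clones it and gets its own
    // `&mut` handle, letting requests multiplex over a single connection.
    Http2(http2::SendRequest<Body>),
}
```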
It seems a lot of time is spent because we use a connection for only a single request; this is costly, as it means setting up the entire TLS session again for every request...
Connection pooling is gonna have to be done in 0.3 for sure, and done properly. Once that is in place we can also see what we can improve around the TLS usage.
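For reference, what a pool buys us is skipping the TCP + TLS handshake whenever a healthy connection can be reused. A minimal idle-pool sketch (all names hypothetical; a real implementation also needs per-host keying, liveness checks, and a cap on total connections):

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

// `Conn` stands in for an established (TCP + TLS + HTTP) connection.
pub struct Pool<Conn> {
    idle: Mutex<VecDeque<Conn>>,
    max_idle: usize,
}

impl<Conn> Pool<Conn> {
    pub fn new(max_idle: usize) -> Self {
        Self {
            idle: Mutex::new(VecDeque::new()),
            max_idle,
        }
    }

    /// Reuse an already-established connection when one is available,
    /// skipping the handshake entirely.
    pub fn checkout(&self) -> Option<Conn> {
        self.idle.lock().unwrap().pop_front()
    }

    /// Hand a still-healthy connection back for the next request.
    pub fn checkin(&self, conn: Conn) {
        let mut idle = self.idle.lock().unwrap();
        if idle.len() < self.max_idle {
            idle.push_back(conn);
        }
        // otherwise the connection is dropped here, which closes it
    }
}
```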
Current request throughput is pretty poor...
Some benchmarks that came in measure at <400 req/sec... That's embarrassingly slow.