How to constrain client connections? #15

Open
eprothro opened this issue Jul 29, 2013 · 2 comments

@eprothro

I seriously hesitate to post this here, but figure others may have the same question, and it is so specific to this particular implementation on Heroku... Please just let me know if there's a better place to have this answered.

I agree with others that most will be using this with Unicorn. Those experienced with configuring that server have probably discovered and tuned the backlog parameter.

Example:

pipe = if (ENV['RACK_ENV'] == 'production' || ENV['RAILS_ENV'] == 'production')
  puts '=> Unicorn listening to NGINX socket for requests'
  '/tmp/nginx.socket'
else
  puts '=> Unicorn listening to TCP port for requests'
  ENV['PORT'] || 5000
end

# If the router has more than N=backlog requests for our
# workers, we want the queueing to show up at the router and have the
# chance to go to another dyno. In case our workers are dead or slow
# we don't want requests sitting in the unicorn backlog timing out.
# Also, on restart, we don't want more requests than the dyno can
# clear before exit timeout
listen pipe, :backlog => 36

The advantages are described in the comment above, but in short, this most importantly allows the server/dyno to signal back to the routing layer that it can't handle more requests.

Short version of question: What is the best way to maintain these benefits with NGINX in the stack?

Longer version:

With NGINX in place as a reverse proxy, we lose much of the benefit of tuning the backlog.

I assume events.worker_connections is the key config param here. From the NGINX docs:

The worker_connections and worker_processes from the main section allow you to calculate the max clients you can handle:

max clients = worker_processes * worker_connections

In a reverse proxy situation, max clients becomes

max clients = worker_processes * worker_connections/4

Since a browser opens 2 connections by default to a server and nginx uses the fds (file descriptors) from the same pool to connect to the upstream backend.
  • Does this mean in the case of this buildpack's configuration that the NGINX 'backlog depth' = max number of clients is 1024 = ( 4 * ( 1024 / 4) ), or does this math change given the router layer's handling of connections to and from the client?
  • Do connections for which NGINX is buffering the request body or responses count against this worker_connections?
  • What happens when NGINX max clients is hit? (e.g. is the next request gracefully redistributed to another dyno?)
@ryandotsmith
Owner

Hello, @eprothro
Sorry for the delayed response. This week was incredibly busy for me. Anyway, you raise some really interesting questions. Let's try to tease them apart to gain a better understanding of what is going on.

First of all, while I greatly value theoretical optimizations, they are no replacement for experimental evidence. Have you tried benchmarking a Unicorn/NGINX/Heroku stack with various permutations of the backlogs? If so, care to share your results? It has been a while since I did this type of experimentation, and the last time I did, I neglected to publish my results. Perhaps we can collaborate on such a project.

Secondly, what is the problem that you are trying to solve? Do you have latency problems? Are you trying to trim down your 99th percentile latency? Do you have connection timeout issues? Is the Heroku router returning H12s?

Finally, let me take a stab at addressing some of your direct questions:

Does this mean in the case of this buildpack's configuration that the NGINX 'backlog depth' = max number of clients is 1024 = ( 4 * ( 1024 / 4) ), or does this math change given the router layer's handling of connections to and from the client?

Correct. Since NGINX is running in reverse-proxy mode, with worker_connections set to 1024 and 4 worker processes, the maximum number of clients we can handle is 1024.
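
For concreteness, here is a minimal sketch of where those numbers live in an nginx.conf (values are the defaults discussed above; this is illustrative, not the buildpack's exact file):

worker_processes 4;          # NGINX worker processes

events {
  worker_connections 1024;   # per-worker connection limit (client and upstream connections share this pool)
}

# Reverse-proxy rule of thumb from the NGINX docs quoted above:
#   max clients = worker_processes * worker_connections / 4
#               = 4 * 1024 / 4
#               = 1024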

Do connections for which NGINX is buffering the request body or responses count against this worker_connections?

Absolutely. This is one of the great advantages of using NGINX in the way we have it configured. If you are working with slow clients, NGINX will use one of its worker connections to handle the byte transfer. This transfer happens before the request occupies a slot in the Unicorn backlog (i.e., the listen backlog on the socket).
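
To make that concrete, here is a hedged sketch of the relevant proxy wiring (the socket path comes from the Unicorn config above; the upstream name and buffer size are illustrative assumptions, not the buildpack's exact values):

upstream app_server {
  # Unicorn listens on this unix socket, per the config earlier in the thread.
  server unix:/tmp/nginx.socket fail_timeout=0;
}

server {
  listen 5000;

  location / {
    # NGINX buffers the client's request body before passing the request
    # upstream, and proxy_buffering lets it absorb the response for slow
    # clients, so the slow transfer ties up an NGINX worker connection
    # rather than a Unicorn backlog slot.
    proxy_buffering on;
    client_body_buffer_size 16k;
    proxy_pass http://app_server;
  }
}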

What happens when NGINX max clients is hit? (e.g. is the next request gracefully redistributed to another dyno?)

From the docs, it appears that an HTTP 503 response is issued. I would like to test this locally to make sure.

@eprothro
Author

Hi @ryandotsmith, thanks very much for your response. My sincere apologies for just now getting back to you. I had to revert to not using NGINX and move on from this problem, and then I also just recently had a new kiddo.

However, I at least owe you a response:

To answer your question directly: no, I didn't have time to benchmark tweaking the backlog with NGINX in the stack. I had already spent a few days (that I didn't have to begin with) benchmarking potential correlations between client bandwidth and request queueing.

I'd be more than happy to collaborate with you as time allows, but I also know that we're both very busy ;).

Ultimately, the problem I'm trying to solve is 'zombie' dynos (100% of requests sent to a dyno result in an H12 for anywhere from a few seconds to a few minutes). I've inherited a couple different poorly performing applications over the years that suffered from this problem. Recently, for a slew of reasons, I started to wonder if client bandwidth could be a factor.

I hesitate to mention this as the problem I'm trying to solve because the implications and the discussion/troubleshooting around those implications are so application specific and way outside the scope of this issue. Moreover, the 'correct' answer here is 'make the app's slow actions faster'; however, accomplishing that in a timely manner isn't necessarily possible for a lot of reasons not worth going into.

However, given Heroku's random routing algorithm and my experience, I don't feel limiting the backlog is only a theoretical optimization. With either of the applications mentioned above, when I limit the backlog depth (using Unicorn's configuration), I've seen (experimentally, with real traffic) that both the frequency and the impact (number of timeouts before the H12s stop) are reduced with a very small backlog (like 2N, where N = Unicorn concurrency).

This makes perfect sense, statistically, to me; and I'd be happy to discuss if desired.


However, it sounds like achieving this (configuring the stack to use backlog depth for load balancing) with NGINX in the stack would be a problem: when the connection limit is reached, instead of refusing the connection from the router mesh, NGINX responds to the client with a 503.
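
As an aside, if the goal is still to push queueing back to the router, NGINX's listen directive does accept a backlog= parameter, so one untested avenue might be to constrain the accept backlog at the NGINX layer too, in the same spirit as the Unicorn config above (the port and value here are purely illustrative, not the buildpack's defaults):

server {
  # Keep the NGINX accept queue short so that, once this dyno is saturated,
  # excess connection attempts fail at the socket rather than piling up
  # inside NGINX. Whether the router then retries another dyno is exactly
  # the behavior that would need benchmarking.
  listen 5000 backlog=36;
}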

Hope this helps with some context.
