
Throttling, Retrying #69

Open
banksJeremy opened this issue May 16, 2014 · 1 comment

Comments

@banksJeremy
Collaborator

As discussed in #59, we want to have some universal throttling mechanism for client requests.


Throttling

When we post a new message, or edit one, the requests are throttled, and retried when appropriate. This is good, but doesn't apply to any of the other requests we make. We should generalize the existing code, and make it easy to apply for different types of requests. (There would need to be some code specific to recognizing success/temporary error/fatal error for different types of requests.) Ignoring the implementation for a sec, what behaviour do we want?

It might be reasonable to have two request queues, one for read requests and one for write requests. That way we can keep seeing updates even while our chat messages are throttled and being retried. By default each queue might limit us to one request per five seconds, or maybe a smaller interval that increases if we keep sending a lot of requests. Or the read queue could allow a couple of requests to be in flight at once, while writing is limited to a single request at a time.

@banksJeremy banksJeremy self-assigned this May 16, 2014
@banksJeremy banksJeremy changed the title Throttling Throttling, Retrying May 16, 2014
@banksJeremy
Collaborator Author

I've been trying some things, and I'm leaning towards an approach based on the Executor and Future classes in Python 3's concurrent.futures module (using the backported version available on PyPI).

I'm trying to implement a class RequestExecutor(concurrent.futures.Executor). Whenever we want to make a request, we call request = executor.submit(fn, *args, **kwargs), which gives us a RequestFuture(concurrent.futures.Future) instance that will eventually hold the request's result. If we want to block and wait for the result (or the raised exception), we call result = request.result(); alternatively, we can register a callback.
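To illustrate the interface being proposed, here is a minimal sketch using the standard library's ThreadPoolExecutor as a stand-in (the proposed RequestExecutor would expose the same submit() / result() / add_done_callback() surface; fake_request is a hypothetical placeholder for a real request function):

```python
import concurrent.futures

# Stand-in for the proposed RequestExecutor; same Future-based interface.
executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def fake_request(path):
    # Placeholder for a real HTTP request function.
    return "response for " + path

# Submit the request call; we get a Future back immediately.
request = executor.submit(fake_request, "/messages/123")

# Block until the request finishes (re-raises any exception it raised).
result = request.result()

# Or register a callback instead of blocking.
request2 = executor.submit(fake_request, "/rooms/1")
request2.add_done_callback(lambda fut: print("done:", fut.result()))

executor.shutdown(wait=True)
```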

Internally, the RequestExecutor would have a worker thread which runs a single request at a time, enforcing a minimum interval between consecutive request calls. If a request raises RequestAttemptFailed(min_interval=0.0), it will be retried, up to a maximum number of times, after waiting at least min_interval. If the request raises any other type of exception, it fails immediately and is not retried. (So if we wanted to retry on HTTP errors, our request methods would need to catch the specific errors we want and re-raise RequestAttemptFailed. We can probably put some of this boilerplate in a decorator, but we don't want to be reckless and retry more types of exceptions than we should.)
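A rough sketch of that worker-thread logic might look like the following (names and defaults are illustrative, not the final API):

```python
import queue
import threading
import time
import concurrent.futures


class RequestAttemptFailed(Exception):
    """Raised by a request function to ask for a retry after min_interval."""

    def __init__(self, min_interval=0.0):
        super().__init__(min_interval)
        self.min_interval = min_interval


class RequestExecutor(concurrent.futures.Executor):
    """Sketch: one worker thread, a minimum interval between consecutive
    attempts, and bounded retries on RequestAttemptFailed."""

    def __init__(self, interval=5.0, max_attempts=3):
        self._interval = interval
        self._max_attempts = max_attempts
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, fn, *args, **kwargs):
        future = concurrent.futures.Future()
        self._queue.put((future, fn, args, kwargs))
        return future

    def _run(self):
        last_attempt = 0.0
        while True:
            future, fn, args, kwargs = self._queue.get()
            for attempt in range(self._max_attempts):
                # Enforce the minimum interval between consecutive attempts.
                wait = last_attempt + self._interval - time.monotonic()
                if wait > 0:
                    time.sleep(wait)
                last_attempt = time.monotonic()
                try:
                    future.set_result(fn(*args, **kwargs))
                    break
                except RequestAttemptFailed as exc:
                    if attempt + 1 == self._max_attempts:
                        future.set_exception(exc)  # retries exhausted
                    else:
                        time.sleep(exc.min_interval)  # then try again
                except Exception as exc:
                    # Any other exception fails immediately, no retry.
                    future.set_exception(exc)
                    break
```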

As suggested in the OP, this might be used by having a ._read_request_executor and a ._write_request_executor on Client, so that each class of request does not block the other.
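That split might look something like this on Client (a sketch only: the method names and ThreadPoolExecutor stand-ins are hypothetical, filling in for the proposed RequestExecutor):

```python
import concurrent.futures


class Client:
    """Sketch of the read/write split: separate single-worker executors,
    so throttled or retrying writes never block reads."""

    def __init__(self):
        # ThreadPoolExecutor stands in for the proposed RequestExecutor.
        self._read_request_executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        self._write_request_executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

    def get_events(self, room_id):
        # Read requests go through the read queue.
        return self._read_request_executor.submit(self._fetch_events, room_id)

    def send_message(self, room_id, text):
        # Write requests go through the (more tightly limited) write queue.
        return self._write_request_executor.submit(self._post_message, room_id, text)

    def _fetch_events(self, room_id):
        return []  # placeholder for the real HTTP GET

    def _post_message(self, room_id, text):
        return {"room": room_id, "text": text}  # placeholder for the real HTTP POST
```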

(My implementation is at about 150 lines, but it's broken.)

banksJeremy pushed a commit that referenced this issue May 17, 2014
This is a throttling/retrying utility based on backported Python 3
Executors and Futures.

This implementation and test are a mess, but are at least partially
correct.

#69
@banksJeremy banksJeremy removed their assignment Mar 21, 2017