feat: add task to create a pending enterprise learner for an assignment #273
Conversation
Force-pushed from 29d3ee5 to fc7a1e2
Looking good, a couple of random thoughts.
```python
if isinstance(exc, HTTPError):
    # The failure resulted from too many retries. This fact would be a useful thing to record in a "reason"
    # field on the assignment if one existed.
    logger.error('The task failure resulted from too many retries.')
```
I don't quite understand what this means - are you talking specifically about 429 HTTPErrors, or celery task retries? And could there be HTTP errors that aren't about rate limiting that cause this task to fail?
An LMS/enterprise API response with any 4xx/5xx status code will cause the celery task to retry. If execution enters `on_failure`, then we can infer that the failure was the result of exceeding `max_retries` if `exc` is any of the following:
From enterprise-access/enterprise_access/tasks.py, lines 20 to 26 at c9681b7:

```python
autoretry_for = (
    RequestsConnectionError,
    RequestsTimeoutError,
    HTTPError,
    OperationalError,
    BrazeClientError,
)
```
If anything, I should probably write this like:
```diff
-if isinstance(exc, HTTPError):
+if isinstance(exc, (RequestsConnectionError, RequestsTimeoutError, HTTPError, OperationalError)):
     # The failure resulted from too many retries. This fact would be a useful thing to record in a "reason"
     # field on the assignment if one existed.
     logger.error('The task failure resulted from too many retries.')
```
So, as much as I'd like to refactor `LoggedTaskWithRetry` to be more specific about which status codes to treat as retryable, I think that's probably out of scope: it would mean either impacting all other consumers of `LoggedTaskWithRetry`, or adding configurability to `LoggedTaskWithRetry` that could fit into a different ticket/feature request.
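For illustration only, here's a rough sketch (not part of this PR) of how a single task could scope retries to specific status codes without touching `LoggedTaskWithRetry`. The task name, helper, and status-code set (`create_pending_learner_task`, `do_enterprise_api_call`, `RETRYABLE_STATUS_CODES`) are all hypothetical:

```python
# Hypothetical sketch -- not the approach taken in this PR.
from celery import shared_task
from requests.exceptions import HTTPError

# Illustrative set of statuses worth retrying; a real list would come from config.
RETRYABLE_STATUS_CODES = {429, 500, 502, 503, 504}


def do_enterprise_api_call(assignment_uuid):
    """Hypothetical stand-in for the LMS/enterprise API call the real task makes."""


@shared_task(bind=True, max_retries=3)
def create_pending_learner_task(self, assignment_uuid):
    """Hypothetical task that only retries selected HTTP status codes."""
    try:
        do_enterprise_api_call(assignment_uuid)
    except HTTPError as exc:
        status = exc.response.status_code if exc.response is not None else None
        if status in RETRYABLE_STATUS_CODES:
            # Re-enqueue the task; self.retry() raises Retry to short-circuit execution.
            raise self.retry(exc=exc)
        # Non-retryable 4xx (e.g. 400/404): fail immediately so on_failure can record it.
        raise
```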
Force-pushed from 7b3b401 to 103d206
I searched the world wide web again and discovered that there's actually a sanctioned way to unit test celery workers using a celery-provided fixture that spins up a mini celery worker in a separate process (see `conftest.py`). This made it way easier to test the celery task retry logic.
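For context, a minimal sketch of that testing pattern using the fixtures shipped with Celery (`celery.contrib.pytest` provides `celery_app` and `celery_worker`); the always-failing task below is purely illustrative, not from this PR, and assumes a Celery version where `celery_worker.reload()` can register tasks defined inside the test:

```python
# conftest.py: enable the pytest fixtures shipped with Celery.
pytest_plugins = ('celery.contrib.pytest',)
```

```python
import pytest


def test_autoretry_exhausts_max_retries(celery_app, celery_worker):
    """Illustrative test: an always-failing task retries, then surfaces the error to the caller."""
    attempts = []

    @celery_app.task(
        bind=True,
        autoretry_for=(ValueError,),
        retry_kwargs={'max_retries': 2, 'countdown': 0},  # retry immediately to keep the test fast
    )
    def always_fails(self):
        attempts.append(1)
        raise ValueError('boom')

    celery_worker.reload()  # register the task defined above with the embedded worker

    result = always_fails.delay()
    with pytest.raises(ValueError):
        result.get(timeout=10)

    # One initial run plus two retries.
    assert len(attempts) == 3
```

Because the embedded worker runs inside the test's own Python process, the closure-captured `attempts` list is a convenient way to count how many times the task actually executed.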
Force-pushed from 26329d9 to 22313db
This is great, great find on that pytest celery thing. One small request to shore up a unit test.
Force-pushed from 22313db to caad9f9
Force-pushed from caad9f9 to 97baccb
ENT-7539