Solution for "generic Error: Request has been terminated" - increase RAM for Proxy #58

Open
shaunanoordin opened this issue Sep 8, 2020 · 0 comments
Labels: documentation (Improvements or additions to documentation)

shaunanoordin commented Sep 8, 2020

Problems & Solutions

Hello! If you're reading this, you probably tried to fetch an ML Task's results from the Microsoft server at Step 2 (e.g. https://subject-assistant.zooniverse.org/#/tasks/ad68546e-9ec4-4bed-a023-b779cb7fc40f), but encountered a generic error such as: "Error: Request has been terminated. Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc."

[Screenshot: Step 2 of Subject Assistant, showing a generic error]

See, if the ML Task encountered an error on the MS server side, the front end would have said something like "ML Task encountered an error and could not be processed." If the ML Task has expired, or if the ID is incorrect, the front end would have said "Error: ML Task could not be found."

A generic error message means something unexpected has happened. As per issue #55, we've discovered that one reason this might happen is that the Proxy Server ran out of memory while trying to fetch a VERY LARGE ML Task results file.

Side Note: Proxy Server - the Subject Assistant (which sits at *.zooniverse.org) requires a proxy server to fetch results from the Microsoft server (which might sit at *.windows.net, for example) because of the cross-domain (CORS) security restrictions in most browsers.
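
To illustrate what the Proxy Server does, here's a minimal sketch of the general idea (this is NOT the actual Subject Assistant proxy code, just an illustration; it assumes Node 18+, where fetch() is built in): the proxy accepts a ?url= query parameter, fetches that URL server-side, and relays the response back with an Access-Control-Allow-Origin header so the browser will accept it.

    // Minimal CORS proxy sketch (NOT the actual Subject Assistant proxy).
    // Assumes Node 18+, where fetch() is available globally.
    import http from 'node:http';

    const server = http.createServer(async (req, res) => {
      // The front end calls e.g. /?url=https%3A%2F%2Fcameratrap.blob.core.windows.net%2F...
      const requestUrl = new URL(req.url ?? '/', 'http://localhost');
      const target = requestUrl.searchParams.get('url');

      if (!target) {
        res.writeHead(400, { 'Content-Type': 'text/plain' });
        res.end('Missing ?url= parameter');
        return;
      }

      try {
        // Fetch the ML Task results server-side, so the browser's
        // cross-domain restrictions don't apply.
        const upstream = await fetch(target);
        const body = await upstream.text(); // NOTE: buffers the entire results file in RAM

        res.writeHead(upstream.status, {
          'Content-Type': upstream.headers.get('content-type') ?? 'application/json',
          'Access-Control-Allow-Origin': '*', // this header is the whole point of the proxy
        });
        res.end(body);
      } catch (err) {
        res.writeHead(502, { 'Content-Type': 'text/plain' });
        res.end(`Proxy error: ${(err as Error).message}`);
      }
    });

    server.listen(8080);

If the real proxy buffers responses in a similar way, its memory footprint scales with the size of the results JSON, which would match the out-of-memory behaviour described in this issue.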

Debugging, etc

Issue: Clicking "Fetch" at Step 2 results in a generic error.

Analysis: it might be a Proxy memory issue.

How to check if the Proxy Server is encountering memory issues:

  • Go to Step 2 and attempt to fetch an ML Task's results, e.g. https://subject-assistant.zooniverse.org/#/tasks/INSERT-UID-HERE
  • Click Fetch.
  • If the generic error appears, look into your browser's Network tab to find a proxied GET request to Microsoft to, e.g. https://subject-assistant-proxy.zooniverse.org/?url=https%3A%2F%2Fcameratrap.blob.core.windows.net%2Fasync-api-zooniverse%2Fbleep-bloop-bleep-blap
  • Open that URL.
    • If that page shows a JSON file, everything is working as normal.
    • If that page shows a 502 error, it's probably a RAM issue.
  • Open the proxy target URL, e.g. https://cameratrap.blob.core.windows.net/async-api-zooniverse/bleep-bloop-bleep-blap
    • You should receive a legit JSON file. If not, there's another problem.
    • Download the JSON file and observe the size. How large is it? How many entries are in there? (A small helper script for this check is sketched after this list.)
  • Try running an ML Task with a way, way smaller Subject Set.
    • A very small subject set (e.g. 100 images) should work with no issues. You should be able to submit it to the MS server via Hamlet, and then fetch it via the Subject Assistant Step 2.
    • If there ARE issues, there's another problem.
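
As a convenience for the size check above, here's a small helper script (a sketch only, not part of the Subject Assistant codebase; the final URL is a placeholder) that fetches a results URL, direct or proxied, and reports the HTTP status, payload size, and number of top-level entries in the JSON:

    // Quick check of an ML Task results URL (direct or proxied). A sketch only;
    // run on Node 18+, e.g. `npx tsx check-results.ts <url>`.
    async function checkResults(url: string): Promise<void> {
      const response = await fetch(url);
      console.log(`HTTP status: ${response.status}`); // a 502 here suggests the proxy fell over

      const text = await response.text();
      const sizeMb = (new TextEncoder().encode(text).length / (1024 * 1024)).toFixed(2);
      console.log(`Payload size: ${sizeMb} MB`);

      try {
        const json = JSON.parse(text);
        // The exact shape of the results file isn't assumed here; just count
        // top-level entries if it's an array, or keys if it's an object.
        const entries = Array.isArray(json) ? json.length : Object.keys(json).length;
        console.log(`Top-level entries: ${entries}`);
      } catch {
        console.log('Response is not valid JSON, so there may be another problem.');
      }
    }

    // Placeholder URL; substitute the real proxied or direct results URL.
    checkResults(process.argv[2] ?? 'https://example.com/results.json');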

Solution for increasing Proxy Server memory:

  • Go to kubernetes/deployment.tmpl and bump up the memory limits for subject-assistant-proxy-app
  • Reminder: this project is on auto-deploy, so you can just edit the file directly.
        - name: subject-assistant-proxy-app
          image: zooniverse/zoo-ml-subject-assistant:__IMAGE_TAG__
          resources:
            requests:
              memory: "100Mi"  # This was 50Mi previously
              cpu: "10m"
            limits:
              memory: "100Mi"  # This was 50Mi previously
              cpu: "500m"

Note:

  • 100Mi is enough to handle fetching JSON files up to 15.6 MB, though we haven't yet figured out the upper limit.