
JupyterLab returns HTTP 413 error when attempting to upload data #25

Open
vvcb opened this issue Mar 26, 2024 · 7 comments

vvcb (Contributor) commented Mar 26, 2024

When attempting to upload large files via JupyterLab (and this is likely also the case for the RStudio and Code interfaces), an HTTP 413 (Request Entity Too Large) error is returned.

This can be fixed by setting the following annotation on the nginx ingress to an appropriate value. See the ingress-nginx annotation documentation for more information.

e.g. nginx.ingress.kubernetes.io/proxy-body-size: 64m
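
For a quick check on a live cluster, a minimal sketch is shown below. The ingress name is a placeholder, and the namespace is taken from the output later in this thread; in this deployment the lasting fix belongs in the Flux/Helm configuration so it isn't lost on the next reconcile.

```shell
# Find the ingress that fronts JupyterHub.
kubectl get ingress -n jupyterhub

# Apply the annotation directly for testing; <ingress-name> is a placeholder.
kubectl annotate ingress <ingress-name> -n jupyterhub \
  nginx.ingress.kubernetes.io/proxy-body-size=64m --overwrite

# Confirm the annotation is present on the object.
kubectl get ingress <ingress-name> -n jupyterhub \
  -o jsonpath='{.metadata.annotations.nginx\.ingress\.kubernetes\.io/proxy-body-size}'
```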

vvcb added the bug label Mar 26, 2024
vvcb self-assigned this Mar 26, 2024
vvcb closed this as completed in 3cf881b Mar 26, 2024
vvcb reopened this Mar 26, 2024
qcaas-nhs-sjt (Collaborator) commented:

I see you've already put a fix in for this? I can see it in the sandbox environment. Have you assigned it to me because it needs propagating into prod? And, more importantly, why are we both awake looking at this stuff at silly o'clock in the morning?

vvcb (Contributor, Author) commented Mar 29, 2024

I need it in prod, but you are right - we don't need to be doing this at 4am!

qcaas-nhs-sjt (Collaborator) commented:

Right, so I had a look over this and the production system was reporting that it couldn't apply the kustomization:

```
lscsde-config   jupyter                 49d   False   StorageClass/jupyter-default dry-run failed, reason: Invalid: StorageClass.storage.k8s.io "jupyter-default" is invalid: [provisioner: Required value, provisioner: Forbidden: updates to provisioner are forbidden.]
```

The reason it didn't work when you did it is that it was pulling in a bunch of changes made to support the rework of the deployments, but the production environment doesn't have the dependencies required to make this work yet. Only sandbox has been fully updated to support this so far, and it's quite a big change, so there are a few things I need to do to ensure that nothing important ends up broken. I'll therefore want to get that scheduled in for dev, and for production thereafter.
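
For context, the provisioner field on a StorageClass is immutable, which is why the server-side dry run above fails. If the new definition were genuinely needed before the full upgrade, the usual route is to recreate the object rather than patch it; a rough sketch, assuming nothing needs to dynamically provision new volumes through it at that moment (names are taken from the error above):

```shell
# Inspect the StorageClass that Flux is trying to update.
kubectl get storageclass jupyter-default -o yaml

# provisioner cannot be changed in place, so the object has to be deleted and
# recreated from the new definition. Already-bound PVs/PVCs are unaffected.
kubectl delete storageclass jupyter-default

# Let Flux recreate it from the kustomization on the next reconcile.
flux reconcile kustomization jupyter -n lscsde-config
```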

In the meantime, what I've done is take a cut of the last working production release branch of the jupyter flux, `release/0.2.55`, as `release/0.2.55-hotfix-20240329-001`.

I've cherry-picked only your commits into this branch and have updated the prod branch on iac-flux-lscsde to use the new branch.
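
Roughly, that hotfix flow looks like the sketch below (the cherry-picked commit SHA is left as a placeholder):

```shell
# Cut a hotfix branch from the last known-good production release.
git checkout -b release/0.2.55-hotfix-20240329-001 release/0.2.55

# Bring across only the proxy-body-size commits (placeholder SHA).
git cherry-pick <commit-sha>

git push origin release/0.2.55-hotfix-20240329-001

# The prod configuration in iac-flux-lscsde is then pointed at this branch
# so that Flux reconciles from it.
```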

```
PS C:\Users\shaun.turner\Documents\Sources\NHS> kubectl get kustomization/jupyter -n lscsde-config
NAME      AGE   READY   STATUS
jupyter   49d   True    Applied revision: release/0.2.55-hotfix-20240329-001@sha1:416f4d9b6efa8dcf2c290d19a2d90c259a6b780c
PS C:\Users\shaun.turner\Documents\Sources\NHS> kubectl get ingress -n jupyterhub -o yaml
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: 64m
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      xlscsde.nhs.uk/dns-record: jupyter
...
```

vvcb (Contributor, Author) commented Mar 30, 2024

@qcaas-nhs-sjt, thank you for sorting this.

Can you please let me know when you have fixed the production dependencies? We now have live projects on the prod environment and need to be able to configure workspaces (compute/memory/GPU, etc.) fairly quickly to keep the researcher experience reasonably slick.

For instance, my last commit that changes the vCPU/mem limits for the default workspace has not made it into prod.

qcaas-nhs-sjt (Collaborator) commented:

To get this upgraded to the same pathway we will need to do a full upgrade of the environment. This will involve downtime, so we will need to get it scheduled in, send out comms, etc. I'm not sure exactly what's involved, as we still don't have an official process for this. Because of the scope of these changes I'd say we want a good 4 hours of potential downtime; hopefully we won't need that much, but I'd rather be safe than sorry.

vvcb (Contributor, Author) commented Apr 2, 2024

Can we use kubectl to query for all current users of the TRE and send them a bcc email advising of downtime?
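
As a rough sketch of the kubectl side, assuming the zero-to-jupyterhub defaults where each running user server is a pod labelled `component=singleuser-server` carrying a `hub.jupyter.org/username` annotation. Note this only catches users whose server is currently running, not everyone with an account; the hub's admin panel or REST API would be needed for the full list.

```shell
# Print the usernames behind the currently running singleuser servers;
# feed the output into whatever builds the bcc list.
kubectl get pods -n jupyterhub \
  -l component=singleuser-server \
  -o jsonpath='{range .items[*]}{.metadata.annotations.hub\.jupyter\.org/username}{"\n"}{end}'
```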

Stopping the TRE for a day shouldn't be an issue at all, but informing people beforehand will avoid unnecessary "TRE is not working" emails.

@qcaas-nhs-sjt , are you happy to schedule this for tomorrow if everything is ready from your side?

Also, happy to discuss how we do this going forwards. We can modify the announcement banner on the JupyterHub landing page in addition to the email.

qcaas-nhs-sjt (Collaborator) commented:

Email has been sent on this, though as tomorrow is a Wednesday I have scheduled this in for Thursday the 4th.
