Message 'Not Implemented' #7
@DonRichards, thank you for submitting the issue! Based on the error message you provided, it seems to originate from the AWS store. Unfortunately, the error message "Not implemented" is difficult to interpret. Could you please tell me which version of Dataverse you are using? I ran some local tests using Dataverse 6.0 and Localstack, which simulates AWS, but I was unable to replicate the error: both direct uploads to the S3 store and the native upload path worked. I plan to conduct further testing on an actual AWS store and hopefully identify the bug causing the issue.
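For reference, the local tests I ran boil down to something like the snippet below. This is a minimal sketch assuming the `File`/`DVUploader` interface from this library's README; the token, URL, and DOI are placeholders:

```python
from dvuploader import DVUploader, File

# A single test file; filepath is the only required argument.
files = [File(filepath="./sample.fits")]

uploader = DVUploader(files=files)
uploader.upload(
    api_token="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",  # placeholder API token
    dataverse_url="http://localhost:8080",             # local Dataverse 6.0 + Localstack
    persistent_id="doi:10.70122/FK2/XXXXXX",           # placeholder dataset DOI
)
```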
As far as I know, the UI does not support direct uploads to an S3 store; instead, uploads go through the standard HTTP method available in Dataverse's native API. This explains why the UI works properly and suggests that the issue lies with the AWS store.
I found something odd: when I changed a variable name within my code, I got a different error. When I examined the files being passed to the upload, they look like this. Do these values look correct? I would expect fileName and file_id to have something.
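For context, here is roughly how I'm dumping the file objects before handing them to the uploader (a hypothetical debugging snippet; `files` is the batch my script builds):

```python
# Hypothetical debugging loop: print every field of each File object
# in the batch before it goes to the uploader. In my runs, fileName
# and file_id come back empty.
for f in files:
    print(vars(f))
```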
@DonRichards, this is expected. Can you share the error message you received after changing the variable names?
I came across this issue on StackOverflow and found a solution provided by another user. I will implement the fix and open a pull request to see if it resolves the issue. May I ask what size your files are, so I can test this on another server?
Each of the 401,000 files I'm attempting to upload under a single DOI is approximately 1.6 MB in size. I have a script that breaks them up into batches of 20 at a time, so the uploader should only ever be given a list of 20 files. Any idea what I can do from here to get this to work? I'd create a PR if I could, but I don't know this app well enough.
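For what it's worth, my batching script is essentially the following sketch (the directory, credentials, and DOI are placeholders):

```python
from pathlib import Path
from dvuploader import DVUploader, File

BATCH_SIZE = 20
fits_paths = sorted(Path("./data").glob("*.fits"))  # ~401,000 files, ~1.6 MB each

# Hand the uploader slices of 20 files at a time so it never sees
# the full file list at once.
for start in range(0, len(fits_paths), BATCH_SIZE):
    batch = [File(filepath=str(p)) for p in fits_paths[start:start + BATCH_SIZE]]
    DVUploader(files=batch).upload(
        api_token="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",  # placeholder
        dataverse_url="https://my.dataverse.example",      # placeholder
        persistent_id="doi:10.XXXXX/XXXXX",                # placeholder
    )
```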
Great, thanks for the info! The PR is almost ready for submission. I'll run some tests on Demo Dataverse to check for any issues. Once I'm done, I'll let you know so you can test the updated version. I hope this fixes it 😊
Great! Thanks for that!
@DonRichards, I have created pull request #8 that fixes the issue. Unfortunately, the issue is related to streaming files to the S3 backend: AWS is not capable of handling async streams, which is a pity. To test this, I downloaded a sample FITS file and replicated it 2,000 times to simulate a case similar to yours. The error is no longer raised on our test server, and the upload works. The upload to S3 itself is quite fast with the right settings. Can you test and verify that it works on your side?

Regarding the bulk upload in general, would it be an option to use Dataverse's native upload instead? This library supports automatically zipping files into batches of at most 2 GB, which are unzipped on Dataverse's side when direct upload is not enabled. This way, you may avoid the additional time needed to register files when using direct upload.
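In case it helps with reproducing the test setup, I generated the dummy dataset along these lines (a rough sketch; the source file name is arbitrary):

```python
import shutil
from pathlib import Path

src = Path("sample.fits")   # any small FITS file
out = Path("replicated")
out.mkdir(exist_ok=True)

# Copy the sample 2000 times to mimic a dataset with many files.
for i in range(2000):
    shutil.copy(src, out / f"sample_{i:04d}.fits")
```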
Not sure what this error indicates.
I'm trying to upload FITS files to a DOI. I can use the UI and it uploads without an issue.
I'm not sure if this is a problem with the uploader, the way I'm uploading, the dataset in general, etc. Apologies if this turns out not to be associated with the python dvuploader. I assume it is, since the same files upload fine through the UI.