Memory retention causing page to crash #437
I am facing the same issue, and according to the following, this should be related to the md5 method or the sha256 hash:

Well, you're providing the md5 function yourself. I can tell you what we're doing, since the spark and aws lib methods are problematic. I've shared code that uses node's

Well, I never used webpack before and I don't really understand how to do this. Can you give me an example?
I am observing a similar problem when uploading an 8.3 GB file. I tried the SparkMD5 and CryptoJS hash functions and different partSize values (going as low as 1 MB). I tried blueimpMD5, but that crashes even before the upload begins. In all cases, memory use keeps increasing, so it is probably a memory leak. With Chrome, the browser crashes after approximately one gigabyte is uploaded. Firefox manages to upload the file, but the browser interface slows down dramatically after 5 GB or so, becoming completely unresponsive around the 8 GB mark (thankfully the upload continues).
@dtelad11: Same here; I tried the same hash functions with the same results. Did you find a solution to this?

Use Firefox ...
Thanks for the debugging, @dtelad11; this might be a bit harder to track down. I'll try to allocate some time and have a look at it. Any chance you tried debugging it in Chrome DevTools? There is a part of the code where the reference to the md5'd stuff is possibly not deleted. The code is pretty old and ugly though :P I'll see what I can do. Any help is welcome 🙂

Thanks @jakubzitny for looking into this! I did not try to debug this in Chrome DevTools, I'm afraid. Full disclosure: I'm not very adept at JS (I'm a data scientist), so that's as far as I could go with debugging. Sorry I couldn't be of more use!

@jakubzitny Hi, any news?

Same issue here. I'm willing to help fix the issue, but I need a bit more insight.
Thanks for the comments. Regarding the md5 part, I am not really sure; I did not write the code and it is messy, and I did not manage to find proper time for this. Could we start with creating a minimal repro where this always happens, @VincentCharpentier @mtltechtemp?

I'll try to do that this week.

Ok, so I made this sandbox: it is only a form to upload with any AWS and signer config. I would appreciate it if someone could test it and let me know whether memory usage keeps increasing indefinitely during upload. Sorry I cannot provide more for now 😞

Do you know if there's a way to do the signing process locally? I could add a field for the private key as well and sign chunks from the client.

Yes, it is possible; you should use the customAuthMethod. At roughly what file size are you seeing the leaks?

With Google Chrome I can see memory usage increasing due to the "Google Chrome Helper (Renderer)" process on a Mac, but I got similar reports from a customer on Windows.

Were you able to reproduce the issue with the setup?
I tried to update the demo code to use a local customAuthMethod, but I get a strange error when I try to upload:

@jakubzitny can you help?

Thanks for the effort @VincentCharpentier, I didn't really get to this. I've never seen this error; it seems to be failing on

which would mean the signer is set up incorrectly, but it is quite weird.
I'm only using the private key to generate the hmac. Do you see anything wrong in the following?

```js
import crypto from "crypto";

function hmac(key, value) {
  return crypto
    .createHmac("sha256", key)
    .update(value)
    .digest();
}

function customAuthMethod(
  signParams,
  signHeaders,
  stringToSign,
  signatureDateTime,
  canonicalRequest
) {
  return new Promise(resolve => {
    const timestamp = signatureDateTime.substr(0, 8);
    const date = hmac(`AWS4${priv_key}`, timestamp);
    const region = hmac(date, awsRegion);
    const service = hmac(region, "s3");
    const signing = hmac(service, "aws4_request");
    resolve(hmac(signing, stringToSign));
  });
}
```
Okay, that doesn't look very good. You should provide the secret to all the things you're hashing, right? Do you see it, @VincentCharpentier?
I'm not really providing date, region etc. as keys; I'm providing the result of the previous hmac calls as the key. This code is already used in production on the API to sign requests and is working.

```js
function customAuthMethod(
  signParams,
  signHeaders,
  stringToSign,
  signatureDateTime,
  canonicalRequest
) {
  return hmac(
    hmac(
      hmac(
        hmac(
          hmac(
            `AWS4${priv_key}`,
            signatureDateTime.substr(0, 8)
          ),
          awsRegion
        ),
        "s3"
      ),
      "aws4_request"
    ),
    stringToSign
  );
}
```
But honestly, I don't know where that signing method comes from or where to find the relevant documentation used to write it.
You can remove the email. And yeah, the signing is incorrect; you want to sign the content just once, so just create it and sign it. The signing method is the thing that you have, that's the hmac. And the private key is the key from Amazon.
Has anyone found a solution to this, or perhaps an alternative? We are experiencing this as well. I tried digging through the code but could not find a solution; I also think this relates to previous hashes not being cleaned up.

I'm sorry, I don't have any more time for this issue at the moment. I think I will just migrate to another implementation at some point, but that's going to be a very long process.
Just a note here: Chrome 79 (the latest right now) seems to handle uploads well, without memory leaks.

Tested it here on my end. I still experience the problem on Windows with Chrome, even with Chrome 79. The issue does not seem to occur on macOS, though.
Hi. Just confirmed this issue in Brave 1.3, equivalent to Chromium 80. I'm uploading a profile of the memory so we can properly debug what is causing this. Testing with a 100 MB file, we can clearly see that the memory ramps up until the end of the upload. If we extrapolate this behavior to a bigger file, in theory we should eventually see a crash. To check this performance profile, rename the .txt to .json, please. Tomorrow I'll try to take a closer look at it.

Here are my findings after some investigation: on macOS Catalina, on both Brave 1.3 (Chromium 80) and Chromium 78, there's no memory leak. My hypothesis from yesterday was debunked when I profiled the application again: once the upload of a part finishes, the memory goes back to normal, except in cases where the request takes longer than expected, which leads to an increase in memory, as one of the screenshots shows. I'll try to execute this on Windows now.

On Windows 8.1 (ugh) and Chrome 72, the dev tools reported more than 90 samplings missing, and because of that, the performance graph ends during the initial uploads. What I witnessed was that when I tried to upload a 400 MB file, once I finished the profiling after 12 minutes of recording, the devtools window crashed and I lost my data. So if I had to bet, I'd say a memory leak really is playing a role here. I'm not able to reach a conclusion right now, as I need more data. Tomorrow I'll investigate more; today I spent almost an hour trying to make localstack and the port forwarding of my Windows VM work together.
Thanks for the awesome efforts put into this @mattmoreira ! |
I just tried on Windows 8.1 with Chromium 72, took profilings with durations of 60, 60, and 110 seconds, and apparently there are no issues with a memory leak. Possibly my crash yesterday happened due to the long duration of my profiling, plus the fact that I ran it in a 2 GB VM. It would help a lot if you could do small profilings of your upload so we can better understand what's going on, because as it currently stands, I'm unable to reproduce the issue. Once you profile it, please save it and post it on this thread. Also, please use this link, as I've fixed a problem that the previous sandbox had: it was trying to upload the fakepath, and not the actual file content. I'd recommend using localstack to test this as well, as then you won't have to pay for Amazon's usage and it will be easier to run the test, since authentication isn't necessary. Installing localstack: Starting localstack: Creating a bucket with localstack: Data to use in the form fields:
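The actual localstack commands were stripped from this copy of the thread; a minimal sketch of the usual steps looks like the following (the bucket name is a placeholder, and the edge port is an assumption: newer localstack releases expose everything on 4566, while older ones used a separate S3 port such as 4572).

```shell
# Install localstack (requires Python and pip).
pip install localstack

# Start localstack in the foreground.
localstack start

# Create a test bucket through the local S3 endpoint
# (port 4566 assumed; bucket name is a placeholder).
aws --endpoint-url=http://localhost:4566 s3 mb s3://test-bucket
```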
Hi, I just asked our support team, and it seems it has been a long time without customers complaining about this. Without any code change on our side, I would assume this was fixed by browser updates.

I'd say that we should give it another try. @VincentCharpentier, do you know what the OS of the affected users was? Also, could you please download Chromium 72 and try it on Windows 7? Calling @mtltechtemp, @dtelad11, and @Jrubensson, as you all experienced the same issue: are you able to reproduce this with the link that I provided? If so, what are your computer specs, browser version, and OS version?

@mattmoreira: First of all, thank you for working on this. Very much appreciated. I will try to give it a test later today. The setup where I saw the issue was Windows 10 with Chrome 79, tested on a computer with 16 GB of RAM and a Ryzen 2600k CPU.
All, I wrote the second rewrite and went through the same convulsions as you are regarding memory usage. The fact of the matter is that EvaporateJS relies on the browser's implementation to do the heavy lifting. When we looked at potential memory leaks about 2-3 years ago, I concluded that the EvaporateJS code was correctly releasing memory, and that digging into memory issues for each browser was a fool's errand. I am not claiming that much can be improved; in the end, to make multi-part transfers work, we have to ask the browser to allocate a bunch of memory and trust that when we release it, the browser does its part. I concluded that some browser implementations hadn't catered to the demands of multi-part uploads and perhaps were a little lax in their own testing strategies. And thank you all for your support of the project.
Given the outlined scenario, I suggest that we close this issue, as we're unable to advance any further. To avoid this kind of situation in the future, we could perhaps establish that memory-related issues should have a performance profile and memory snapshot attached when opened. That way, even if we're unable to reproduce the failure ourselves, we'll be able to see where it is happening.
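For reporters who want to attach memory numbers alongside a DevTools profile, one lightweight option is to sample heap usage periodically during an upload. A minimal sketch follows; note that `performance.memory` is a real but non-standard Chrome-only API, so the sketch falls back to node's `process.memoryUsage()` when it is absent.

```javascript
// Sample the current JS heap usage, in bytes.
// In Chrome, performance.memory (non-standard) is used;
// elsewhere (e.g. node) we fall back to process.memoryUsage().
function sampleHeapUsage() {
  if (typeof performance !== "undefined" && performance.memory) {
    // usedJSHeapSize is reported in bytes by Chrome.
    return performance.memory.usedJSHeapSize;
  }
  // Node fallback: heapUsed is also in bytes.
  return process.memoryUsage().heapUsed;
}

// Example: log a sample every 5 seconds while an upload runs, then
// clearInterval(timer) when it finishes and paste the numbers here.
// const timer = setInterval(() => console.log(sampleHeapUsage()), 5000);
```

A steadily climbing series of samples across many parts, rather than a sawtooth that returns to baseline after each part, would be the signature of the leak discussed in this thread.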
As an EvaporateJS customer, I want to thank everyone on this thread for the substantial amount of work invested. I did not expect the rabbit hole to go so deep. I wholeheartedly agree that the package cannot be held responsible for browser-specific memory issues. Personally, I am still seeing memory leaks with Chrome. However, we have informed our customers that this issue lies with Google and that they should just use Firefox for large files. This is enough of a solution for us. Once again, my deepest gratitude for your investigation! |
I'm having a serious issue with memory retention on Chrome: the page dies at some point. The plan is to upload large files that could exceed 150 GB, but even a 10 GB file is not possible (the limit could be higher, though, depending on computer specs). Please help me out! Thank you.