Artifacts introduced due to piping chunks encryption #505
Hello @sylido, you need to mention what exactly you do in the pipe. Please share your client- and server-side code.
Hi @dr-dimitru, thanks for responding so fast.
Client-side code, in order:
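(The original snippet isn't reproduced here; below is a minimal, hypothetical sketch of a pipe-based client setup with Meteor-Files. `Images`, `file`, and `encryptChunk` are placeholders, and the exact `pipe` callback signature should be checked against the Meteor-Files docs.)

```js
// Hypothetical sketch: per-chunk encryption via the Meteor-Files pipe API.
// `Images` is a client-side FilesCollection, `file` comes from an <input type="file">,
// and encryptChunk() stands in for whatever cipher is actually used.
const uploader = Images.insert({
  file,
  chunkSize: 'dynamic',
  streams: 'dynamic'
}, false); // autoStart = false, so the pipe can be attached first

uploader.pipe((chunkData) => {
  // Called for every chunk before it is sent to the server,
  // so each chunk ends up encrypted independently.
  return encryptChunk(chunkData);
});

uploader.start();
```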
Server-side code as follows:
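(Likewise, a hypothetical sketch of the server side, where the stored file gets decrypted and re-encrypted after the upload is assembled; `decryptStored` and `encryptWhole` are placeholder cipher helpers, not code from this issue.)

```js
// Hypothetical sketch of the server side: once the upload is assembled,
// the stored file is decrypted and then encrypted once again.
import fs from 'fs';
import { FilesCollection } from 'meteor/ostrio:files';

const Images = new FilesCollection({
  collectionName: 'images',
  onAfterUpload(fileRef) {
    const stored = fs.readFileSync(fileRef.path);            // assembled (encrypted) upload
    const decrypted = decryptStored(stored);                  // undo the client-side encryption
    fs.writeFileSync(fileRef.path, encryptWhole(decrypted));  // store encrypted at rest
  }
});
```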
@sylido thank you for the update. So you should reverse the encoding logic:
Thanks @dr-dimitru, just a clarification here: do you want me to set the chunk size manually instead of dynamic, i.e. 1024 * 256 for a 256 KB chunk size? Then, using that chunk size, read the chunks from the HDD and decode each one of them, then merge the decrypted contents and encrypt the result once. Afterwards, during download, it should get decrypted only once.
Right.
Yes. Ping me if you have any further issues.
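(A sketch of the flow agreed on above, under the assumption that each encrypted chunk's on-disk size is known; `decryptChunk` and `encryptWhole` are hypothetical helpers. As the later comments show, the encrypted chunk size is exactly the tricky part.)

```js
// Hypothetical sketch of the suggested flow: with a fixed client-side chunkSize
// (1024 * 256), split the stored file back into its encrypted chunks, decrypt each,
// merge the plaintext, and encrypt the result once.
import fs from 'fs';

// Placeholder: must match the size each chunk has *after* encryption,
// which is not the same as the plaintext chunk size.
const ENCRYPTED_CHUNK_SIZE = 1024 * 256;

function reassembleAndReencrypt(path) {
  const stored = fs.readFileSync(path);
  const plainParts = [];

  for (let offset = 0; offset < stored.length; offset += ENCRYPTED_CHUNK_SIZE) {
    const encryptedChunk = stored.slice(offset, offset + ENCRYPTED_CHUNK_SIZE);
    plainParts.push(decryptChunk(encryptedChunk));
  }

  const plain = Buffer.concat(plainParts);
  fs.writeFileSync(path, encryptWhole(plain)); // encrypted once; decrypted once on download
}
```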
Hello @sylido, any progress on this one?
Hi @dr-dimitru, I tried implementing it the way you suggested, but I'm encountering an issue where the byte size of a chunk balloons to a different size after it gets encrypted. I think that subsequently causes the decryption on the server side to read less data than needed, and I'm trying to figure out how to match the chunk size before decryption to the chunk size after encryption. I was wondering if there is a way to group the chunks into one through the piping function and then encrypt the whole thing? I'll update here on any progress I make.
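(For illustration, a small standalone Node example, not code from this thread, showing why the sizes stop matching: block-cipher padding and base64 transport both grow the data.)

```js
// An encrypted chunk no longer matches the plaintext chunk size:
// CBC padding adds up to one cipher block, and base64 adds roughly 33% on top.
import crypto from 'crypto';

const plain = Buffer.alloc(1024 * 256);                 // one 256 KB chunk
const key = crypto.randomBytes(32);
const iv = crypto.randomBytes(16);

const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
const encrypted = Buffer.concat([cipher.update(plain), cipher.final()]);

console.log(plain.length);                              // 262144
console.log(encrypted.length);                          // 262160 (PKCS#7 padding)
console.log(encrypted.toString('base64').length);       // 349548 (base64 expansion)
```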
I'm afraid it will be easier to encrypt the whole file, reading it with FileReader, then upload the result. WDYT?
I continued trying to get the per-chunk approach working, but couldn't reliably determine the proper byte length that needs to be read on the server. So I switched to what you suggested: reading all the file contents with FileReader and using a WebWorker to do the actual encryption. The worker was used to unblock the browser's UI, as it's completely blocked by the encryption, which could take upwards of 10 seconds depending on the file size. Afterwards I gave the encrypted contents to the upload call.
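(A minimal, hypothetical sketch of that whole-file approach; `Images`, `file`, and the worker script and its message protocol are placeholders, not the code used here.)

```js
// Hypothetical client-side sketch: read the file once with FileReader, encrypt it
// off the main thread in a Web Worker (so the UI is not blocked for ~10 s on large
// files), then upload the encrypted result.
const reader = new FileReader();

reader.onload = () => {
  const worker = new Worker('/encryption-worker.js'); // placeholder worker script

  worker.onmessage = (event) => {
    // event.data is assumed to be the encrypted file contents produced by the worker
    const encryptedFile = new File([event.data], file.name);
    Images.insert({ file: encryptedFile, chunkSize: 1024 * 256 });
    worker.terminate();
  };

  worker.postMessage(reader.result); // hand the raw ArrayBuffer to the worker
};

reader.readAsArrayBuffer(file);
```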
@sylido thank you for the update.
Thanks a lot @dr-dimitru. Feel free to close the issue, or I will, if you think it's too niche a case to spend time debugging. Just let me know!
Yes,
My thoughts:
Hello @sylido, thank you for sharing your thoughts on it. For now this is marked as [DOCUMENTATION]; I'll try to extract the useful info we've gathered during this discussion into a usable article about file encryption.
When uploading a file with streams/chunks set to dynamic, or with the chunk size set to anything smaller than the size of the file being uploaded, artifacts are introduced into the file. They are quite apparent during download, at least for an image file, i.e. binary data.
On the client side we use the pipe function to encrypt the data before it reaches the server. It seems like every chunk gets encrypted, and then the chunks somehow get merged when the file is stored on the server. The file is then decrypted and encrypted once again. When the file is downloaded, the artifacts are present, i.e. missing or wrong pixel data.
If the chunk size is set to a number bigger than the size of the file, the piping function encrypts only once, and it seems to work just fine in that case. The problem is that having huge chunks pretty much freezes the browser while the file is getting uploaded, which can take minutes.
Am I doing something wrong, or is there an issue with binary data being preserved correctly when it's chunked?
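(For reference, the two configurations being contrasted here, as hypothetical sketches; `Images`, `file`, and `encryptChunk` are placeholders, not code from this issue.)

```js
// 1) Dynamic/small chunks + per-chunk encryption via pipe => artifacts in the stored file.
const chunked = Images.insert({ file, chunkSize: 'dynamic', streams: 'dynamic' }, false);
chunked.pipe(encryptChunk); // hypothetical cipher helper
chunked.start();

// 2) One chunk bigger than the whole file => the pipe runs once and the result is intact,
//    but the browser freezes while the single huge chunk is encrypted and uploaded.
const single = Images.insert({ file, chunkSize: file.size + 1024 }, false);
single.pipe(encryptChunk);
single.start();
```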
ostrio:[email protected]
[email protected]
Seems like a Client issue