
add chunk transfers #100

Open
wants to merge 1 commit into base: develop
Conversation

axlbonnet
Contributor

To have a mature data module, it is necessary to have a way to exchange big files. Currently CARMIN only allows exchanging whole files, which is not a viable solution for big files over HTTP.

I propose to add ways to download and upload files by chunks in CARMIN.
This is pretty straightforward on the download side, with two new GET parameters, offset and size, to specify a range of bytes: GET /path/mybigfile.zip?action=content&offset=110000000&size=10000000
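As an illustration, here is a minimal client-side sketch of that download loop. The endpoint layout follows the proposal above, but `download_in_chunks` and its parameters are hypothetical; the `session` argument is assumed to be a `requests.Session`-like object:

```python
def iter_chunks(total_size, chunk_size):
    """Yield (offset, size) pairs covering total_size bytes without overlap."""
    offset = 0
    while offset < total_size:
        size = min(chunk_size, total_size - offset)
        yield offset, size
        offset += size

def download_in_chunks(session, base_url, path, total_size, chunk_size):
    """Fetch a file piece by piece via the proposed offset/size parameters.

    base_url and the helper names are assumptions for illustration only.
    """
    data = bytearray()
    for offset, size in iter_chunks(total_size, chunk_size):
        resp = session.get(
            f"{base_url}{path}",
            params={"action": "content", "offset": offset, "size": size},
        )
        resp.raise_for_status()
        data.extend(resp.content)
    return bytes(data)
```

The chunk-boundary helper is pure, so a client can also use it to resume from the last successful offset after a failure.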

On the upload side, this needs deeper changes. A chunked upload is initialized through POST /path/mybigfile.zip with a payload:

{ "size":  123456789 }

This only declares a new upload; no bytes are sent yet. The platform returns an enriched Upload object:

{
  "identifier": "upload-xxxx",
  "size": 123456789,
  "transfered": 0,
  "platformPath": "/mybigfile.zip",
  "endDate": 1530215177
}

Then the user sends the chunks one by one on POST /upload/upload-xxxx. On every chunk, the platform updates the transfered field. On the last chunk, the upload is complete and a Path object representing the new file is returned.
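The flow above can be simulated end to end. The endpoint names follow the proposal, but the platform below is an in-memory stand-in written for illustration, not a real CARMIN implementation:

```python
class FakePlatform:
    """In-memory stand-in for the proposed upload endpoints."""

    def __init__(self):
        self.uploads = {}

    def post_path(self, platform_path, payload):
        # POST /path/{file}: declare the upload; no bytes are sent yet.
        upload = {
            "identifier": f"upload-{len(self.uploads)}",
            "size": payload["size"],
            "transfered": 0,  # field spelling kept from the proposal
            "platformPath": platform_path,
            "data": bytearray(),
        }
        self.uploads[upload["identifier"]] = upload
        return {k: v for k, v in upload.items() if k != "data"}

    def post_upload(self, identifier, chunk):
        # POST /upload/{identifier}: append one chunk, update "transfered".
        upload = self.uploads[identifier]
        upload["data"].extend(chunk)
        upload["transfered"] += len(chunk)
        if upload["transfered"] >= upload["size"]:
            # Last chunk: return a Path object describing the new file.
            return {"platformPath": upload["platformPath"],
                    "size": upload["size"]}
        return {k: v for k, v in upload.items() if k != "data"}

def upload_in_chunks(platform, platform_path, content, chunk_size):
    """Client side: declare the upload, then send the chunks in order."""
    upload = platform.post_path(platform_path, {"size": len(content)})
    result = None
    for offset in range(0, len(content), chunk_size):
        result = platform.post_upload(
            upload["identifier"], content[offset:offset + chunk_size]
        )
    return result
```

Because chunks are appended in transmission order, the client never supplies an offset, which is what makes overlapping chunks impossible in this scheme.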

This is a draft opened to discussion.

@sapetnioc
Contributor

I think partial download is a very good idea.

For the upload, I wonder what is supposed to happen if the client sends overlapping chunks. Doing that is a bad idea, but if the API allows it, the server must be ready to handle bad clients. What about a more restrictive API that would make this impossible? The call to POST /path/mybigfile.zip would add a chunk size to its payload:

{
    "size":  123456789,
    "chunk_size": 12345
}

Then, the client would only be allowed to transmit non-overlapping chunks (all having the same predefined size, except the last one), which would be concatenated in transmission order to build the file.
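Under this stricter scheme the server can reject a malformed chunk outright. A sketch of that check, with hypothetical function and parameter names:

```python
def check_chunk(declared_size, chunk_size, transfered, chunk):
    """Validate one incoming chunk under the fixed-chunk-size scheme.

    Every chunk must be exactly chunk_size bytes, except the final one,
    which carries whatever remains of the declared size.
    """
    remaining = declared_size - transfered
    if remaining <= 0:
        raise ValueError("upload already complete")
    expected = chunk_size if remaining > chunk_size else remaining
    if len(chunk) != expected:
        raise ValueError(f"expected {expected} bytes, got {len(chunk)}")
```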

@axlbonnet
Contributor Author

axlbonnet commented Jun 29, 2018

With what I proposed, it is not possible to send overlapping chunks. It works the same way as what you propose, except that each chunk can have a different size.
I'm OK with imposing a fixed size; that would give the server a way to verify the chunks sent and to detect errors.

@sapetnioc
Contributor

Ok, I misunderstood. I thought there was an offset for uploading too. So I do not know whether it is necessary to impose a fixed chunk size.

@glatard
Contributor

glatard commented Jul 9, 2018

Discussed on July 9th.

  • Add a property in platformProperties to make this feature optional.
  • Then merge.
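For example, the capability could be advertised with a boolean in platformProperties; the property name below is a placeholder for illustration, not part of the spec:

```json
{
  "isChunkedTransferSupported": true
}
```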
