Added error handling when a chunk is not downloaded from an Azure blob #19
base: main
Conversation
To download a file from an Azure blob we use the libraries "github.com/Azure/azure-pipeline-go/pipeline" and "github.com/Azure/azure-storage-blob-go/azblob". We divide the file into chunks and download it chunk by chunk until the whole file is retrieved. Each chunk download can be retried up to 20 times. However, if a chunk fails all 20 attempts (for example, due to connectivity issues), the error is neither handled nor reported. This patch stores the errors encountered while downloading a chunk and, once all 20 retries have failed, stops the download and reports the error.

Signed-off-by: Ioannis Sfakianakis <[email protected]>
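The retry-and-report flow the description outlines can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names (`downloadChunk`, `downloadWithRetries`); it is not the actual patch and does not use the azblob SDK:

```go
package main

import (
	"errors"
	"fmt"
)

const maxRetries = 20

// downloadChunk simulates downloading one chunk; here it always fails,
// standing in for persistent connectivity problems.
func downloadChunk(offset int64) ([]byte, error) {
	return nil, fmt.Errorf("connection reset while reading chunk at offset %d", offset)
}

// downloadWithRetries retries a chunk up to maxRetries times, collecting
// every error. If all attempts fail, it stops and reports them together
// (errors.Join requires Go 1.20+).
func downloadWithRetries(offset int64) ([]byte, error) {
	var attempts []error
	for i := 0; i < maxRetries; i++ {
		data, err := downloadChunk(offset)
		if err == nil {
			return data, nil
		}
		attempts = append(attempts, err)
	}
	return nil, fmt.Errorf("chunk at offset %d failed after %d retries: %w",
		offset, maxRetries, errors.Join(attempts...))
}

func main() {
	if _, err := downloadWithRetries(0); err != nil {
		fmt.Println("download aborted:", err)
	}
}
```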
7c95a36 to d84ba7d
}
},
})
if err != nil {
I think this `if err != nil {` is not needed. As I understand it, `dr.Body` is just a constructor, and the download starts below with `io.CopyBuffer`.
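The reviewer's point (constructing the reader succeeds; failures only surface once copying starts) can be illustrated with a plain `io.Reader`. This is a generic sketch with a hypothetical `failingReader`, not the azblob API:

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"os"
)

// failingReader stands in for a body whose construction succeeds but
// whose first Read fails, e.g. a broken network stream.
type failingReader struct{}

func (failingReader) Read(p []byte) (int, error) {
	return 0, errors.New("connection dropped")
}

func main() {
	// "Constructing" the reader involves no I/O and cannot fail here.
	var body io.Reader = failingReader{}

	// The error only shows up once io.CopyBuffer starts reading.
	buf := make([]byte, 1024)
	_, err := io.CopyBuffer(os.Stdout, body, buf)
	fmt.Println("copy error:", err)
}
```

This is why checking `err` at construction time would miss the failures the patch is trying to catch.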
Also, is this async in some way? I haven't looked at the Azure API in quite some time.
I do not know the whole API, but the way we use it is as follows: we send 16 concurrent requests, each downloading a 1 MB chunk. Once all of them complete, we move on to the next 16 chunks, until the whole file is downloaded.
I think you need to sign the DCO.
DCO missing
"github.com/Azure/azure-storage-blob-go/azblob". We divide the file into chunks and download it chunk by chunk until we download the whole file. For each chunk, we can retry up to 20 times. However, if the download of a chunk fails in all of 20 times (for example due to connectivity issues), we do not handle
and report the error. This patch stores all the errors when downloading a chunk and in case all 20 retries have failed it stops the download and reports the error.