squashfs: fix received X bytes instead of expected minimal 40 on listing #201
Conversation
The following is the current process:
I think this change of yours says:
Is that correct?
Something in the directory structure must have triggered it. What exactly was it? Could we isolate that?
Yes, that is correct, though in step 4 we may have read enough data already. I'll see if I can create a squashfs which causes the problem - watch this space!
I managed to make this squashfs which demonstrates the problem. (I had to zip it for upload.) It is the tensorflow docker image but with all the files truncated to 0 size and all the symlinks removed. It is 816K so not too big! I could write a test which does a directory traversal of it and checks that we can see all the files. What do you think? A good enough test? I didn't manage to make a synthetic test!
It isn't, although I suspect we could make it smaller. The real question is what is triggering it. Wouldn't a simple image with just one file (and therefore 1 inode, or maybe 2 if you count the directory), but where we call the same code, be enough? I am trying to understand how this is triggered.
Where do you run into this problem with your sample above? What actual path is it taking? I assume it is not the first, but the second or third? What is the actual case triggering it?
Force-pushed from 7b0779d to fb28d9d
I managed to make a test image eventually! I think it only triggers when the directory reading is
I managed to get this to trigger by using 4k page sizes and 300 files with xattrs (to force them to be extended types). I've added this to the commit. In the process I discovered another bug! There was a small typo in the xattr parsing which I've also fixed - the new test doesn't pass without the fix.
It is all about the right combination to trigger the problem. Thank you, indeed, for tracking it down. You have done so much for this PR, I hate to ask for more. Can I ask you to expand the comment in buildtestsqs.sh to explain the corner case we are dealing with? No other requests, then we can merge this in.
When unpacking large squashfs archives you occasionally get this error: `received 38 bytes instead of expected minimal 40`. This is due to the inode changing type (i.e. it is actually an extended inode) when read from the directory, but the size not being updated. Most of the time there is enough data for the larger inode, but not always.
Before this fix the code crashed with errors like `panic: runtime error: slice bounds out of range [282:270] [recovered]` when reading xattrs. This was caused by a mix-up of indices in the code. This fix is tested by TestSquashfsReadDirCornerCases, which now runs clean.
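The class of bug this commit message describes is easy to demonstrate. The sketch below is hypothetical (the function names and offsets are invented, not the PR's code), but it shows how swapping two slice indices when extracting an xattr value from a metadata block produces a panic with exactly the `[high:low]` shape reported above.

```go
package main

import "fmt"

// xattrValueBuggy models the pre-fix code: the start and end offsets of
// an xattr value inside a metadata block are used in the wrong order,
// so slicing panics with "slice bounds out of range [282:270]".
func xattrValueBuggy(block []byte, pos, size int) []byte {
	start := pos
	end := pos + size
	return block[end:start] // BUG: indices swapped -> bounds panic
}

// xattrValueFixed is the corrected version: low index first.
func xattrValueFixed(block []byte, pos, size int) []byte {
	return block[pos : pos+size]
}

func main() {
	block := make([]byte, 512)
	fmt.Println(len(xattrValueFixed(block, 270, 12))) // 12

	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	_ = xattrValueBuggy(block, 270, 12) // panics: bounds out of range
}
```

With `pos = 270` and `size = 12` the buggy version slices `block[282:270]`, matching the panic text in the commit message; the fixed version returns the 12-byte value.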
Force-pushed from fb28d9d to fe4709c
I've done that now :-) Let me know if you need any other changes.
Nope, looks great. Thank you!
When unpacking large squashfs archives you occasionally get this
error:

    received 38 bytes instead of expected minimal 40

This is due to the inode changing type (i.e. it is actually an extended
inode) when read from the directory, but the size not being updated.
Most of the time there is enough data for the larger inode, but not
always.
I couldn't think of a way to test this.
I have an archive which demonstrates the problem, which you can download here: tensorflow.sqfs, if you need it.
It was made with this script