Core: Ignore split offsets when the last split offset is past the file length #8860
Conversation
cc @bryanck
Branch force-pushed from cc03e1e to f407974.
Branch force-pushed from a0adedb to 32e94b4.
return null;
}

// If the last split offset is past the file size this means the split offsets are corrupted and
// should not be used
[doubt] Wondering if throwing an exception or having a precondition check would be helpful to identify the buggy writer?
Do you mean throwing at read time (i.e. instead of returning null, we throw)? If so, I don't think we want to do that, because it would unnecessarily break future readers and split offsets are optional anyway; this approach takes the stance of detecting the corruption and making sure read logic doesn't leverage the corrupted metadata.
If you mean throwing at the time of writing the manifest entry (a precondition check in the constructor of BaseFile), I went back and forth on this, but the problem is the upgrade case: when some process rewrites a set of files (including some corrupted entries), it would fail due to the precondition. The benefit is that it would prevent spreading the previous corruption, which is nice, but at the cost of failing operations. Considering that the corrupted split offsets will be ignored at read time anyway, failing at write time seems needless.
To prevent spreading previously corrupted state, the split offsets could be recomputed when corruption is detected at manifest write time (a sort of "fix-up" process). This requires more investigation though; I'm not sure how feasible it is or what the perf implications are (e.g. for Parquet we'd need to go through the block metadata again).
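For illustration, a rough sketch of what that fix-up could look like for Parquet: recompute the split offsets from the row-group starting positions in the footer. The helper name and its placement are assumptions, not part of this PR.

import java.util.List;
import java.util.stream.Collectors;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;

// Hypothetical "fix-up" helper: derive split offsets from the Parquet footer
// instead of trusting possibly-corrupted file metadata. Illustrative only.
class SplitOffsetFixup {
  static List<Long> recomputeSplitOffsets(ParquetMetadata footer) {
    // Each row group's starting position is a valid split offset
    return footer.getBlocks().stream()
        .map(BlockMetaData::getStartingPos)
        .collect(Collectors.toList());
  }
}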
Let me know what you think!
Agree with you!
I was mostly coming from the point of view of a buggy writer that has already committed this metadata (one that didn't use the core library, since we expose split offsets via ParquetMetadata, or that purposefully passed wrong offsets). Such writers will never be caught because we silently skip the malformed offsets. I was wondering whether a warning log during reads would help: it would let readers know about the corruption and that reads are a bit un-optimized, and it could help in backtracking to the buggy writer. Thoughts?
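To make the suggestion concrete, here is a hedged sketch of what such a warning could look like; the class, logger, and message are illustrative assumptions, not what the PR actually does.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: return the offsets if they look sane, otherwise log a
// warning and return null so readers fall back to fixed-size splits.
class SplitOffsetsValidator {
  private static final Logger LOG = LoggerFactory.getLogger(SplitOffsetsValidator.class);

  static long[] validate(String path, long[] splitOffsets, long fileSizeInBytes) {
    if (splitOffsets == null || splitOffsets.length == 0) {
      return null;
    }

    if (splitOffsets[splitOffsets.length - 1] >= fileSizeInBytes) {
      LOG.warn(
          "Ignoring corrupted split offsets for {}: last offset {} is past file length {}",
          path,
          splitOffsets[splitOffsets.length - 1],
          fileSizeInBytes);
      return null;
    }

    return splitOffsets;
  }
}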
Looks like split offsets being in ascending order isn't enforced during reads either; we just swallow it, so we should be fine in this case as well then :P
if (splitOffsets != null && ArrayUtil.isStrictlyAscending(splitOffsets)) {
  // Offsets are present and strictly ascending: split along the recorded offsets
  return () ->
      new OffsetsAwareSplitScanTaskIterator<>(
          self(), length(), splitOffsets, this::newSplitTask);
} else {
  // Offsets are missing or unordered: fall back to fixed-size splits
  return () ->
      new FixedSizeSplitScanTaskIterator<>(
          self(), length(), targetSplitSize, this::newSplitTask);
}
}
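For reference, a strictly-ascending check along these lines could look like the sketch below; the actual ArrayUtil.isStrictlyAscending implementation in Iceberg may differ.

// Sketch of a strictly-ascending check over split offsets (illustrative)
static boolean isStrictlyAscending(long[] offsets) {
  for (int i = 1; i < offsets.length; i++) {
    if (offsets[i] <= offsets[i - 1]) {
      return false;
    }
  }
  return true;
}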
Branch force-pushed from 32e94b4 to 8f83bdc.
Merged. Thanks for getting this ready, @amogh-jahagirdar! For context, this catches bad metadata written by 1.4.0 and ignores it. This is needed so that tables with bad split offsets can still be read by Spark, Hive, and Flink.
Follow-up to #8834
This change ignores split offsets when the last split offset is past the file length. This is a defensive check to make sure that readers don't attempt to use corrupted split offsets stemming from the issue described in #8834. Note: it is possible for the split offsets to be corrupted while the last split offset is still within the file length. In that case it is still acceptable to use the split offsets, because they are guaranteed to be within the file boundaries and therefore won't break the reading logic.
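A minimal sketch of that defensive check, assuming a BaseFile-style class that stores splitOffsets as a long[] next to the file length in bytes; field names and the list conversion are illustrative and not necessarily the exact code in this PR.

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public List<Long> splitOffsets() {
  if (splitOffsets == null || splitOffsets.length == 0) {
    return null;
  }

  // A last split offset past the end of the file indicates corrupted offsets
  // (see #8834); return null so readers fall back to fixed-size splitting.
  if (splitOffsets[splitOffsets.length - 1] >= fileSizeInBytes) {
    return null;
  }

  return Arrays.stream(splitOffsets).boxed().collect(Collectors.toList());
}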