
Core: Ignore split offsets when the last split offset is past the file length #8860

Merged (1 commit) on Oct 17, 2023

Conversation

@amogh-jahagirdar (Contributor) commented Oct 17, 2023:

Follow-up to #8834.

This change ignores split offsets when the last split offset is past the file length. It is a defensive check to make sure that readers don't attempt to use corrupted split offsets stemming from the issue described in #8834. Note: it is possible for split offsets to be corrupted while the last split offset still falls within the file length; in that case it is still acceptable to use them, since offsets guaranteed to be within the file boundaries won't break the reading logic.
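
In spirit, the check looks roughly like the following (a minimal sketch, not the merged patch; the class name and the list conversion are assumptions, while splitOffsets and fileSizeInBytes mirror the manifest fields described above):

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    class SplitOffsetsSketch {
      private final long[] splitOffsets;   // raw offsets recorded in the manifest entry
      private final long fileSizeInBytes;  // file length recorded in the manifest entry

      SplitOffsetsSketch(long[] splitOffsets, long fileSizeInBytes) {
        this.splitOffsets = splitOffsets;
        this.fileSizeInBytes = fileSizeInBytes;
      }

      // Returns the split offsets, or null when they are missing or corrupted.
      List<Long> splitOffsets() {
        if (splitOffsets == null || splitOffsets.length == 0) {
          return null;
        }

        // Defensive check: a last offset at or past the file length means the
        // offsets were corrupted by the writer, so readers must not use them.
        if (splitOffsets[splitOffsets.length - 1] >= fileSizeInBytes) {
          return null;
        }

        return Arrays.stream(splitOffsets).boxed().collect(Collectors.toList());
      }
    }

Readers that get null back simply plan splits as if no offsets had been written.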

@amogh-jahagirdar (Contributor, Author) commented:

cc @bryanck

@nastra added this to the Iceberg 1.4.1 milestone on Oct 17, 2023
@amogh-jahagirdar force-pushed the defensive-splitoffset-read branch from cc03e1e to f407974 on October 17, 2023 16:21
@amogh-jahagirdar force-pushed the defensive-splitoffset-read branch 3 times, most recently from a0adedb to 32e94b4, on October 17, 2023 16:51
Review thread on the added check:

      return null;
    }

    // If the last split offset is past the file size this means the split offsets are corrupted and
    // should not be used
@singhpk234 (Contributor):

[doubt] Wondering if throwing an exception or adding a precondition would be helpful to identify the buggy writer?

@amogh-jahagirdar (Contributor, Author) Oct 17, 2023:

Do you mean throwing at read time (i.e. instead of returning null, we throw)? If so, I don't think we want to do that, because it would unnecessarily break future readers, and split offsets are optional anyway. This approach takes the stance of detecting the corruption and making sure the read logic doesn't leverage the corrupted metadata.

If you mean throwing at the time of writing the manifest entry (a precondition check in the constructor of BaseFile), I went back and forth on this, but the problem there is the upgrade case: when some process rewrites a set of files (including some corrupted entries), it would fail due to the precondition. The benefit is that it would prevent spreading the previous corruption, which is nice, but at the cost of failing operations. Considering that corrupted split offsets will be ignored at read time anyway, failing at write time seems needless.

To prevent spreading previously corrupted state, the split offsets could be recomputed at manifest write time when corruption is detected (a sort of "fix-up" process). This requires more investigation, though; it's not clear how feasible it is or what the perf implications are (e.g. for Parquet we'd need to go through the block metadata again).
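
For Parquet, such a fix-up could look roughly like this (a sketch only; recomputeSplitOffsets is a hypothetical helper, and it assumes the footer's ParquetMetadata is still available at manifest-write time):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.parquet.hadoop.metadata.BlockMetaData;
    import org.apache.parquet.hadoop.metadata.ParquetMetadata;

    class SplitOffsetFixup {
      // Hypothetical fix-up: rebuild the split offsets from the Parquet footer
      // instead of trusting possibly corrupted manifest metadata. Each row
      // group's starting position is a valid split offset.
      static List<Long> recomputeSplitOffsets(ParquetMetadata footer) {
        List<Long> offsets = new ArrayList<>();
        for (BlockMetaData block : footer.getBlocks()) {
          offsets.add(block.getStartingPos());
        }
        return offsets;
      }
    }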

let me know what you think!

@singhpk234 (Contributor) Oct 17, 2023:

Agree with you!

I was mostly coming from the point of view of a buggy writer (one that didn't use the core lib, since we expose split offsets via ParquetMetadata, or that purposefully passed wrong offsets) and that has already committed this metadata. Such writers will never be caught, because we silently skip the malformed offsets. I was wondering about adding a warning log during reads, so that readers know the offsets are corrupted and that reads will be a bit unoptimized; that could help in backtracking the buggy writer. Thoughts?
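
Such a warning could slot into the defensive check, for example (a sketch, not the merged code; the logger setup, method shape, and message wording are assumptions):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    class SplitOffsetsWarning {
      private static final Logger LOG = LoggerFactory.getLogger(SplitOffsetsWarning.class);

      // Hypothetical variant of the check that warns before ignoring the
      // offsets, so corrupted metadata can be traced back to its writer.
      static boolean usable(String filePath, long[] splitOffsets, long fileSizeInBytes) {
        if (splitOffsets == null || splitOffsets.length == 0) {
          return false;
        }

        long last = splitOffsets[splitOffsets.length - 1];
        if (last >= fileSizeInBytes) {
          LOG.warn(
              "Ignoring corrupted split offsets for {}: last offset {} is past file length {}",
              filePath, last, fileSizeInBytes);
          return false;
        }
        return true;
      }
    }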

@singhpk234 (Contributor):

Looks like split offsets being in ascending order isn't enforced during reads either; we just swallow it and fall back, so we should be fine in this case as well then :P

      // Use the exact split offsets only when present and strictly ascending;
      // otherwise fall back to fixed-size splits.
      if (splitOffsets != null && ArrayUtil.isStrictlyAscending(splitOffsets)) {
        return () ->
            new OffsetsAwareSplitScanTaskIterator<>(
                self(), length(), splitOffsets, this::newSplitTask);
      } else {
        return () ->
            new FixedSizeSplitScanTaskIterator<>(
                self(), length(), targetSplitSize, this::newSplitTask);
      }
    }
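
So when the offsets are missing, corrupted, or out of order, planning falls back to FixedSizeSplitScanTaskIterator: reads still succeed, just with less optimal split boundaries.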

@rdblue merged commit ad602a3 into apache:main on Oct 17, 2023
45 checks passed
@rdblue (Contributor) commented Oct 17, 2023:

Merged. Thanks for getting this ready, @amogh-jahagirdar!

For context, this catches bad metadata written by 1.4.0 and ignores it. It is needed so that tables with bad split offsets can still be read by Spark, Hive, and Flink.
