Backport PR #16204 to 8.14: Avoid logging file-not-found errors when DLQ segments are removed concurrently between writer and reader. #16249
Backport PR #16204 to 8.14 branch, original message:
Release notes
Bugfix: avoid logging file-not-found errors when DLQ segments are removed concurrently between writer and reader.
What does this PR do?
Reworks the logic that deletes the oldest DLQ segments to be more resilient to file-not-found errors, and avoids logging warning messages that give the user nothing actionable to fix.
This commit reimplements the comparator used on the DLQ reader side to identify fully consumed segments when `clean_consumed` is enabled, so that warnings about file-not-found exceptions are no longer logged. That condition can manifest when the writer side also deletes segments to satisfy the `drop_older` storage policy.

It also updates the `deleteSegment` method so that no warning is emitted when the file to remove does not exist, since that is a condition that can legitimately occur during execution.
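The two ideas are easier to see in code. The following is a minimal sketch, not the actual Logstash source: the class name `SegmentCleaner`, the `segmentId` helper, and the log messages are assumptions. It only illustrates ordering segments without reading their contents, and treating `NoSuchFileException` as a benign race rather than a warning.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.Optional;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Illustrative sketch only; names and structure are assumptions.
class SegmentCleaner {
    private static final Logger logger = LogManager.getLogger(SegmentCleaner.class);

    // Order segments by the numeric id embedded in the file name ("<id>.log")
    // instead of reading their contents, so sorting never touches a file
    // that the writer may have deleted in the meantime.
    static final Comparator<Path> SEGMENT_ORDER =
            Comparator.comparingLong(SegmentCleaner::segmentId);

    static long segmentId(Path segment) {
        String name = segment.getFileName().toString();
        return Long.parseLong(name.substring(0, name.indexOf('.')));
    }

    // Delete a fully consumed segment, treating "file not found" as a benign
    // race with the writer's drop_older policy rather than a warning.
    Optional<Path> deleteSegment(Path segment) {
        try {
            Files.delete(segment);
            return Optional.of(segment);
        } catch (NoSuchFileException e) {
            // The writer already removed this segment; nothing actionable
            // for the user, so log at debug level at most.
            logger.debug("Segment {} was already removed by the writer", segment);
            return Optional.empty();
        } catch (IOException e) {
            logger.warn("Unable to delete segment {}", segment, e);
            return Optional.empty();
        }
    }
}
```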
Why is it important/What is the impact to the user?
This PR avoids warning the user with log messages about a condition that can occur in the normal execution flow. When the reader and the writer are deleting from the same set of segments, one of the two can encounter a ghost file: a file that appears in the directory listing but is gone by the time the actual file operation runs, because the other pipeline has already removed it.
Checklist
- [ ] I have made corresponding changes to the documentation
- [ ] I have made corresponding change to the default configuration files (and/or docker env variables)
- [ ] I have added tests that prove my fix is effective or that my feature works

Author's Checklist
How to test this PR locally
Create an index in an Elasticsearch cluster and close it, so that indexing against it generates events in the DLQ. Then configure one upstream pipeline with the DLQ enabled and the storage policy set to `drop_older`, and another downstream pipeline with `clean_consumed` set to true. In this way the two pipelines compete for access to the DLQ's tail segments, generating the reported error.

Use pipeline definitions along the lines of the sketch below in your `config/pipelines.yml`. A configuration that caps the DLQ at no more than 2 segments (10 MB per segment with the default configuration) is useful to generate the error, because it makes it more probable that the error manifests.
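The original pipeline definitions are not reproduced here; the following is a minimal sketch under stated assumptions. The pipeline ids, the DLQ path, the `generator` input, the index name, and the `localhost:9200` endpoint are illustrative placeholders; `dead_letter_queue.max_bytes`, `dead_letter_queue.storage_policy`, and the `dead_letter_queue` input's `clean_consumed`/`commit_offsets` options are standard Logstash settings.

```yaml
# config/pipelines.yml -- illustrative sketch, not the original definitions
- pipeline.id: upstream
  dead_letter_queue.enable: true
  dead_letter_queue.storage_policy: drop_older
  # Cap the DLQ at roughly two segments (10 MB each by default) so the
  # writer starts dropping the oldest segment quickly.
  dead_letter_queue.max_bytes: 20mb
  config.string: |
    input { generator { } }
    # Indexing into a closed index makes every event land in the DLQ.
    output { elasticsearch { hosts => ["localhost:9200"] index => "dlq_test" } }

- pipeline.id: downstream
  config.string: |
    input {
      dead_letter_queue {
        path => "/path/to/logstash/data/dead_letter_queue"
        pipeline_id => "upstream"
        commit_offsets => true
        clean_consumed => true
      }
    }
    output { stdout { codec => rubydebug } }
```

With the upstream DLQ capped at roughly two segments, the writer's `drop_older` policy and the downstream reader's `clean_consumed` cleanup end up deleting from the same tail segments, which is exactly the race this PR handles.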
Related issues
Use cases
Screenshots
Logs
Example of the error that this PR resolves.