Add long running log store tests. #419
Conversation
Force-pushed from 6f1a673 to 0db6b23.
Force-pushed from 0db6b23 to 54dd65e.
We also need to add a flip point when we add new chunks to the journal_vdev, where we persist the private data, think through what will happen on reboot, and verify it with a test case.
Ideally we should have a long-running test, either at the log store level or the journal vdev level, that truncates periodically for a few hours and makes sure everything keeps running fine. We can review these, see whether there are any comments from others, and create issues (not necessarily to be included in this PR).
test_log_store_long_run.cpp is doing what you mentioned. Will have to run it on the 85 namespace.
Added this in the doc.
Force-pushed from 54dd65e to 52e158d.
Right, is the truncation point randomized?
Better to create issues, otherwise we will lose track of which ones already exist and which are TODOs.
Yes, see test_log_store_long_run.cpp:459.
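For illustration only, here is a minimal sketch of what such a long-running loop with a randomized truncation point could look like. The FakeLogStore type, the burst size, and the duration are assumptions invented for this example; the real test is test_log_store_long_run.cpp, which exercises the actual log store API.

```cpp
// Hypothetical sketch of a long-running append/truncate loop with a
// randomized truncation point. FakeLogStore is a stand-in invented for
// this example, not the HomeStore log store API.
#include <cassert>
#include <chrono>
#include <cstdint>
#include <map>
#include <random>
#include <string>

struct FakeLogStore {
    std::map< uint64_t, std::string > entries; // lsn -> payload
    uint64_t next_lsn{0};

    uint64_t append(std::string payload) {
        entries.emplace(next_lsn, std::move(payload));
        return next_lsn++;
    }
    // Drop every entry with lsn <= upto_lsn.
    void truncate(uint64_t upto_lsn) {
        entries.erase(entries.begin(), entries.upper_bound(upto_lsn));
    }
};

int main() {
    FakeLogStore store;
    std::mt19937_64 rng{std::random_device{}()};

    // Run for a few hours, as suggested in the discussion.
    const auto deadline = std::chrono::steady_clock::now() + std::chrono::hours(4);
    uint64_t last_truncated{0};

    while (std::chrono::steady_clock::now() < deadline) {
        // Append a burst of entries.
        for (int i = 0; i < 1000; ++i) { store.append("log-entry"); }

        // Pick a randomized truncation point between the previous
        // truncation point and the current tail, so boundary cases
        // (e.g. truncation landing near chunk edges) get exercised.
        std::uniform_int_distribution< uint64_t > dist{last_truncated, store.next_lsn - 1};
        const uint64_t trunc_lsn = dist(rng);
        store.truncate(trunc_lsn);
        last_truncated = trunc_lsn;

        // Everything at or before the truncation point must be gone.
        assert(store.entries.empty() || store.entries.begin()->first > trunc_lsn);
    }
    return 0;
}
```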
Created a ticket to use version instead of created time (#441) |
Force-pushed from 52e158d to a728a20.
Fix truncation issues in boundary cases. Release chunks if the truncation point crosses the end-of-chunk boundary. Enable the logstore tests except the parallel write and truncate test case. Truncation can cause the data start to move to the next chunk's start offset; change the truncate API to return that offset.
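As a minimal sketch of the idea behind that API change (using a simplified Chunk/JournalChunks model invented for this example, not the actual HomeStore types), a truncate could release fully-truncated chunks and return the resulting data start offset, which jumps to the next chunk's start when the truncation point falls in a chunk's trailing hole:

```cpp
#include <algorithm>
#include <cstdint>
#include <deque>

// Hypothetical chunk descriptor; end_of_chunk is treated here as an
// absolute offset marking where valid data ends, so the hole is
// [end_of_chunk, start_offset + size).
struct Chunk {
    uint64_t start_offset;
    uint64_t size;
    uint64_t end_of_chunk;
};

class JournalChunks {
public:
    void add_chunk(Chunk c) { chunks_.push_back(c); }

    // Truncate everything before trunc_offset and return the resulting
    // data start offset, which may jump to the next chunk's start.
    uint64_t truncate(uint64_t trunc_offset) {
        while (!chunks_.empty()) {
            const Chunk& head = chunks_.front();
            if (trunc_offset < head.end_of_chunk) {
                // Truncation point is inside this chunk's live data.
                data_start_ = std::max(trunc_offset, head.start_offset);
                return data_start_;
            }
            // Truncation point is at/past end_of_chunk (in the trailing
            // hole or beyond): release the whole chunk.
            chunks_.pop_front();
        }
        data_start_ = trunc_offset; // all chunks released
        return data_start_;
    }

private:
    std::deque< Chunk > chunks_;
    uint64_t data_start_{0};
};

int main() {
    JournalChunks jc;
    jc.add_chunk({0, 100, 80});    // valid data up to 80, hole [80, 100)
    jc.add_chunk({100, 100, 150}); // valid data up to 150
    // Truncating at 90 lands in the first chunk's hole: the chunk is
    // released and the data start moves to the next chunk's start (100).
    return jc.truncate(90) == 100 ? 0 : 1;
}
```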
Force-pushed from a728a20 to e8acbc2.
Changes.
More details.
Journal vdev maintains a list of chunks to store all the log entries. New log entries are appended to the last chunk in the list (right side / tail offset), and truncation is applied to the head of the chunk list (left side / data start offset). Whenever we append log entries and there is not enough space, we create a new chunk and append it to the list, so a log group (a batch of log entries) never crosses chunks. As a result there can be a hole at the end of each chunk, marked by end_of_chunk in the chunk's private data; the hole lies between end_of_chunk and (chunk_start + chunk_size). When a read reaches this hole, there is no data, so we skip it and move to the next chunk. Similarly, if the truncation point falls inside that hole, we release the whole chunk, move to the next chunk, and set data_start_offset to the start of the next chunk.
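To make the append side of that description concrete, here is a hedged sketch of the mechanism, with types and names (Chunk, JournalVDevSketch, alloc_for_log_group) assumed for illustration rather than taken from the code: a log group that does not fit in the tail chunk seals that chunk by recording end_of_chunk (leaving a hole up to the chunk end) and is allocated from a freshly added chunk.

```cpp
// Sketch of the append path described above; Chunk and JournalVDevSketch
// are illustrative stand-ins, not the actual HomeStore classes.
#include <cstdint>
#include <vector>

struct Chunk {
    uint64_t start_offset; // absolute offset where the chunk begins
    uint64_t size;         // chunk size in bytes
    uint64_t write_offset; // next free byte (absolute)
    uint64_t end_of_chunk; // where valid data ends once the chunk is sealed
};

class JournalVDevSketch {
public:
    explicit JournalVDevSketch(uint64_t chunk_size) : chunk_size_{chunk_size} {}

    // Reserve space for a log group; a log group never spans two chunks,
    // so if the tail chunk cannot hold it we seal the tail (recording
    // end_of_chunk) and allocate from a new chunk.
    uint64_t alloc_for_log_group(uint64_t group_size) {
        if (chunks_.empty() || free_in_tail() < group_size) {
            seal_tail_and_add_chunk();
        }
        Chunk& tail = chunks_.back();
        const uint64_t off = tail.write_offset;
        tail.write_offset += group_size;
        return off;
    }

private:
    uint64_t free_in_tail() const {
        const Chunk& tail = chunks_.back();
        return tail.start_offset + tail.size - tail.write_offset;
    }

    void seal_tail_and_add_chunk() {
        uint64_t next_start = 0;
        if (!chunks_.empty()) {
            Chunk& tail = chunks_.back();
            // Bytes in [end_of_chunk, start_offset + size) form the hole
            // that readers skip and that truncation treats as releasable.
            tail.end_of_chunk = tail.write_offset;
            next_start = tail.start_offset + tail.size;
        }
        chunks_.push_back(Chunk{next_start, chunk_size_, next_start, next_start + chunk_size_});
    }

    std::vector< Chunk > chunks_;
    uint64_t chunk_size_;
};

int main() {
    JournalVDevSketch vdev{100};
    const uint64_t a = vdev.alloc_for_log_group(60); // fits in the first chunk at offset 0
    const uint64_t b = vdev.alloc_for_log_group(60); // does not fit: hole at [60, 100), new chunk at 100
    return (a == 0 && b == 100) ? 0 : 1;
}
```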