issue 259/260: incremental resync #311
I don't think we need the Context with a read count etc., which is reimplementing future/promise, right? Suggest creating a promise, moving it into the callback, and calling setValue there. We don't need the mutex, cv, and atomic count explicitly.
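For illustration, a minimal sketch of the promise-per-read approach being suggested here; `async_read`, `fetch_all`, and their signatures are assumptions for the sketch, not this PR's code:

```cpp
#include <folly/Function.h>
#include <folly/futures/Future.h>
#include <folly/futures/Promise.h>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the real per-blk async read; completes inline here.
void async_read(size_t /*idx*/, folly::Function< void() > on_done) { on_done(); }

// One promise per outstanding read; the read callback fulfills it, so no explicit
// mutex, condition variable, or atomic read count is needed.
folly::SemiFuture< std::vector< folly::Try< folly::Unit > > > fetch_all(size_t num_reads) {
    std::vector< folly::SemiFuture< folly::Unit > > futs;
    futs.reserve(num_reads);
    for (size_t i = 0; i < num_reads; ++i) {
        folly::Promise< folly::Unit > p;
        futs.emplace_back(p.getSemiFuture());
        // Move the promise into the completion callback and do setValue there.
        async_read(i, [p = std::move(p)]() mutable { p.setValue(folly::Unit{}); });
    }
    return folly::collectAll(std::move(futs));
}
```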
We discussed this; somehow I was hitting a hang while using future/promise via collectAll* here (tried a few variations), even though the receiving side doing async_write is already calling collectAllUnsafe. I think it is because this is the server thread and collectAllUnsafe() somehow blocks, which causes the hang? We can increase the server thread count to see if that resolves it.
src/lib/manager_impl.cpp
constexpr auto grpc_server_threads = 1u;
But on the receiving side doing async_write, it is the client thread, and the batch append blocks there anyway if the data being written has not completed yet.
nit: if the first rreq is already larger than max_batch_size we have a no-op, though there seems to be special handling in fetch_data_from_remote.
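For context, a minimal sketch of the batching pattern under discussion; `rreq` and `max_batch_size` follow the names above, everything else is assumed rather than taken from this PR:

```cpp
#include <cstddef>
#include <vector>

struct rreq {
    size_t data_size; // bytes addressed by this request's remote_blkid
};

// Group requests into batches of roughly max_batch_size bytes. An rreq that is by
// itself larger than max_batch_size still ends up in a batch of its own instead of
// producing an empty (no-op) batch.
std::vector< std::vector< rreq > > make_batches(std::vector< rreq > reqs, size_t max_batch_size) {
    std::vector< std::vector< rreq > > batches;
    std::vector< rreq > cur;
    size_t cur_size = 0;
    for (auto& r : reqs) {
        if (!cur.empty() && cur_size + r.data_size > max_batch_size) {
            batches.push_back(std::move(cur));
            cur.clear();
            cur_size = 0;
        }
        cur_size += r.data_size;
        cur.push_back(std::move(r));
    }
    if (!cur.empty()) { batches.push_back(std::move(cur)); }
    return batches;
}
```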
maybe
Normally I tend to update the size after a certain operation has completed, for ease of error handling, etc.
But you have a good point here: if a single rreq's (not necessarily the first one's) remote_blkid addresses as much as its maximum blk count, which is 65535, it could be as large as 64MB in the nuobject case (1K blk size) and 256MB for a 4KB blk size, and we have to handle this case. This will go away if we use the gRPC streaming API, which we probably also need for baseline resync. Will mark it as TODO for now, as this could become a moot point if we switch to the streaming API.
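For reference, the worst case per rreq works out as (assuming the 1KB and 4KB block sizes mentioned above):

```latex
65535 \times 1\,\text{KB} \approx 64\,\text{MB}, \qquad 65535 \times 4\,\text{KB} \approx 256\,\text{MB}
```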
Will add a new test case (in the next PR) with one rreq's remote_blkid addressing 65535 blks, which can serve as replication's limit test.
Nit: the reason we were earlier passing a vector pointer instead of a vector by value is that, down the line, we capture this vector in collectAllUnsafe().thenValue() and do expensive multiple allocations. You might argue that this function is not in the critical path and not mind the additional capture. My only thought is that it might be helpful if we reuse this function for any reason in the future, but it's an optional nit comment.
We can still do std::move on a vector to avoid allocation (line:440 also does it), right?
It is a vector now because of the staging handling, and we can't do start_index/end_index with a pointer to the vector, as that won't work with std::move for the whole vector.
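A minimal sketch of the move-vs-pointer point, with hypothetical `blkid_t`/`do_async_write` stand-ins (not this PR's code): taking the vector by value and moving it again into the `thenValue()` capture avoids both the copy and the raw pointer:

```cpp
#include <cstdint>
#include <vector>
#include <folly/futures/Future.h>

struct blkid_t { uint64_t id; };

// Stub for the real per-blkid async write; completes immediately here.
folly::SemiFuture< folly::Unit > do_async_write(blkid_t const&) { return folly::makeSemiFuture(); }

// Caller does write_batch(std::move(my_vec)); the vector is moved in, then moved into
// the continuation capture, staying alive until all writes finish with no copy made.
folly::Future< folly::Unit > write_batch(std::vector< blkid_t > blkids) {
    std::vector< folly::SemiFuture< folly::Unit > > futs;
    futs.reserve(blkids.size());
    for (auto const& b : blkids) { futs.emplace_back(do_async_write(b)); }
    return folly::collectAllUnsafe(std::move(futs))
        .thenValue([blkids = std::move(blkids)](auto&&) { return folly::unit; });
}
```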
Not sure how we switch to the current leader if the originator is down.
@sanebay If the originator is down, those requests time out. But later, when a new leader is elected, that leader will send the new data, and a new rreq will be created to fetch the data and commit.
If the originator is removed, a new leader is elected, and it is sending resync, won't we keep timing out for each entry because we don't know who the leader is? We won't be able to move forward. One option is to first check with raft who the leader is and, if not found, ask all replicas who the leader is.
It is explained at line:485. The new leader will re-send the append entries with originator set to this new leader. We just need to handle failure here if the originator goes down.
Is there any case where we can error out here but not get new append entries? E.g. temporary network flips, or even a restart before raft times out (so that the leader doesn't change)?
Responded in the other place for this comment (looks like a duplicate?).