feat: parallel partial witness handling in the partial witness actor #12656

Open · wants to merge 8 commits into base: master
Conversation

@stedfn (Contributor) commented on Dec 20, 2024:

This PR unblocks the main thread of the PartialWitnessActor by offloading partial witness handling to separate threads.

This yields a considerable reduction in state witness distribution latency:

[Figure: state witness distribution latency before and after the change]

The chart comes from a forknet experiment with 50 nodes, with each state witness artificially padded to a size of 30 MB.
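For readers unfamiliar with the pattern, here is a minimal self-contained sketch (hypothetical names and types, not nearcore's actual API) of how detaching message handling onto worker threads keeps an actor's main loop responsive:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical message type standing in for a partial encoded state witness part.
struct PartialWitnessMsg {
    part: Vec<u8>,
}

// Stand-in for validate_partial_encoded_state_witness.
fn validate_part(part: &[u8]) -> bool {
    !part.is_empty()
}

// Stand-in for forwarding the part to the chunk validators over the network.
fn forward_part(part: Vec<u8>) {
    println!("forwarding {} bytes", part.len());
}

fn main() {
    let (tx, rx) = mpsc::channel::<PartialWitnessMsg>();

    // Actor main loop: it only dequeues messages and spawns workers,
    // so a large witness never blocks the messages queued behind it.
    let actor = thread::spawn(move || {
        let mut workers = Vec::new();
        for msg in rx {
            workers.push(thread::spawn(move || {
                if validate_part(&msg.part) {
                    forward_part(msg.part);
                }
            }));
        }
        for w in workers {
            w.join().unwrap();
        }
    });

    // Simulate one padded 30 MB witness part, as in the forknet experiment.
    tx.send(PartialWitnessMsg { part: vec![0u8; 30 * 1024 * 1024] }).unwrap();
    drop(tx); // close the channel so the actor loop exits
    actor.join().unwrap();
}
```

The actual PR uses a spawner abstraction rather than raw `thread::spawn` (see the diff below), but the effect is the same: validation and forwarding run off the actor's main thread.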

codecov bot commented on Dec 20, 2024:

Codecov Report

Attention: Patch coverage is 80.50847% with 23 lines in your changes missing coverage. Please review.

Project coverage is 70.56%. Comparing base (5749548) to head (53246b4).
Report is 1 commit behind head on master.

| Files with missing lines | Patch % | Lines |
|--------------------------|---------|-------|
| ...idation/partial_witness/partial_witness_tracker.rs | 75.47% | 11 Missing and 2 partials ⚠️ |
| ...alidation/partial_witness/partial_witness_actor.rs | 82.14% | 7 Missing and 3 partials ⚠️ |
Additional details and impacted files
```diff
@@           Coverage Diff           @@
##           master   #12656   +/-   ##
=======================================
  Coverage   70.55%   70.56%
=======================================
  Files         847      847
  Lines      172685   172777   +92
  Branches   172685   172777   +92
=======================================
+ Hits       121845   121922   +77
- Misses      45736    45757   +21
+ Partials     5104     5098    -6
```
| Flag | Coverage Δ |
|------|------------|
| backward-compatibility | 0.16% <0.00%> (-0.01%) ⬇️ |
| db-migration | 0.16% <0.00%> (-0.01%) ⬇️ |
| genesis-check | 1.36% <0.00%> (-0.01%) ⬇️ |
| linux | 69.26% <77.96%> (+0.02%) ⬆️ |
| linux-nightly | 70.14% <79.66%> (-0.02%) ⬇️ |
| pytests | 1.66% <0.00%> (-0.01%) ⬇️ |
| sanity-checks | 1.47% <0.00%> (-0.01%) ⬇️ |
| unittests | 70.39% <80.50%> (+<0.01%) ⬆️ |
| upgradability | 0.20% <0.00%> (-0.01%) ⬇️ |

Flags with carried forward coverage won't be shown.


@stedfn changed the title from "parallel psw handling in actor" to "feat: parallel partial witness handling" on Jan 3, 2025
@stedfn changed the title from "feat: parallel partial witness handling" to "feat: parallel partial witness handling in the partial witness actor" on Jan 3, 2025
@stedfn self-assigned this on Jan 3, 2025
@stedfn marked this pull request as ready for review on January 3, 2025 at 13:24
@stedfn requested a review from a team as a code owner on January 3, 2025 at 13:24
Comment on lines -412 to +415
```diff
-        )? {
-            self.forward_state_witness_part(partial_witness)?;
-        }
+        self.partial_witness_spawner.spawn("handle_partial_encoded_state_witness", move || {
+            // Validate the partial encoded state witness and forward the part
+            // to all the chunk validators.
+            match validate_partial_encoded_state_witness(
+                epoch_manager.as_ref(),
+                &partial_witness,
+                &signer,
+                runtime_adapter.store(),
+            ) {
+                Ok(true) => {
+                    network_adapter.send(PeerManagerMessageRequest::NetworkRequests(
+                        NetworkRequests::PartialEncodedStateWitnessForward(
+                            target_chunk_validators,
+                            partial_witness,
+                        ),
+                    ));
+                }
```
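One mechanical detail worth noting: a closure handed to a spawner like this must be `'static`, so the shared handles it uses (`epoch_manager`, `network_adapter`, and so on) are cloned before the `move` rather than borrowed from `self`. A minimal self-contained sketch of that capture pattern, with hypothetical stand-in types:

```rust
use std::sync::Arc;
use std::thread;

struct Actor {
    epoch_manager: Arc<String>,   // stand-in for an Arc<dyn EpochManagerAdapter>-style handle
    network_adapter: Arc<String>, // stand-in for an Arc-backed network sender
}

impl Actor {
    fn handle(&self) {
        // Clone the cheap Arc handles; `self` itself is not moved.
        let epoch_manager = Arc::clone(&self.epoch_manager);
        let network_adapter = Arc::clone(&self.network_adapter);
        thread::spawn(move || {
            // The closure owns its own handles, so it can keep running
            // after `handle` has returned.
            println!("{} / {}", epoch_manager, network_adapter);
        })
        .join()
        .unwrap();
    }
}

fn main() {
    let actor = Actor {
        epoch_manager: Arc::new("epoch_manager".into()),
        network_adapter: Arc::new("network_adapter".into()),
    };
    actor.handle();
}
```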
@stedfn (Contributor, Author) commented:
@shreyan-gupta it seems that we do not store the first partial witness received from the chunk producer, and instead rebuild the state witness from the forwarded parts. While storing one partial witness faster won't make much of a difference, I was wondering whether this was intended or whether I missed something?
