Add blocks in batches #19003

Open · wants to merge 1 commit into base: main
25 changes: 8 additions & 17 deletions chia/simulator/add_blocks_in_batches.py
```diff
@@ -8,7 +8,6 @@
 from chia.types.full_block import FullBlock
 from chia.types.peer_info import PeerInfo
 from chia.types.validation_state import ValidationState
-from chia.util.batches import to_batches
 from chia.util.ints import uint32
```


```diff
@@ -35,20 +34,12 @@ async def add_blocks_in_batches(

     vs = ValidationState(ssi, diff, None)

-    for block_batch in to_batches(blocks, 64):
-        b = block_batch.entries[0]
-        if (b.height % 128) == 0:
-            print(f"main chain: {b.height:4} weight: {b.weight}")
-        # vs is updated by the call to add_block_batch()
-        success, state_change_summary = await full_node.add_block_batch(
-            block_batch.entries, PeerInfo("0.0.0.0", 0), fork_info, vs
-        )
-        assert success is True
-        if state_change_summary is not None:
-            peak_fb: Optional[FullBlock] = await full_node.blockchain.get_full_peak()
-            assert peak_fb is not None
-            ppp_result: PeakPostProcessingResult = await full_node.peak_post_processing(
-                peak_fb, state_change_summary, None
-            )
-            await full_node.peak_post_processing_2(peak_fb, None, state_change_summary, ppp_result)
+    # vs is updated by the call to add_block_batch()
+    success, state_change_summary = await full_node.add_block_batch(blocks, PeerInfo("0.0.0.0", 0), fork_info, vs)
```
@almogdepaz (Contributor) commented on Dec 12, 2024:
We sometimes call add_blocks_in_batches with huge lists of blocks, since we expect it to batch them for validation; this seems to diverge more from what we do in the full node.

Also, this doesn't add blocks in batches anymore, it just adds all the blocks, so the name is misleading.

Contributor (PR author) replied:

The number of blocks validated concurrently is limited by the size of the ThreadPoolExecutor, and blocks are streamed into its job queue. So I don't think the size of the list is a problem; we'll do our best to keep all available threads busy at all times, without stalling the pipeline every 64 blocks.

Yes, I agree that this makes the name misleading. Do you have a suggestion for a better name? I think the name add_blocks() may be a bit too generic.
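The distinction the author draws here can be sketched with a toy example. This is not Chia's actual validation code; `validate`, the block list, and all sizes are illustrative stand-ins. It contrasts submitting work in fixed chunks (where the pool idles at each chunk boundary until the slowest item finishes) with streaming everything into the executor's internal job queue:

```python
# Hypothetical sketch of the two scheduling strategies discussed above;
# names and numbers are illustrative, not Chia's actual code.
from concurrent.futures import ThreadPoolExecutor
import time


def validate(block: int) -> int:
    time.sleep(0.001)  # stand-in for CPU-bound block validation
    return block


blocks = list(range(256))


def chunked(pool: ThreadPoolExecutor) -> list:
    # Submit 64 jobs, wait for all of them, then submit the next 64.
    # One slow block stalls the whole pipeline at each chunk boundary.
    out = []
    for i in range(0, len(blocks), 64):
        out.extend(pool.map(validate, blocks[i : i + 64]))
    return out


def streamed(pool: ThreadPoolExecutor) -> list:
    # Submit everything at once; the executor's queue feeds idle threads
    # continuously, so no thread waits at a chunk boundary.
    return list(pool.map(validate, blocks))


with ThreadPoolExecutor(max_workers=8) as pool:
    assert chunked(pool) == streamed(pool) == blocks
```

Both strategies produce the same ordered results; they differ only in how well they keep the worker threads saturated.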

```diff
+    assert success is True
+    if state_change_summary is not None:
+        peak_fb: Optional[FullBlock] = await full_node.blockchain.get_full_peak()
+        assert peak_fb is not None
+        ppp_result: PeakPostProcessingResult = await full_node.peak_post_processing(peak_fb, state_change_summary, None)
+        await full_node.peak_post_processing_2(peak_fb, None, state_change_summary, ppp_result)
     await full_node._finish_sync(uint32(max(0, fork_height)))
```
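For context, the `to_batches` helper that this PR removes iterated over fixed-size slices of the block list. The following is a minimal stand-in written for illustration, assuming only that the helper yields objects with an `.entries` slice; the real `chia.util.batches` implementation may differ in detail:

```python
# Minimal stand-in for the removed to_batches helper, to show what the
# old loop iterated over. Not the real chia.util.batches implementation.
from dataclasses import dataclass
from typing import Iterator, List, Sequence, TypeVar

T = TypeVar("T")


@dataclass
class Batch:
    entries: List  # one fixed-size slice of the input sequence


def to_batches(items: Sequence[T], batch_size: int) -> Iterator[Batch]:
    # Yield the input in consecutive slices of at most batch_size items.
    for i in range(0, len(items), batch_size):
        yield Batch(entries=list(items[i : i + batch_size]))


sizes = [len(b.entries) for b in to_batches(list(range(150)), 64)]
# 150 items in batches of 64 -> slices of 64, 64, and 22
```

With the old loop, each 64-entry batch went through `add_block_batch` separately; after this PR, the whole list is handed over in a single call.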