Enforce max_block_size #1382
Conversation
As this is a distinct new feature of our block-building logic, we should add a unit test that fails if we don't enforce the max block size.
Force-pushed from 382e4e6 to 91a1589
@tbro anything blocking more progress here?
Yes, I need to fix this: EspressoSystems/HotShot#3068. I've taken a few missteps on that, but I'm making progress.
Force-pushed from 5d6ef56 to 246e4cb
Force-pushed from 37483de to b9d4f48
sequencer/src/block/payload.rs
Outdated
```rust
for tx in txs.into_iter() {
    block_size += tx.payload().len() as u64;
```
This is not quite right, as the payload contains other metadata, like a transaction table for each namespace, that could cause it to exceed the max block size even if the sum of transaction payloads doesn't.

Not sure of the best solution, maybe @ggutoski has some ideas. One idea could be to add a `len()` function to `NamespaceInfo`, which would return something like `TxTableEntry::byte_len() + tx_table.len() + tx_bodies.len()` (the first term is for the one word used to encode the length of the tx table). Then we would basically loop until `namespaces.values().map(NamespaceInfo::len).sum() > chain_config.max_block_size()`.
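For illustration, a minimal, self-contained sketch of that idea. The `NamespaceInfo` fields and the `TxTableEntry::byte_len()` helper here are assumptions pieced together from this thread, not the crate's actual API:

```rust
// Simplified stand-ins for the real sequencer types, guessed from this thread.
struct TxTableEntry;

impl TxTableEntry {
    // One tx-table word; the real entry width may differ.
    const fn byte_len() -> usize {
        std::mem::size_of::<u32>()
    }
}

struct NamespaceInfo {
    tx_table: Vec<u8>,  // serialized tx table for this namespace
    tx_bodies: Vec<u8>, // concatenated tx payloads for this namespace
}

impl NamespaceInfo {
    /// Bytes this namespace contributes to the payload: one word encoding
    /// the tx-table length, plus the table itself, plus the tx bodies.
    fn len(&self) -> usize {
        TxTableEntry::byte_len() + self.tx_table.len() + self.tx_bodies.len()
    }
}
```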
I'm currently rewriting everything in `block`. I can definitely accommodate this. For this PR I suggest you do as little as possible to get it working.

It should be possible to keep a running count of the block byte length for each new transaction that includes tx table sizes, then just stop when we hit the limit. (Note that a single large tx could cause a lot of wasted block space if it happens to be the one that triggers `break`. We might need to do something more intelligent in the future.)

Happy to hop on a call to nail this down.
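A minimal sketch of that running-count idea, using plain `Vec<u8>` payloads instead of the real `Transaction` type and omitting the tx-table terms for brevity:

```rust
// Stop as soon as the running total would pass the limit. As noted above,
// breaking on one large tx can strand unused block capacity.
fn take_while_fits(txs: Vec<Vec<u8>>, max_block_size: u64) -> Vec<Vec<u8>> {
    let mut block_size = 0u64;
    let mut included = Vec::new();
    for tx in txs {
        // The real code would also charge tx table sizes here.
        let new_size = block_size + tx.len() as u64;
        if new_size > max_block_size {
            break;
        }
        block_size = new_size;
        included.push(tx);
    }
    included
}
```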
Now that I think of it, maybe this is easy. Just do

`block_size += tx.payload.len() + [size of a tx table entry];`
Like so?

`tx.payload().len() + size_of::<TxTableEntry>()`
Yes, actually. (Sorry, I could have clarified that for you!)

You still need to account for the tx table header in each namespace. That's an additional `size_of::<TxTableEntry>()` bytes per namespace. So I guess at the end you could do something like `block_size += namespaces.len() * size_of::<TxTableEntry>()`.
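Putting both terms from this exchange together, a hedged stand-in sketch (`TxTableEntry` is a placeholder width here, not the crate's real type):

```rust
use std::mem::size_of;

type TxTableEntry = u32;

fn total_block_size(tx_payload_lens: &[usize], num_namespaces: usize) -> usize {
    // Each tx costs its payload bytes plus one tx-table entry.
    let per_tx: usize = tx_payload_lens
        .iter()
        .map(|len| len + size_of::<TxTableEntry>())
        .sum();
    // One extra tx-table header word per namespace, added once at the end.
    per_tx + num_namespaces * size_of::<TxTableEntry>()
}
```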
That's a tiny amount of data, but it could in principle push you over the limit. You could account for it on-the-fly by doing `block_size += size_of::<TxTableEntry>()` each time you encounter a new namespace. Unfortunately that's wrapped up in `update_namespace_with_tx`, so you might need to hack that method to get what you want. I suggest you don't worry about ugly code for this PR, though, because it will all be replaced soon.
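A toy version of what that hack could look like. This is a sketch only; the real `update_namespace_with_tx` has a different signature and types:

```rust
use std::collections::HashMap;
use std::mem::size_of;

type TxTableEntry = u32; // placeholder for the real table-entry type

fn update_namespace_with_tx(
    namespaces: &mut HashMap<u64, Vec<u8>>,
    ns_id: u64,
    tx_payload: &[u8],
    block_size: &mut u64,
) {
    let ns = namespaces.entry(ns_id).or_insert_with(|| {
        // First tx seen for this namespace: charge its tx-table header word.
        *block_size += size_of::<TxTableEntry>() as u64;
        Vec::new()
    });
    // Every tx costs its payload bytes plus one tx-table entry.
    *block_size += tx_payload.len() as u64 + size_of::<TxTableEntry>() as u64;
    ns.extend_from_slice(tx_payload);
}
```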
I like it!
sequencer/src/block/payload.rs
Outdated
```rust
// block_size is updated when we encounter new namespaces
Payload::<TableWord>::update_namespace_with_tx(&mut namespaces, tx, &mut block_size);

if block_size >= chain_config.max_block_size() {
```
If we break here, we have already added the new transaction that pushes us over the limit. I think somehow we need to account for all the new size, including from `update_namespace_with_tx`, before pushing the new tx.
Ah yes. Hmm, it's a complicated bit of spaghetti to try to reason about.
The only thing I can think of is just to check the hashmap and see if the namespace of the current tx is present or not. It's redundant because `update_namespace_with_tx` does that check again, but if @ggutoski is going to give all this an overhaul anyway, maybe it doesn't matter.
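A sketch of that pre-insertion check with the same stand-in types as above: compute the prospective size before committing the tx, charging the namespace's tx-table header only if the namespace isn't in the map yet. The function name and signature are hypothetical:

```rust
use std::collections::HashMap;
use std::mem::size_of;

type TxTableEntry = u32; // placeholder for the real table-entry type

fn would_fit(
    namespaces: &HashMap<u64, Vec<u8>>,
    ns_id: u64,
    tx_payload_len: u64,
    block_size: u64,
    max_block_size: u64,
) -> bool {
    // Every tx costs its payload plus one tx-table entry.
    let mut added = tx_payload_len + size_of::<TxTableEntry>() as u64;
    if !namespaces.contains_key(&ns_id) {
        // New namespace: one extra word for its tx-table header.
        added += size_of::<TxTableEntry>() as u64;
    }
    block_size + added <= max_block_size
}
```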
fixed here 🤞 b849d44
Omit transactions that exceed max_block_size from payload. Use mocked `max_block_size` for now.

* add test `enforce_max_block_size`. This hasn't really been tested yet, but it is how I plan on testing this.
* remove hotshot-web-server
Force-pushed from 03299ab to b235c42
Cool, block size handling seems correct now.
Omit transactions that exceed max_block_size from payload. Use mocked `max_block_size` for now.

Depends on: EspressoSystems/HotShot#3096
Related to: EspressoSystems/hotshot-builder-core#122