
Feat: Replace Class Hash issue resolved, gas fees bug resolved, cap removed for pending tick #409

Merged
merged 26 commits into from
Dec 23, 2024
Changes from 16 commits
26 commits
4afd4fe
fix: replace class issue resolved
Dec 3, 2024
d059615
gas price logs added for info
Dec 4, 2024
ae70f76
added a few comments
Dec 4, 2024
d54da43
removing comments
Dec 4, 2024
1e62ba1
config updated
Dec 5, 2024
66cedbe
feat: cap limit removed from pending tick
Dec 9, 2024
73b7a91
chore: main merged
Dec 9, 2024
1c3ccc1
fix: chain_config.yaml changed back to devnet.yaml
Dec 9, 2024
a5d4d61
changelog: udpated
Dec 9, 2024
b22e9af
chore: linting and test updated
Dec 9, 2024
d0ee79d
fix: devnet tests fixed
Dec 9, 2024
d4fe5aa
chore: linting
Dec 9, 2024
24e537e
chore: comments resolved
Dec 13, 2024
182cf00
Merge branch 'main' into fix/replace-class-issue
Mohiiit Dec 13, 2024
60af04c
Merge branch 'main' into fix/replace-class-issue
jbcaron Dec 18, 2024
8517cee
Update crates/client/block_production/src/lib.rs
jbcaron Dec 18, 2024
cbd5770
chore: docs, linting and formatting
Dec 19, 2024
35e0285
Merge branch 'main' into fix/replace-class-issue
Mohiiit Dec 19, 2024
11a9b7a
chore: linting and formatting
Dec 19, 2024
28aea87
chore: linting and formatting
Dec 19, 2024
11ec8ff
chore: linting and formatting
Dec 19, 2024
d216c63
Merge branch 'main' into fix/replace-class-issue
Mohiiit Dec 20, 2024
68b4837
refactor: block closing logic abstracted, tick reset whenever block c…
Dec 23, 2024
de8c0bf
Merge branch 'main' into fix/replace-class-issue
Mohiiit Dec 23, 2024
752b5af
chore: pending tick counter updated only when pending tick block is n…
Dec 23, 2024
9c53e2d
chore: pending tick set to 0 when block closed
Dec 23, 2024
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,8 @@

## Next release

- feat: block resource cap removed from the pending tick
- fix: replace class hash issue resolved + gas fees issue resolved
- fix(block_production): continue pending block now reexecutes the previous transactions
- feat(services): reworked Madara services for better cancellation control
- feat: fetch eth/strk price and sync strk gas price
@@ -70,7 +70,10 @@ pub(crate) fn state_map_to_state_diff(
let mut replaced_classes = Vec::new();
for (contract_address, new_class_hash) in diff.class_hashes {
let replaced = if let Some(on_top_of) = on_top_of {
backend.get_contract_class_hash_at(on_top_of, &contract_address.to_felt())?.is_some()
match backend.get_contract_class_hash_at(on_top_of, &contract_address.to_felt())? {
Some(class_hash) => class_hash != new_class_hash.to_felt(),
None => false,
}
} else {
// Executing genesis block: nothing being redefined here
false
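The hunk above is the replace-class fix itself: previously, any contract in `diff.class_hashes` that already had a class hash on the parent state was reported as a replaced class; the fix only counts it when the prior hash actually differs from the new one. A minimal sketch of that decision, with a plain `Option<u64>` standing in for the backend lookup and the felt class hashes (both are assumptions for illustration):

```rust
// Sketch of the fixed "replaced class" check. `previous_class_hash` stands in
// for backend.get_contract_class_hash_at(on_top_of, contract_address).
fn is_replaced(previous_class_hash: Option<u64>, new_class_hash: u64) -> bool {
    match previous_class_hash {
        // Only a *different* prior class hash counts as a replacement.
        Some(prev) => prev != new_class_hash,
        // No prior class at this address: a fresh deploy, not a replacement.
        None => false,
    }
}

fn main() {
    assert!(is_replaced(Some(1), 2)); // class actually changed
    assert!(!is_replaced(Some(7), 7)); // same hash: not a replacement
    assert!(!is_replaced(None, 7)); // fresh deploy: not a replacement
}
```

The old `is_some()` check collapsed the first two cases, which is what misclassified same-hash deployments as replacements.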
81 changes: 35 additions & 46 deletions crates/client/block_production/src/lib.rs
@@ -18,7 +18,7 @@
use crate::close_block::close_block;
use crate::metrics::BlockProductionMetrics;
use blockifier::blockifier::transaction_executor::{TransactionExecutor, BLOCK_STATE_ACCESS_ERR};
use blockifier::bouncer::{BouncerWeights, BuiltinCount};
use blockifier::bouncer::BouncerWeights;
use blockifier::transaction::errors::TransactionExecutionError;
use finalize_execution_state::StateDiffToStateMapError;
use mc_block_import::{BlockImportError, BlockImporter};
@@ -77,6 +77,15 @@ pub enum Error {
#[error("State diff error when continuing the pending block: {0:#}")]
PendingStateDiff(#[from] StateDiffToStateMapError),
}

struct ContinueBlockResult {
Member: could you document this struct for rust doc?

Contributor Author: added
state_diff: StateDiff,
visited_segments: VisitedSegments,
bouncer_weights: BouncerWeights,
stats: ContinueBlockStats,
block_now_full: bool,
}

/// The block production task consumes transactions from the mempool in batches.
/// This is to allow optimistic concurrency. However, the block may get full during batch execution,
/// and we need to re-add the transactions back into the mempool.
@@ -165,11 +174,9 @@ impl<Mempool: MempoolProvider> BlockProductionTask<Mempool> {
}

#[tracing::instrument(skip(self), fields(module = "BlockProductionTask"))]
fn continue_block(
&mut self,
bouncer_cap: BouncerWeights,
) -> Result<(StateDiff, VisitedSegments, BouncerWeights, ContinueBlockStats), Error> {
fn continue_block(&mut self, bouncer_cap: BouncerWeights) -> Result<ContinueBlockResult, Error> {
let mut stats = ContinueBlockStats::default();
let mut block_now_full = false;

self.executor.bouncer.bouncer_config.block_max_capacity = bouncer_cap;
let batch_size = self.backend.chain_config().execution_batch_size;
@@ -201,7 +208,7 @@ impl<Mempool: MempoolProvider> BlockProductionTask<Mempool> {
// Execute the transactions.
let all_results = self.executor.execute_txs(&txs_to_process_blockifier);
// When the bouncer cap is reached, blockifier will return fewer results than what we asked for.
let block_now_full = all_results.len() < txs_to_process_blockifier.len();
block_now_full = all_results.len() < txs_to_process_blockifier.len();

txs_to_process_blockifier.drain(..all_results.len()); // remove the used txs

@@ -273,54 +280,24 @@ impl<Mempool: MempoolProvider> BlockProductionTask<Mempool> {
stats.n_re_added_to_mempool
);

Ok((state_diff, visited_segments, bouncer_weights, stats))
Ok(ContinueBlockResult { state_diff, visited_segments, bouncer_weights, stats, block_now_full })
}

/// Each "tick" of the block time updates the pending block but only with the appropriate fraction of the total bouncer capacity.
#[tracing::instrument(skip(self), fields(module = "BlockProductionTask"))]
pub fn on_pending_time_tick(&mut self) -> Result<(), Error> {
pub async fn on_pending_time_tick(&mut self) -> Result<(), Error> {
let current_pending_tick = self.current_pending_tick;
let n_pending_ticks_per_block = self.backend.chain_config().n_pending_ticks_per_block();
let config_bouncer = self.backend.chain_config().bouncer_config.block_max_capacity;
if current_pending_tick == 0 {
return Ok(());
}

// Reduced bouncer capacity for the current pending tick

// reduced_gas = gas * current_pending_tick/n_pending_ticks_per_block
// - we're dealing with integers here so prefer having the division last
// - use u128 here because the multiplication would overflow
// - div by zero: see [`ChainConfig::precheck_block_production`]
let reduced_cap =
|v: usize| (v as u128 * current_pending_tick as u128 / n_pending_ticks_per_block as u128) as usize;

let gas = reduced_cap(config_bouncer.gas);
let frac = current_pending_tick as f64 / n_pending_ticks_per_block as f64;
tracing::debug!("begin pending tick {current_pending_tick}/{n_pending_ticks_per_block}, proportion for this tick: {frac:.2}, gas limit: {gas}/{}", config_bouncer.gas);

let bouncer_cap = BouncerWeights {
builtin_count: BuiltinCount {
add_mod: reduced_cap(config_bouncer.builtin_count.add_mod),
bitwise: reduced_cap(config_bouncer.builtin_count.bitwise),
ecdsa: reduced_cap(config_bouncer.builtin_count.ecdsa),
ec_op: reduced_cap(config_bouncer.builtin_count.ec_op),
keccak: reduced_cap(config_bouncer.builtin_count.keccak),
mul_mod: reduced_cap(config_bouncer.builtin_count.mul_mod),
pedersen: reduced_cap(config_bouncer.builtin_count.pedersen),
poseidon: reduced_cap(config_bouncer.builtin_count.poseidon),
range_check: reduced_cap(config_bouncer.builtin_count.range_check),
range_check96: reduced_cap(config_bouncer.builtin_count.range_check96),
},
gas,
message_segment_length: reduced_cap(config_bouncer.message_segment_length),
n_events: reduced_cap(config_bouncer.n_events),
n_steps: reduced_cap(config_bouncer.n_steps),
state_diff_size: reduced_cap(config_bouncer.state_diff_size),
};
// Use full bouncer capacity
let bouncer_cap = self.backend.chain_config().bouncer_config.block_max_capacity;

let start_time = Instant::now();
let (state_diff, visited_segments, bouncer_weights, stats) = self.continue_block(bouncer_cap)?;

let ContinueBlockResult { state_diff, visited_segments, bouncer_weights, stats, block_now_full } =
self.continue_block(bouncer_cap)?;
if stats.n_added_to_block > 0 {
tracing::info!(
"🧮 Executed and added {} transaction(s) to the pending block at height {} - {:?}",
@@ -330,6 +307,13 @@ impl<Mempool: MempoolProvider> BlockProductionTask<Mempool> {
);
}

// Check if block is full
if block_now_full {
tracing::info!("Resource limits reached, closing block early");
self.on_block_time().await?;
return Ok(());
}

// Store pending block
// todo, prefer using the block import pipeline?
self.backend.store_block(
@@ -353,8 +337,13 @@ impl<Mempool: MempoolProvider> BlockProductionTask<Mempool> {

Member: i think you should split the on_block_time function in two: on_block_time and close_block and on_pending_block_tick should call close_block instead of on_block_time

Contributor Author: did something very similar, lmk wdyt?
// Complete the block with full bouncer capacity.
let start_time = Instant::now();
let (mut new_state_diff, visited_segments, _weights, _stats) =
self.continue_block(self.backend.chain_config().bouncer_config.block_max_capacity)?;
let ContinueBlockResult {
state_diff: mut new_state_diff,
visited_segments,
bouncer_weights: _weights,
stats: _stats,
block_now_full: _block_now_full,
} = self.continue_block(self.backend.chain_config().bouncer_config.block_max_capacity)?;

// SNOS requirement: For blocks >= 10, the hash of the block 10 blocks prior
// at address 0x1 with the block number as the key
@@ -473,7 +462,7 @@ impl<Mempool: MempoolProvider> BlockProductionTask<Mempool> {
continue
}

if let Err(err) = self.on_pending_time_tick() {
if let Err(err) = self.on_pending_time_tick().await {
tracing::error!("Pending block update task has errored: {err:#}");
}
self.current_pending_tick += 1;
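The `on_pending_time_tick` hunk above removes the proportional bouncer capacity: each pending tick used to expose only `tick / n_ticks_per_block` of the full block capacity, computed in `u128` with the division last so the multiplication cannot overflow `usize`. A sketch of the removed scaling, for reference (the function shape here is an illustration, not the exact removed closure):

```rust
// The proportional cap this PR removes: tick k of n gets k/n of the budget.
// u128 intermediate avoids overflow of `full * current_tick` on 64-bit usize;
// dividing last keeps integer truncation to a single rounding step.
fn reduced_cap(full: usize, current_tick: u64, n_ticks_per_block: u64) -> usize {
    (full as u128 * current_tick as u128 / n_ticks_per_block as u128) as usize
}

fn main() {
    // Tick 3 of 10 exposed 30% of a 1_000_000 gas budget.
    assert_eq!(reduced_cap(1_000_000, 3, 10), 300_000);
    // The final tick exposed the full budget, matching the new behavior
    // where every tick now uses block_max_capacity directly.
    assert_eq!(reduced_cap(1_000_000, 10, 10), 1_000_000);
}
```

With the cap gone, a pending tick that fills the block triggers the new `block_now_full` early close instead of being throttled.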
35 changes: 25 additions & 10 deletions crates/client/devnet/src/lib.rs
@@ -385,8 +385,11 @@ mod tests {

assert_eq!(res.class_hash, calculated_class_hash);

chain.block_production.set_current_pending_tick(1);
chain.block_production.on_pending_time_tick().unwrap();
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async {
chain.block_production.set_current_pending_tick(1);
chain.block_production.on_pending_time_tick().await.unwrap();
});
Mohiiit marked this conversation as resolved.

let block = chain.backend.get_block(&BlockId::Tag(BlockTag::Pending)).unwrap().unwrap();

@@ -454,8 +457,11 @@ mod tests {
.unwrap();
tracing::debug!("tx hash: {:#x}", transfer_txn.transaction_hash);

chain.block_production.set_current_pending_tick(chain.backend.chain_config().n_pending_ticks_per_block());
chain.block_production.on_pending_time_tick().unwrap();
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async {
chain.block_production.set_current_pending_tick(chain.backend.chain_config().n_pending_ticks_per_block());
chain.block_production.on_pending_time_tick().await.unwrap();
});

// =====================================================================================

@@ -486,8 +492,11 @@ mod tests {

let res = chain.sign_and_add_deploy_account_tx(deploy_account_txn, &account).unwrap();

chain.block_production.set_current_pending_tick(chain.backend.chain_config().n_pending_ticks_per_block());
chain.block_production.on_pending_time_tick().unwrap();
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async {
chain.block_production.set_current_pending_tick(chain.backend.chain_config().n_pending_ticks_per_block());
chain.block_production.on_pending_time_tick().await.unwrap();
});

assert_eq!(res.contract_address, account.address);

@@ -547,8 +556,11 @@ mod tests {

tracing::info!("tx hash: {:#x}", result.transaction_hash);

chain.block_production.set_current_pending_tick(1);
chain.block_production.on_pending_time_tick().unwrap();
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async {
chain.block_production.set_current_pending_tick(1);
chain.block_production.on_pending_time_tick().await.unwrap();
});

let block = chain.backend.get_block(&BlockId::Tag(BlockTag::Pending)).unwrap().unwrap();

@@ -753,8 +765,11 @@ mod tests {
.unwrap();

std::thread::sleep(max_age); // max age reached
chain.block_production.set_current_pending_tick(1);
chain.block_production.on_pending_time_tick().unwrap();
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async {
chain.block_production.set_current_pending_tick(1);
chain.block_production.on_pending_time_tick().await.unwrap();
});

let block = chain.backend.get_block(&BlockId::Tag(BlockTag::Pending)).unwrap().unwrap();

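The devnet tests above drive the now-async `on_pending_time_tick` from synchronous test code by creating a `tokio::runtime::Runtime` and calling `block_on`. A dependency-free sketch of what that blocking bridge does, using only the standard library and a toy ready-immediately future as a stand-in for the real tick (the real tests use tokio, which also drives pending I/O futures):

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A do-nothing waker: sufficient here because the toy future below is
// ready on the first poll and never needs to be woken.
fn noop_waker() -> Waker {
    fn noop(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Minimal stand-in for tokio's Runtime::block_on: poll the future on the
// current thread until it resolves.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

// Hypothetical stand-in for BlockProductionTask::on_pending_time_tick.
async fn on_pending_time_tick() -> Result<u32, String> {
    Ok(42)
}

fn main() {
    let out = block_on(async { on_pending_time_tick().await.unwrap() });
    assert_eq!(out, 42);
}
```

Constructing one runtime per test (rather than annotating with `#[tokio::test]`) keeps the surrounding synchronous test setup unchanged, at the cost of a little boilerplate per call site.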
2 changes: 1 addition & 1 deletion crates/client/eth/src/client.rs
@@ -181,7 +181,7 @@ pub mod eth_client_getter_test {
const L2_STATE_ROOT: &str = "1456190284387746219409791261254265303744585499659352223397867295223408682130";

lazy_static::lazy_static! {
static ref FORK_URL: String = std::env::var("ETH_FORK_URL").expect("ETH_FORK_URL not set");
static ref FORK_URL: String = std::env::var("ETH_FORK_URL").unwrap_or("https://eth-mainnet.g.alchemy.com/v2/svwTUdkIFaUU8uh3Uutn-0-UzsM9ee6q".to_string());
antiyro marked this conversation as resolved.
}

const PORT_RANGE: Range<u16> = 19500..20000;
6 changes: 6 additions & 0 deletions crates/client/mempool/src/l1.rs
@@ -7,8 +7,14 @@ use std::time::SystemTime;

#[derive(Clone)]
pub struct GasPriceProvider {
/// Gas prices protected by a mutex
gas_prices: Arc<Mutex<GasPrices>>,
last_update: Arc<Mutex<SystemTime>>,
/// Using Relaxed ordering for atomic operations since:
/// 1. Gas prices are updated frequently (every few ms)
/// 2. Slight inconsistencies in gas price visibility between threads are acceptable
/// 3. Operations are independent and don't require synchronization with other memory operations
/// 4. Provides minimal performance overhead compared to stricter ordering options
gas_price_sync_enabled: Arc<AtomicBool>,
data_gas_price_sync_enabled: Arc<AtomicBool>,
strk_gas_price_sync_enabled: Arc<AtomicBool>,
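The new doc comment on `GasPriceProvider` justifies `Ordering::Relaxed` for the enable/disable flags: they are independent booleans with no synchronization requirements on surrounding memory, and stale reads are tolerable. A minimal sketch of that pattern, shared-flag toggling across threads via `Arc<AtomicBool>` (the field name is taken from the diff; the worker logic is an illustration):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // Mirrors the gas_price_sync_enabled field: a cheap cross-thread flag.
    let gas_price_sync_enabled = Arc::new(AtomicBool::new(true));

    let flag = Arc::clone(&gas_price_sync_enabled);
    let worker = thread::spawn(move || {
        // Relaxed load: a slightly stale value is acceptable, and spawning
        // the thread already makes the initial `true` visible to it.
        flag.load(Ordering::Relaxed)
    });

    let seen = worker.join().unwrap();
    assert!(seen);

    // Relaxed store: no other memory needs to be published with the flag.
    gas_price_sync_enabled.store(false, Ordering::Relaxed);
    assert!(!gas_price_sync_enabled.load(Ordering::Relaxed));
}
```

Relaxed only forgoes ordering relative to *other* memory operations; each individual load and store on the flag is still atomic, which is all these on/off switches need.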