SREP-502 update nitro to 3.1.0 #68

Merged 1 commit on Jul 18, 2024.
charts/das/Chart.yaml (2 additions, 2 deletions)

```diff
@@ -7,6 +7,6 @@ maintainers:
 
 type: application
 
-version: 0.5.3
+version: 0.5.4
 
-appVersion: "v3.0.3-3ecd01e"
+appVersion: "v3.1.0-7d1d84c"
```
charts/das/README.md (6 additions, 2 deletions)

@@ -289,14 +289,17 @@ Option | Description | Default
`data-availability.local-db-storage.base-table-size` | int BadgerDB option: sets the maximum size in bytes for LSM table or file in the base level | `2097152`
`data-availability.local-db-storage.data-dir` | string directory in which to store the database | None
`data-availability.local-db-storage.discard-after-timeout` | discard data after its expiry timeout | None
`data-availability.local-db-storage.enable` | enable storage/retrieval of sequencer batch data from a database on the local filesystem | None
`data-availability.local-db-storage.enable` | !!!DEPRECATED, USE local-file-storage!!! enable storage/retrieval of sequencer batch data from a database on the local filesystem | None
`data-availability.local-db-storage.num-compactors` | int BadgerDB option: Sets the number of compaction workers to run concurrently | `4`
`data-availability.local-db-storage.num-level-zero-tables` | int BadgerDB option: sets the maximum number of Level 0 tables before compaction starts | `5`
`data-availability.local-db-storage.num-level-zero-tables-stall` | int BadgerDB option: sets the number of Level 0 tables that once reached causes the DB to stall until compaction succeeds | `15`
`data-availability.local-db-storage.num-memtables` | int BadgerDB option: sets the maximum number of tables to keep in memory before stalling | `5`
`data-availability.local-db-storage.value-log-file-size` | int BadgerDB option: sets the maximum size of a single log file | `1073741823`
`data-availability.local-file-storage.data-dir` | string local data directory | None
`data-availability.local-file-storage.enable` | enable storage/retrieval of sequencer batch data from a directory of files, one per batch | None
`data-availability.local-file-storage.enable-expiry` | enable expiry of batches | None
`data-availability.local-file-storage.max-retention` | duration store requests with expiry times farther in the future than max-retention will be rejected | `504h0m0s`
`data-availability.migrate-local-db-to-file-storage` | daserver will migrate all data on startup from local-db-storage to local-file-storage, then mark local-db-storage as unusable | None
`data-availability.panic-on-error` | whether the Data Availability Service should fail immediately on errors (not recommended) | None
`data-availability.parent-chain-connection-attempts` | int parent chain RPC connection attempts (spaced out at least 1 second per attempt, 0 to retry infinitely), only used in standalone daserver; when running as part of a node that node's parent chain configuration is used | `15`
`data-availability.parent-chain-node-url` | string URL for parent chain node, only used in standalone daserver; when running as part of a node that node's L1 configuration is used | None
@@ -317,8 +320,9 @@ Option | Description | Default
`data-availability.rest-aggregator.sync-to-storage.eager-lower-bound-block` | uint when eagerly syncing, start indexing forward from this L1 block. Only used if there is no sync state | None
`data-availability.rest-aggregator.sync-to-storage.ignore-write-errors` | log only on failures to write when syncing; otherwise treat it as an error | `true`
`data-availability.rest-aggregator.sync-to-storage.parent-chain-blocks-per-read` | uint when eagerly syncing, max l1 blocks to read per poll | `100`
`data-availability.rest-aggregator.sync-to-storage.retention-period` | duration period to retain synced data (defaults to forever) | `2562047h47m16.854775807s`
`data-availability.rest-aggregator.sync-to-storage.retention-period` | duration period to request storage to retain synced data | `504h0m0s`
`data-availability.rest-aggregator.sync-to-storage.state-dir` | string directory to store the sync state in, ie the block number currently synced up to, so that we don't sync from scratch each time | None
`data-availability.rest-aggregator.sync-to-storage.sync-expired-data` | sync even data that is expired; needed for mirror configuration | `true`
`data-availability.rest-aggregator.urls` | strings list of URLs including 'http://' or 'https://' prefixes and port numbers to REST DAS endpoints; additive with the online-url-list option | None
`data-availability.rest-aggregator.wait-before-try-next` | duration time to wait until trying the next set of REST endpoints while waiting for a response; the next set of REST endpoints is determined by the strategy selected | `2s`
`data-availability.s3-storage.access-key` | string S3 access key | None
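The new options above replace the deprecated BadgerDB-backed `local-db-storage` with `local-file-storage` plus batch expiry, and add a one-time migration switch. A minimal sketch of how these flags might be combined, assuming the nesting follows the dotted flag names (the chart's actual values layout, paths, and retention policy are placeholders, not taken from this PR):

```yaml
# Hypothetical daserver configuration fragment; key nesting is inferred
# from the flag names in the table above and may not match the chart's
# real values schema.
data-availability:
  local-db-storage:
    enable: false                # deprecated, superseded by local-file-storage
  local-file-storage:
    enable: true
    data-dir: /data/das          # placeholder path
    enable-expiry: true
    max-retention: 504h0m0s      # reject store requests expiring beyond 21 days
  # one-time startup migration from the deprecated BadgerDB store;
  # local-db-storage is marked unusable afterwards
  migrate-local-db-to-file-storage: true
```

With this shape, an operator would run one startup with the migration flag set, then drop both the flag and the `local-db-storage` block.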
charts/nitro/Chart.yaml (2 additions, 2 deletions)

```diff
@@ -7,6 +7,6 @@ maintainers:
 
 type: application
 
-version: 0.6.4
+version: 0.6.5
 
-appVersion: "v3.0.3-3ecd01e"
+appVersion: "v3.1.0-7d1d84c"
```
charts/nitro/README.md (8 additions, 11 deletions)

@@ -377,6 +377,7 @@ Option | Description | Default
`init.reset-to-message` | int forces a reset to an old message height. Also set max-reorg-resequence-depth=0 to force re-reading messages | `-1`
`init.then-quit` | quit after init is done | None
`init.url` | string url to download initialization data - will poll if download fails | None
`init.validate-checksum` | if true: validate the checksum after downloading the snapshot | `true`
`ipc.path` | string Requested location to place the IPC endpoint. An empty path disables IPC. | None
`log-level` | string log level, valid values are CRIT, ERROR, WARN, INFO, DEBUG, TRACE | `INFO`
`log-type` | string log type (plaintext or json) | `plaintext`
@@ -445,6 +446,7 @@ Option | Description | Default
`node.batch-poster.redis-lock.my-id` | string this node's id prefix when acquiring the lock (optional) | None
`node.batch-poster.redis-lock.refresh-duration` | duration how long between consecutive calls to redis | `10s`
`node.batch-poster.redis-url` | string if non-empty, the Redis URL to store queued transactions in | None
`node.batch-poster.reorg-resistance-margin` | duration do not post a batch if it is within this duration of the layer 1 minimum bounds. Requires the l1-block-bound option not be set to "ignore" | `10m0s`
`node.batch-poster.use-access-lists` | post batches with access lists to reduce gas usage (disabled for L3s) | `true`
`node.batch-poster.wait-for-max-delay` | wait for the max batch delay, even if the batch is full | None
`node.block-validator.current-module-root` | string current wasm module root ('current' read from chain, 'latest' from machines/latest dir, or provide hash) | `current`
@@ -495,12 +497,13 @@ Option | Description | Default
`node.data-availability.rest-aggregator.sync-to-storage.eager-lower-bound-block` | uint when eagerly syncing, start indexing forward from this L1 block. Only used if there is no sync state | None
`node.data-availability.rest-aggregator.sync-to-storage.ignore-write-errors` | log only on failures to write when syncing; otherwise treat it as an error | `true`
`node.data-availability.rest-aggregator.sync-to-storage.parent-chain-blocks-per-read` | uint when eagerly syncing, max l1 blocks to read per poll | `100`
`node.data-availability.rest-aggregator.sync-to-storage.retention-period` | duration period to retain synced data (defaults to forever) | `2562047h47m16.854775807s`
`node.data-availability.rest-aggregator.sync-to-storage.retention-period` | duration period to request storage to retain synced data | `504h0m0s`
`node.data-availability.rest-aggregator.sync-to-storage.state-dir` | string directory to store the sync state in, ie the block number currently synced up to, so that we don't sync from scratch each time | None
`node.data-availability.rest-aggregator.sync-to-storage.sync-expired-data` | sync even data that is expired; needed for mirror configuration | `true`
`node.data-availability.rest-aggregator.urls` | strings list of URLs including 'http://' or 'https://' prefixes and port numbers to REST DAS endpoints; additive with the online-url-list option | None
`node.data-availability.rest-aggregator.wait-before-try-next` | duration time to wait until trying the next set of REST endpoints while waiting for a response; the next set of REST endpoints is determined by the strategy selected | `2s`
`node.data-availability.rpc-aggregator.assumed-honest` | int Number of assumed honest backends (H). If there are N backends, K=N+1-H valid responses are required to consider a Store request successful. | None
`node.data-availability.rpc-aggregator.backends` | string JSON RPC backend configuration | None
`node.data-availability.rpc-aggregator.backends` | backendConfigList JSON RPC backend configuration. This can be specified on the command line as a JSON array, eg: [{"url": "...", "pubkey": "..."},...], or as a JSON array in the config file. | `null`
`node.data-availability.rpc-aggregator.enable` | enable storage of sequencer batch data from a list of RPC endpoints; this should only be used by the batch poster and not in combination with other DAS storage types | None
`node.data-availability.rpc-aggregator.max-store-chunk-body-size` | int maximum HTTP POST body size to use for individual batch chunks, including JSON RPC overhead and an estimated overhead of 512B of headers | `524288`
`node.data-availability.sequencer-inbox-address` | string parent chain address of SequencerInbox contract | None
@@ -635,6 +638,7 @@ Option | Description | Default
`node.staker.enable` | enable validator | `true`
`node.staker.extra-gas` | uint use this much more gas than estimation says is necessary to post transactions | `50000`
`node.staker.gas-refunder-address` | string The gas refunder contract address (optional) | None
`node.staker.log-query-batch-size` | uint range to query from eth_getLogs | None
`node.staker.make-assertion-interval` | duration if configured with the makeNodes strategy, how often to create new assertions (bypassed in case of a dispute) | `1h0m0s`
`node.staker.only-create-wallet-contract` | only create smart wallet contract and exit | None
`node.staker.parent-chain-wallet.account` | string account to use | `is first account in keystore`
@@ -653,14 +657,6 @@ Option | Description | Default
`node.transaction-streamer.execute-message-loop-delay` | duration delay when polling calls to execute messages | `100ms`
`node.transaction-streamer.max-broadcaster-queue-size` | int maximum cache of pending broadcaster messages | `50000`
`node.transaction-streamer.max-reorg-resequence-depth` | int maximum number of messages to attempt to resequence on reorg (0 = never resequence, -1 = always resequence) | `1024`
`p2p.bootnodes` | strings P2P bootnodes | None
`p2p.bootnodes-v5` | strings P2P bootnodes v5 | None
`p2p.discovery-v4` | P2P discovery v4 | None
`p2p.discovery-v5` | P2P discovery v5 | None
`p2p.listen-addr` | string P2P listen address | None
`p2p.max-peers` | int P2P max peers | `50`
`p2p.no-dial` | P2P no dial | `true`
`p2p.no-discovery` | P2P no discovery | `true`
`parent-chain.blob-client.authorization` | string Value to send with the HTTP Authorization: header for Beacon REST requests, must include both scheme and scheme parameters | None
`parent-chain.blob-client.beacon-url` | string Beacon Chain RPC URL to use for fetching blobs (normally on port 3500) | None
`parent-chain.blob-client.blob-directory` | string Full path of the directory to save fetched blobs | None
@@ -677,7 +673,7 @@ Option | Description | Default
`parent-chain.id` | uint if set other than 0, will be used to validate database and L1 connection | None
`persistent.ancient` | string directory of ancient where the chain freezer can be opened | None
`persistent.chain` | string directory to store chain state | None
`persistent.db-engine` | string backing database implementation to use ('leveldb' or 'pebble') | `leveldb`
`persistent.db-engine` | string backing database implementation to use. If set to empty string the database type will be autodetected and if no pre-existing database is found it will default to creating new pebble database ('leveldb', 'pebble' or '' = auto-detect) | None
`persistent.global-config` | string directory to store global config | `.arbitrum`
`persistent.handles` | int number of file descriptor handles to use for the database | `512`
`persistent.log-dir` | string directory to store log file | None
@@ -717,6 +713,7 @@ Option | Description | Default
`validation.arbitrator.redis-validation-server-config.consumer-config.keepalive-timeout` | duration timeout after which consumer is considered inactive if heartbeat wasn't performed | `5m0s`
`validation.arbitrator.redis-validation-server-config.consumer-config.response-entry-timeout` | duration timeout for response entry | `1h0m0s`
`validation.arbitrator.redis-validation-server-config.module-roots` | strings Supported module root hashes | None
`validation.arbitrator.redis-validation-server-config.redis-url` | string url of redis server | None
`validation.arbitrator.redis-validation-server-config.stream-timeout` | duration Timeout on polling for existence of redis streams | `10m0s`
`validation.arbitrator.workers` | int number of concurrent validation threads | None
`validation.jit.cranelift` | use Cranelift instead of LLVM when validating blocks using the jit-accelerated block validator | `true`
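One notable change above is `node.data-availability.rpc-aggregator.backends`, which is now documented as a backendConfigList that can be given as a JSON array. A sketch of what a batch-poster node's settings might look like, assuming dotted-flag nesting; the URL and pubkey are placeholders, and K=N+1-H is the quorum formula from the assumed-honest description:

```yaml
# Hypothetical nitro node configuration fragment; nesting is inferred
# from the flag names and may not match the chart's real values schema.
node:
  data-availability:
    rpc-aggregator:
      # only the batch poster should enable this, per the option description
      enable: true
      # H=1: with N=1 backend, K = N+1-H = 1 valid response is required
      assumed-honest: 1
      # JSON-array form from the option description; values are placeholders
      backends: '[{"url": "https://das.example.com:9876", "pubkey": "<base64-bls-pubkey>"}]'
```

The same release also replaces the effectively-infinite default for `sync-to-storage.retention-period` with `504h0m0s` (21 days), matching the new `max-retention` default on the file-storage side.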
charts/relay/Chart.yaml (2 additions, 2 deletions)

```diff
@@ -7,6 +7,6 @@ maintainers:
 
 type: application
 
-version: 0.5.2
+version: 0.5.3
 
-appVersion: "v3.0.3-3ecd01e"
+appVersion: "v3.1.0-7d1d84c"
```