diff --git a/charts/das/Chart.yaml b/charts/das/Chart.yaml
index ef7f1c2..857d347 100644
--- a/charts/das/Chart.yaml
+++ b/charts/das/Chart.yaml
@@ -7,6 +7,6 @@ maintainers:
 type: application
-version: 0.5.8
+version: 0.5.9
-appVersion: "v3.1.2-309340a"
+appVersion: "v3.2.0-f847be0"
diff --git a/charts/das/README.md b/charts/das/README.md
index acc9228..5b380fb 100644
--- a/charts/das/README.md
+++ b/charts/das/README.md
@@ -270,92 +270,92 @@ The following table lists the exhaustive configurable parameters that can be app
 Option | Description | Default
 --- | --- | ---
 `conf.dump` | print out currently active configuration file | None
-`conf.env-prefix` | string environment variables with given prefix will be loaded as configuration values | None
-`conf.file` | strings name of configuration file | None
-`conf.reload-interval` | duration how often to reload configuration (0=disable periodic reloading) | None
-`conf.s3.access-key` | string S3 access key | None
-`conf.s3.bucket` | string S3 bucket | None
-`conf.s3.object-key` | string S3 object key | None
-`conf.s3.region` | string S3 region | None
-`conf.s3.secret-key` | string S3 secret key | None
-`conf.string` | string configuration as JSON string | None
+`conf.env-prefix` | string environment variables with given prefix will be loaded as configuration values | None
+`conf.file` | strings name of configuration file | None
+`conf.reload-interval` | duration how often to reload configuration (0=disable periodic reloading) | None
+`conf.s3.access-key` | string S3 access key | None
+`conf.s3.bucket` | string S3 bucket | None
+`conf.s3.object-key` | string S3 object key | None
+`conf.s3.region` | string S3 region | None
+`conf.s3.secret-key` | string S3 secret key | None
+`conf.string` | string configuration as JSON string | None
 `data-availability.disable-signature-checking` | disables signature checking on Data Availability Store requests (DANGEROUS, FOR TESTING ONLY) | None
 `data-availability.enable` | enable Anytrust Data Availability mode | `true`
-`data-availability.extra-signature-checking-public-key` | string public key to use to validate Data Availability Store requests in addition to the Sequencer's public key determined using sequencer-inbox-address, can be a file or the hex-encoded public key beginning with 0x; useful for testing | None
-`data-availability.key.key-dir` | string the directory to read the bls keypair ('das_bls.pub' and 'das_bls') from; if using any of the DAS storage types exactly one of key-dir or priv-key must be specified | None
-`data-availability.key.priv-key` | string the base64 BLS private key to use for signing DAS certificates; if using any of the DAS storage types exactly one of key-dir or priv-key must be specified | None
-`data-availability.local-cache.capacity` | int Maximum number of entries (up to 64KB each) to store in the cache. | `20000`
+`data-availability.extra-signature-checking-public-key` | string public key to use to validate Data Availability Store requests in addition to the Sequencer's public key determined using sequencer-inbox-address, can be a file or the hex-encoded public key beginning with 0x; useful for testing | None
+`data-availability.key.key-dir` | string the directory to read the bls keypair ('das_bls.pub' and 'das_bls') from; if using any of the DAS storage types exactly one of key-dir or priv-key must be specified | None
+`data-availability.key.priv-key` | string the base64 BLS private key to use for signing DAS certificates; if using any of the DAS storage types exactly one of key-dir or priv-key must be specified | None
+`data-availability.local-cache.capacity` | int Maximum number of entries (up to 64KB each) to store in the cache. | `20000`
 `data-availability.local-cache.enable` | Enable local in-memory caching of sequencer batch data | None
-`data-availability.local-db-storage.base-table-size` | int BadgerDB option: sets the maximum size in bytes for LSM table or file in the base level | `2097152`
-`data-availability.local-db-storage.data-dir` | string directory in which to store the database | None
+`data-availability.local-db-storage.base-table-size` | int BadgerDB option: sets the maximum size in bytes for LSM table or file in the base level | `2097152`
+`data-availability.local-db-storage.data-dir` | string directory in which to store the database | None
 `data-availability.local-db-storage.discard-after-timeout` | discard data after its expiry timeout | None
 `data-availability.local-db-storage.enable` | !!!DEPRECATED, USE local-file-storage!!! enable storage/retrieval of sequencer batch data from a database on the local filesystem | None
-`data-availability.local-db-storage.num-compactors` | int BadgerDB option: Sets the number of compaction workers to run concurrently | `4`
-`data-availability.local-db-storage.num-level-zero-tables` | int BadgerDB option: sets the maximum number of Level 0 tables before compaction starts | `5`
-`data-availability.local-db-storage.num-level-zero-tables-stall` | int BadgerDB option: sets the number of Level 0 tables that once reached causes the DB to stall until compaction succeeds | `15`
-`data-availability.local-db-storage.num-memtables` | int BadgerDB option: sets the maximum number of tables to keep in memory before stalling | `5`
-`data-availability.local-db-storage.value-log-file-size` | int BadgerDB option: sets the maximum size of a single log file | `1073741823`
-`data-availability.local-file-storage.data-dir` | string local data directory | None
+`data-availability.local-db-storage.num-compactors` | int BadgerDB option: Sets the number of compaction workers to run concurrently | `4`
+`data-availability.local-db-storage.num-level-zero-tables` | int BadgerDB option: sets the maximum number of Level 0 tables before compaction starts | `5`
+`data-availability.local-db-storage.num-level-zero-tables-stall` | int BadgerDB option: sets the number of Level 0 tables that once reached causes the DB to stall until compaction succeeds | `15`
+`data-availability.local-db-storage.num-memtables` | int BadgerDB option: sets the maximum number of tables to keep in memory before stalling | `5`
+`data-availability.local-db-storage.value-log-file-size` | int BadgerDB option: sets the maximum size of a single log file | `1073741823`
+`data-availability.local-file-storage.data-dir` | string local data directory | None
 `data-availability.local-file-storage.enable` | enable storage/retrieval of sequencer batch data from a directory of files, one per batch | None
 `data-availability.local-file-storage.enable-expiry` | enable expiry of batches | None
-`data-availability.local-file-storage.max-retention` | duration store requests with expiry times farther in the future than max-retention will be rejected | `504h0m0s`
+`data-availability.local-file-storage.max-retention` | duration store requests with expiry times farther in the future than max-retention will be rejected | `504h0m0s`
 `data-availability.migrate-local-db-to-file-storage` | daserver will migrate all data on startup from local-db-storage to local-file-storage, then mark local-db-storage as unusable | None
 `data-availability.panic-on-error` | whether the Data Availability Service should fail immediately on errors (not recommended) | None
-`data-availability.parent-chain-connection-attempts` | int parent chain RPC connection attempts (spaced out at least 1 second per attempt, 0 to retry infinitely), only used in standalone daserver; when running as part of a node that node's parent chain configuration is used | `15`
-`data-availability.parent-chain-node-url` | string URL for parent chain node, only used in standalone daserver; when running as part of a node that node's L1 configuration is used | None
+`data-availability.parent-chain-connection-attempts` | int parent chain RPC connection attempts (spaced out at least 1 second per attempt, 0 to retry infinitely), only used in standalone daserver; when running as part of a node that node's parent chain configuration is used | `15`
+`data-availability.parent-chain-node-url` | string URL for parent chain node, only used in standalone daserver; when running as part of a node that node's L1 configuration is used | None
 `data-availability.redis-cache.enable` | enable Redis caching of sequencer batch data | None
-`data-availability.redis-cache.expiration` | duration Redis expiration | `1h0m0s`
-`data-availability.redis-cache.key-config` | string Redis key config | None
-`data-availability.redis-cache.url` | string Redis url | None
+`data-availability.redis-cache.expiration` | duration Redis expiration | `1h0m0s`
+`data-availability.redis-cache.key-config` | string Redis key config | None
+`data-availability.redis-cache.url` | string Redis url | None
 `data-availability.rest-aggregator.enable` | enable retrieval of sequencer batch data from a list of remote REST endpoints; if other DAS storage types are enabled, this mode is used as a fallback | None
-`data-availability.rest-aggregator.max-per-endpoint-stats` | int number of stats entries (latency and success rate) to keep for each REST endpoint; controls whether strategy is faster or slower to respond to changing conditions | `20`
-`data-availability.rest-aggregator.online-url-list` | string a URL to a list of URLs of REST das endpoints that is checked at startup; additive with the url option | None
-`data-availability.rest-aggregator.online-url-list-fetch-interval` | duration time interval to periodically fetch url list from online-url-list | `1h0m0s`
-`data-availability.rest-aggregator.simple-explore-exploit-strategy.exploit-iterations` | int number of consecutive GetByHash calls to the aggregator where each call will cause it to select from REST endpoints in order of best latency and success rate, before switching to explore mode | `1000`
-`data-availability.rest-aggregator.simple-explore-exploit-strategy.explore-iterations` | int number of consecutive GetByHash calls to the aggregator where each call will cause it to randomly select from REST endpoints until one returns successfully, before switching to exploit mode | `20`
-`data-availability.rest-aggregator.strategy` | string strategy to use to determine order and parallelism of calling REST endpoint URLs; valid options are 'simple-explore-exploit' | `simple-explore-exploit`
-`data-availability.rest-aggregator.strategy-update-interval` | duration how frequently to update the strategy with endpoint latency and error rate data | `10s`
-`data-availability.rest-aggregator.sync-to-storage.delay-on-error` | duration time to wait if encountered an error before retrying | `1s`
+`data-availability.rest-aggregator.max-per-endpoint-stats` | int number of stats entries (latency and success rate) to keep for each REST endpoint; controls whether strategy is faster or slower to respond to changing conditions | `20`
+`data-availability.rest-aggregator.online-url-list` | string a URL to a list of URLs of REST das endpoints that is checked at startup; additive with the url option | None
+`data-availability.rest-aggregator.online-url-list-fetch-interval` | duration time interval to periodically fetch url list from online-url-list | `1h0m0s`
+`data-availability.rest-aggregator.simple-explore-exploit-strategy.exploit-iterations` | uint32 number of consecutive GetByHash calls to the aggregator where each call will cause it to select from REST endpoints in order of best latency and success rate, before switching to explore mode | `1000`
+`data-availability.rest-aggregator.simple-explore-exploit-strategy.explore-iterations` | uint32 number of consecutive GetByHash calls to the aggregator where each call will cause it to randomly select from REST endpoints until one returns successfully, before switching to exploit mode | `20`
+`data-availability.rest-aggregator.strategy` | string strategy to use to determine order and parallelism of calling REST endpoint URLs; valid options are 'simple-explore-exploit' | `simple-explore-exploit`
+`data-availability.rest-aggregator.strategy-update-interval` | duration how frequently to update the strategy with endpoint latency and error rate data | `10s`
+`data-availability.rest-aggregator.sync-to-storage.delay-on-error` | duration time to wait if encountered an error before retrying | `1s`
 `data-availability.rest-aggregator.sync-to-storage.eager` | eagerly sync batch data to this DAS's storage from the rest endpoints, using L1 as the index of batch data hashes; otherwise only sync lazily | None
-`data-availability.rest-aggregator.sync-to-storage.eager-lower-bound-block` | uint when eagerly syncing, start indexing forward from this L1 block. Only used if there is no sync state | None
+`data-availability.rest-aggregator.sync-to-storage.eager-lower-bound-block` | uint when eagerly syncing, start indexing forward from this L1 block. Only used if there is no sync state | None
 `data-availability.rest-aggregator.sync-to-storage.ignore-write-errors` | log only on failures to write when syncing; otherwise treat it as an error | `true`
-`data-availability.rest-aggregator.sync-to-storage.parent-chain-blocks-per-read` | uint when eagerly syncing, max l1 blocks to read per poll | `100`
-`data-availability.rest-aggregator.sync-to-storage.retention-period` | duration period to request storage to retain synced data | `360h0m0s`
-`data-availability.rest-aggregator.sync-to-storage.state-dir` | string directory to store the sync state in, ie the block number currently synced up to, so that we don't sync from scratch each time | None
+`data-availability.rest-aggregator.sync-to-storage.parent-chain-blocks-per-read` | uint when eagerly syncing, max l1 blocks to read per poll | `100`
+`data-availability.rest-aggregator.sync-to-storage.retention-period` | duration period to request storage to retain synced data | `360h0m0s`
+`data-availability.rest-aggregator.sync-to-storage.state-dir` | string directory to store the sync state in, ie the block number currently synced up to, so that we don't sync from scratch each time | None
 `data-availability.rest-aggregator.sync-to-storage.sync-expired-data` | sync even data that is expired; needed for mirror configuration | `true`
-`data-availability.rest-aggregator.urls` | strings list of URLs including 'http://' or 'https://' prefixes and port numbers to REST DAS endpoints; additive with the online-url-list option | None
-`data-availability.rest-aggregator.wait-before-try-next` | duration time to wait until trying the next set of REST endpoints while waiting for a response; the next set of REST endpoints is determined by the strategy selected | `2s`
-`data-availability.s3-storage.access-key` | string S3 access key | None
-`data-availability.s3-storage.bucket` | string S3 bucket | None
+`data-availability.rest-aggregator.urls` | strings list of URLs including 'http://' or 'https://' prefixes and port numbers to REST DAS endpoints; additive with the online-url-list option | None
+`data-availability.rest-aggregator.wait-before-try-next` | duration time to wait until trying the next set of REST endpoints while waiting for a response; the next set of REST endpoints is determined by the strategy selected | `2s`
+`data-availability.s3-storage.access-key` | string S3 access key | None
+`data-availability.s3-storage.bucket` | string S3 bucket | None
 `data-availability.s3-storage.discard-after-timeout` | discard data after its expiry timeout | None
 `data-availability.s3-storage.enable` | enable storage/retrieval of sequencer batch data from an AWS S3 bucket | None
-`data-availability.s3-storage.object-prefix` | string prefix to add to S3 objects | None
-`data-availability.s3-storage.region` | string S3 region | None
-`data-availability.s3-storage.secret-key` | string S3 secret key | None
-`data-availability.sequencer-inbox-address` | string parent chain address of SequencerInbox contract | None
+`data-availability.s3-storage.object-prefix` | string prefix to add to S3 objects | None
+`data-availability.s3-storage.region` | string S3 region | None
+`data-availability.s3-storage.secret-key` | string S3 secret key | None
+`data-availability.sequencer-inbox-address` | string parent chain address of SequencerInbox contract | None
 `enable-rest` | enable the REST server listening on rest-addr and rest-port | None
 `enable-rpc` | enable the HTTP-RPC server listening on rpc-addr and rpc-port | None
-`log-level` | string log level, valid values are CRIT, ERROR, WARN, INFO, DEBUG, TRACE | `INFO`
-`log-type` | string log type (plaintext or json) | `plaintext`
+`log-level` | string log level, valid values are CRIT, ERROR, WARN, INFO, DEBUG, TRACE | `INFO`
+`log-type` | string log type (plaintext or json) | `plaintext`
 `metrics` | enable metrics | None
-`metrics-server.addr` | string metrics server address | `127.0.0.1`
-`metrics-server.port` | int metrics server port | `6070`
-`metrics-server.update-interval` | duration metrics server update interval | `3s`
+`metrics-server.addr` | string metrics server address | `127.0.0.1`
+`metrics-server.port` | int metrics server port | `6070`
+`metrics-server.update-interval` | duration metrics server update interval | `3s`
 `pprof` | enable pprof | None
-`pprof-cfg.addr` | string pprof server address | `127.0.0.1`
-`pprof-cfg.port` | int pprof server port | `6071`
-`rest-addr` | string REST server listening interface | `localhost`
-`rest-port` | uint REST server listening port | `9877`
-`rest-server-timeouts.idle-timeout` | duration the maximum amount of time to wait for the next request when keep-alives are enabled (http.Server.IdleTimeout) | `2m0s`
-`rest-server-timeouts.read-header-timeout` | duration the amount of time allowed to read the request headers (http.Server.ReadHeaderTimeout) | `30s`
-`rest-server-timeouts.read-timeout` | duration the maximum duration for reading the entire request (http.Server.ReadTimeout) | `30s`
-`rest-server-timeouts.write-timeout` | duration the maximum duration before timing out writes of the response (http.Server.WriteTimeout) | `30s`
-`rpc-addr` | string HTTP-RPC server listening interface | `localhost`
-`rpc-port` | uint HTTP-RPC server listening port | `9876`
-`rpc-server-body-limit` | int HTTP-RPC server maximum request body size in bytes; the default (0) uses geth's 5MB limit | None
-`rpc-server-timeouts.idle-timeout` | duration the maximum amount of time to wait for the next request when keep-alives are enabled (http.Server.IdleTimeout) | `2m0s`
-`rpc-server-timeouts.read-header-timeout` | duration the amount of time allowed to read the request headers (http.Server.ReadHeaderTimeout) | `30s`
-`rpc-server-timeouts.read-timeout` | duration the maximum duration for reading the entire request (http.Server.ReadTimeout) | `30s`
-`rpc-server-timeouts.write-timeout` | duration the maximum duration before timing out writes of the response (http.Server.WriteTimeout) | `30s`
+`pprof-cfg.addr` | string pprof server address | `127.0.0.1`
+`pprof-cfg.port` | int pprof server port | `6071`
+`rest-addr` | string REST server listening interface | `localhost`
+`rest-port` | uint REST server listening port | `9877`
+`rest-server-timeouts.idle-timeout` | duration the maximum amount of time to wait for the next request when keep-alives are enabled (http.Server.IdleTimeout) | `2m0s`
+`rest-server-timeouts.read-header-timeout` | duration the amount of time allowed to read the request headers (http.Server.ReadHeaderTimeout) | `30s`
+`rest-server-timeouts.read-timeout` | duration the maximum duration for reading the entire request (http.Server.ReadTimeout) | `30s`
+`rest-server-timeouts.write-timeout` | duration the maximum duration before timing out writes of the response (http.Server.WriteTimeout) | `30s`
+`rpc-addr` | string HTTP-RPC server listening interface | `localhost`
+`rpc-port` | uint HTTP-RPC server listening port | `9876`
+`rpc-server-body-limit` | int HTTP-RPC server maximum request body size in bytes; the default (0) uses geth's 5MB limit | None
+`rpc-server-timeouts.idle-timeout` | duration the maximum amount of time to wait for the next request when keep-alives are enabled (http.Server.IdleTimeout) | `2m0s`
+`rpc-server-timeouts.read-header-timeout` | duration the amount of time allowed to read the request headers (http.Server.ReadHeaderTimeout) | `30s`
+`rpc-server-timeouts.read-timeout` | duration the maximum duration for reading the entire request (http.Server.ReadTimeout) | `30s`
+`rpc-server-timeouts.write-timeout` | duration the maximum duration before timing out writes of the response (http.Server.WriteTimeout) | `30s`
 
 ## Notes
diff --git a/charts/nitro/Chart.yaml b/charts/nitro/Chart.yaml
index 5529ffa..352f8f1 100644
--- a/charts/nitro/Chart.yaml
+++ b/charts/nitro/Chart.yaml
@@ -7,6 +7,6 @@ maintainers:
 type: application
-version: 0.6.15
+version: 0.6.16
-appVersion: "v3.1.2-309340a"
+appVersion: "v3.2.0-f847be0"
diff --git a/charts/nitro/README.md b/charts/nitro/README.md
index 305a4dc..349e202 100644
--- a/charts/nitro/README.md
+++ b/charts/nitro/README.md
@@ -298,6 +298,7 @@ Option | Description | Default
 `execution.parent-chain-reader.subscribe-err-interval` | duration interval for subscribe error | `5m0s`
 `execution.parent-chain-reader.tx-timeout` | duration timeout when waiting for a transaction | `5m0s`
 `execution.parent-chain-reader.use-finality-data` | use l1 data about finalized/safe blocks | `true`
+`execution.recording-database.max-prepared` | int max references to store in the recording database | `1000`
 `execution.recording-database.trie-clean-cache` | int like trie-clean-cache for the separate, recording database (used for validation) | `16`
 `execution.recording-database.trie-dirty-cache` | int like trie-dirty-cache for the separate, recording database (used for validation) | `1024`
 `execution.rpc.allow-method` | strings list of whitelisted rpc methods | None
@@ -336,8 +337,9 @@ Option | Description | Default
 `execution.sequencer.queue-size` | int size of the pending tx queue | `1024`
 `execution.sequencer.queue-timeout` | duration maximum amount of time transaction can wait in queue | `12s`
 `execution.sequencer.sender-whitelist` | strings comma separated whitelist of authorized senders (if empty, everyone is allowed) | None
-`execution.stylus-target.amd64` | string stylus programs compilation target for amd64 linux | `x86_64-linux-unknown+sse4.2`
+`execution.stylus-target.amd64` | string stylus programs compilation target for amd64 linux | `x86_64-linux-unknown+sse4.2+lzcnt+bmi`
 `execution.stylus-target.arm64` | string stylus programs compilation target for arm64 linux | `arm64-linux-unknown+neon`
+`execution.stylus-target.extra-archs` | strings Comma separated list of extra architectures to cross-compile stylus program to and cache in wasm store (additionally to local target). Currently must include at least wavm. (supported targets: wavm, arm64, amd64, host) | `[wavm]`
 `execution.stylus-target.host` | string stylus programs compilation target for system other than 64-bit ARM or 64-bit x86 | None
 `execution.sync-monitor.finalized-block-wait-for-block-validator` | wait for block validator to complete before returning finalized block number | None
 `execution.sync-monitor.safe-block-wait-for-block-validator` | wait for block validator to complete before returning safe block number | None
@@ -375,13 +377,14 @@ Option | Description | Default
 `init.empty` | init with empty state | None
 `init.force` | if true: in case database exists init code will be reexecuted and genesis block compared to database | None
 `init.import-file` | string path for json data to import | None
+`init.import-wasm` | if set, import the wasm directory when downloading a database (contains executable code - only use with highly trusted source) | None
 `init.latest` | string if set, searches for the latest snapshot of the given kind (accepted values: "archive" | "pruned" | "genesis") | None
 `init.latest-base` | string base url used when searching for the latest | `https://snapshot.arbitrum.foundation/`
 `init.prune` | string pruning for a given use: "full" for full nodes serving RPC requests, or "validator" for validators | None
 `init.prune-bloom-size` | uint the amount of memory in megabytes to use for the pruning bloom filter (higher values prune better) | `2048`
 `init.prune-threads` | int the number of threads to use when pruning | `10`
 `init.prune-trie-clean-cache` | int amount of memory in megabytes to cache unchanged state trie nodes with when traversing state database during pruning | `600`
-`init.rebuild-local-wasm` | rebuild local wasm database on boot if needed (otherwise-will be done lazily) | `true`
+`init.rebuild-local-wasm` | string rebuild local wasm database on boot if needed (otherwise-will be done lazily). Three modes are supported "auto"- (enabled by default) if any previous rebuilding attempt was successful then rebuilding is disabled else continues to rebuild, "force"- force rebuilding which would commence rebuilding despite the status of previous attempts, "false"- do not rebuild on startup (default "auto") | None
 `init.recreate-missing-state-from` | uint block number to start recreating missing states from (0 = disabled) | None
 `init.reorg-to-batch` | int rolls back the blockchain to a specified batch number | `-1`
 `init.reorg-to-block-batch` | int rolls back the blockchain to the first batch at or before a given block number | `-1`
@@ -461,14 +464,16 @@ Option | Description | Default
 `node.batch-poster.reorg-resistance-margin` | duration do not post batch if its within this duration from layer 1 minimum bounds. Requires l1-block-bound option not be set to "ignore" | `10m0s`
 `node.batch-poster.use-access-lists` | post batches with access lists to reduce gas usage (disabled for L3s) | `true`
 `node.batch-poster.wait-for-max-delay` | wait for the max batch delay, even if the batch is full | None
+`node.block-validator.batch-cache-limit` | uint32 limit number of old batches to keep in block-validator | `20`
 `node.block-validator.current-module-root` | string current wasm module root ('current' read from chain, 'latest' from machines/latest dir, or provide hash) | `current`
 `node.block-validator.dangerous.reset-block-validation` | resets block-by-block validation, starting again at genesis | None
 `node.block-validator.enable` | enable block-by-block validation | None
 `node.block-validator.failure-is-fatal` | failing a validation is treated as a fatal error | `true`
-`node.block-validator.forward-blocks` | uint prepare entries for up to that many blocks ahead of validation (small footprint) | `1024`
+`node.block-validator.forward-blocks` | uint prepare entries for up to that many blocks ahead of validation (stores batch-copy per block) | `128`
 `node.block-validator.memory-free-limit` | string minimum free-memory limit after reaching which the blockvalidator pauses validation. Enabled by default as 1GB, to disable provide empty string | `default`
 `node.block-validator.pending-upgrade-module-root` | string pending upgrade wasm module root to additionally validate (hash, 'latest' or empty) | `latest`
 `node.block-validator.prerecorded-blocks` | uint record that many blocks ahead of validation (larger footprint) | `20`
+`node.block-validator.recording-iter-limit` | uint limit on block recordings sent per iteration | `20`
 `node.block-validator.redis-validation-client-config.create-streams` | create redis streams if it does not exist | `true`
 `node.block-validator.redis-validation-client-config.name` | string validation client name | `redis validation client`
 `node.block-validator.redis-validation-client-config.producer-config.check-pending-interval` | duration interval in which producer checks pending messages whether consumer processing them is inactive | `1s`
@@ -503,8 +508,8 @@ Option | Description | Default
 `node.data-availability.rest-aggregator.max-per-endpoint-stats` | int number of stats entries (latency and success rate) to keep for each REST endpoint; controls whether strategy is faster or slower to respond to changing conditions | `20`
 `node.data-availability.rest-aggregator.online-url-list` | string a URL to a list of URLs of REST das endpoints that is checked at startup; additive with the url option | None
 `node.data-availability.rest-aggregator.online-url-list-fetch-interval` | duration time interval to periodically fetch url list from online-url-list | `1h0m0s`
-`node.data-availability.rest-aggregator.simple-explore-exploit-strategy.exploit-iterations` | int number of consecutive GetByHash calls to the aggregator where each call will cause it to select from REST endpoints in order of best latency and success rate, before switching to explore mode | `1000`
-`node.data-availability.rest-aggregator.simple-explore-exploit-strategy.explore-iterations` | int number of consecutive GetByHash calls to the aggregator where each call will cause it to randomly select from REST endpoints until one returns successfully, before switching to exploit mode | `20`
+`node.data-availability.rest-aggregator.simple-explore-exploit-strategy.exploit-iterations` | uint32 number of consecutive GetByHash calls to the aggregator where each call will cause it to select from REST endpoints in order of best latency and success rate, before switching to explore mode | `1000`
+`node.data-availability.rest-aggregator.simple-explore-exploit-strategy.explore-iterations` | uint32 number of consecutive GetByHash calls to the aggregator where each call will cause it to randomly select from REST endpoints until one returns successfully, before switching to exploit mode | `20`
 `node.data-availability.rest-aggregator.strategy` | string strategy to use to determine order and parallelism of calling REST endpoint URLs; valid options are 'simple-explore-exploit' | `simple-explore-exploit`
 `node.data-availability.rest-aggregator.strategy-update-interval` | duration how frequently to update the strategy with endpoint latency and error rate data | `10s`
 `node.data-availability.rest-aggregator.sync-to-storage.delay-on-error` | duration time to wait if encountered an error before retrying | `1s`
@@ -725,15 +730,17 @@ Option | Description | Default
 `validation.api-auth` | validate is an authenticated API | `true`
 `validation.api-public` | validate is a public API | None
 `validation.arbitrator.execution-run-timeout` | duration timeout before discarding execution run | `15m0s`
-`validation.arbitrator.execution.cached-challenge-machines` | int how many machines to store in cache while working on a challenge (should be even) | `4`
+`validation.arbitrator.execution.cached-challenge-machines` | uint how many machines to store in cache while working on a challenge (should be even) | `4`
 `validation.arbitrator.execution.initial-steps` | uint initial steps between machines | `100000`
 `validation.arbitrator.output-path` | string path to write machines to | `./target/output`
+`validation.arbitrator.redis-validation-server-config.buffer-reads` | buffer reads (read next while working) | `true`
 `validation.arbitrator.redis-validation-server-config.consumer-config.keepalive-timeout` | duration timeout after which consumer is considered inactive if heartbeat wasn't performed | `5m0s`
 `validation.arbitrator.redis-validation-server-config.consumer-config.response-entry-timeout` | duration timeout for response entry | `1h0m0s`
 `validation.arbitrator.redis-validation-server-config.module-roots` | strings Supported module root hashes | None
 `validation.arbitrator.redis-validation-server-config.redis-url` | string url of redis server | None
 `validation.arbitrator.redis-validation-server-config.stream-prefix` | string prefix for stream name | None
 `validation.arbitrator.redis-validation-server-config.stream-timeout` | duration Timeout on polling for existence of redis streams | `10m0s`
+`validation.arbitrator.redis-validation-server-config.workers` | int number of validation threads (0 to use number of CPUs) | None
 `validation.arbitrator.workers` | int number of concurrent validation threads | None
 `validation.jit.cranelift` | use Cranelift instead of LLVM when validating blocks using the jit-accelerated block validator | `true`
 `validation.jit.wasm-memory-usage-limit` | int if memory used by a jit wasm exceeds this limit, a warning is logged | `4294967296`
diff --git a/charts/relay/Chart.yaml b/charts/relay/Chart.yaml
index 26bbc36..54a4534 100644
--- a/charts/relay/Chart.yaml
+++ b/charts/relay/Chart.yaml
@@ -7,6 +7,6 @@ maintainers:
 type: application
-version: 0.5.6
+version: 0.5.7
-appVersion: "v3.1.2-309340a"
+appVersion: "v3.2.0-f847be0"
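---

The das chart's README table above documents the flags of the standalone `daserver` binary. As a rough illustration of how a few of those flags compose, here is a minimal sketch of a standalone invocation; only the flag names and port defaults come from the table, while every concrete value (URL, address, paths) is a placeholder chosen for illustration:

```shell
# Hypothetical standalone daserver invocation -- flag names are taken from the
# README table above; all values below are illustrative placeholders.
daserver \
  --data-availability.parent-chain-node-url "wss://parent-chain.example/rpc" \
  --data-availability.sequencer-inbox-address "<SEQUENCER_INBOX_ADDRESS>" \
  --data-availability.key.key-dir /das/keys \
  --data-availability.local-file-storage.enable \
  --data-availability.local-file-storage.data-dir /das/data \
  --data-availability.local-cache.enable \
  --enable-rest --rest-addr 0.0.0.0 --rest-port 9877 \
  --enable-rpc --rpc-addr 0.0.0.0 --rpc-port 9876 \
  --log-level INFO --log-type json
```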