🐛 Bug Report

📝 Description

Hey, I am running the zkSync local node and hit an issue. When a request asks for a large number of logs, the node replies with an error message that suggests a smaller block range. Indexing frameworks rely on this hint, and other providers such as Alchemy and Tenderly do something similar, so that clients can skip over events faster instead of having to scan block by block. A large eth_getLogs request yields:
{
"jsonrpc": "2.0",
"error": {
"code": -32602,
"message": "Query returned more than 10000 results. Try with this block range [0x0, 0x30f]."
},
"id": 74
}
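For illustration, this is roughly how an indexing framework consumes that hint. `parse_suggested_range` is a hypothetical helper sketched for this report, not part of any particular framework:

```python
import re

def parse_suggested_range(message: str):
    """Extract the block range hinted at in a 'too many results' error.

    Returns (from_block, to_block) as ints, or None if no hint is present.
    """
    match = re.search(r"\[(0x[0-9a-fA-F]+),\s*(0x[0-9a-fA-F]+)\]", message)
    if match is None:
        return None
    return int(match.group(1), 16), int(match.group(2), 16)

# The error above suggests [0x0, 0x30f], i.e. blocks 0 through 783.
print(parse_suggested_range(
    "Query returned more than 10000 results. Try with this block range [0x0, 0x30f]."
))
```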
Which makes sense; the indexing framework extracts that suggested range and does the next request with it, but this then errors with:
{
"jsonrpc": "2.0",
"error": {
"code": -32008,
"message": "Response is too big",
"data": "Exceeded max limit of 10485760"
},
"id": 74
}
This means indexing systems break, as they expect the suggested range to always work; indexing frameworks should not have to change their core code to handle this edge case.
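For context, the workaround frameworks would otherwise need is a defensive bisection of the range whenever the node rejects a request, sketched below. `fetch_logs` and `RpcError` are stand-ins for whatever RPC call and error type a given framework uses:

```python
class RpcError(Exception):
    """Stand-in for the error raised when the node rejects a request."""

def fetch_logs_safe(fetch_logs, from_block: int, to_block: int):
    """Fetch logs over [from_block, to_block], halving the range on failure.

    fetch_logs(from_block, to_block) is assumed to return a list of logs
    or raise RpcError (too many results / response too big). This is the
    kind of core-code change the report argues frameworks should not need.
    """
    try:
        return fetch_logs(from_block, to_block)
    except RpcError:
        if from_block == to_block:
            raise  # a single block is still too big; nothing left to split
        mid = (from_block + to_block) // 2
        return (fetch_logs_safe(fetch_logs, from_block, mid)
                + fetch_logs_safe(fetch_logs, mid + 1, to_block))
```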
🔄 Reproduction Steps
Find an event that has been emitted more than 10,000 times and whose logs produce a response payload larger than 10,485,760 bytes, then request its logs and follow the suggested block range.
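The triggering request is an eth_getLogs call over a wide block range; a representative body (the address and range here are placeholders for illustration, not values from the original report) looks like:

```json
{
  "jsonrpc": "2.0",
  "id": 74,
  "method": "eth_getLogs",
  "params": [{
    "fromBlock": "0x0",
    "toBlock": "latest",
    "address": "0x0000000000000000000000000000000000000000"
  }]
}
```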
🤔 Expected Behavior
The suggested block range should take the response payload size into account, so that following the hint always returns successfully from the node.
😯 Current Behavior
The suggested block range given back by eth_getLogs does not work, meaning indexing systems are unable to index that event.