Limit number of concurrent events that are processed #72
Conversation
Maybe I should also measure the difference between the processing time with and without this change (using a number of transactions that the bridge could handle prior to this fix). I'll do that and post the results here.
CI is failing.
@@ -1,3 +1,4 @@
 module.exports = {
-  EXTRA_GAS_PERCENTAGE: 0.25
+  EXTRA_GAS_PERCENTAGE: 0.25,
+  MAX_CONCURRENT_EVENTS: 50
Perhaps we should run some performance tests to find which is the most efficient value for this. Maybe we can address this in a separate issue.
Cool! Let's merge this PR. I will create an issue to address this comment: #72 (comment)
This should fix, or at least improve, the problem described in #64.
The cause of the problem was that making a lot of concurrent `estimateGas` calls was causing the node to fail (I confirmed this by making a simple contract with a loop and calling it hundreds of times in parallel; let me know if you'd like to see the code). When one of these transactions failed, the whole list of events was discarded. When the same (or more) events were received later, the problem appeared again, and so on.

The fix is to limit the number of concurrent events that are processed (right now to 50). With this fix, we could process a block with 8000 txs without failures.
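For illustration, here is a minimal sketch of the chunking idea, assuming the events are processed by mapping them over an async handler with `Promise.all`. The `processEvents` wrapper, the `processEvent` callback, and the constants path are hypothetical names for the example; only `MAX_CONCURRENT_EVENTS` comes from this PR.

```js
// Minimal sketch, not the bridge's actual code. The require path and
// `processEvent` are assumptions; MAX_CONCURRENT_EVENTS is from the diff above.
const { MAX_CONCURRENT_EVENTS } = require('./utils/constants')

async function processEvents(events, processEvent) {
  const results = []
  // Work through the events in chunks so that at most
  // MAX_CONCURRENT_EVENTS estimateGas calls hit the node at once.
  for (let i = 0; i < events.length; i += MAX_CONCURRENT_EVENTS) {
    const chunk = events.slice(i, i + MAX_CONCURRENT_EVENTS)
    const chunkResults = await Promise.all(chunk.map(processEvent))
    results.push(...chunkResults)
  }
  return results
}
```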
Doing #63 might have fixed this too, but even with that I think it's a good idea to avoid choking the RPC endpoint. Besides, I'm not 100% sure that #63 would've been enough: if all the retries follow the same successive delays, the RPC would be choked again each time, albeit with fewer transactions.
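To make the concern concrete, here is a hypothetical retry-only sketch (not the actual #63 implementation): if every failed call waits through the same delay sequence, a burst of N concurrent failures turns into another burst of roughly N calls once the delay elapses, so the node can be overwhelmed again.

```js
// Hypothetical illustration of retry-only behaviour; the function name
// and delay values are assumptions, not code from this repository.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms))

async function estimateWithRetries(call, delays = [1000, 2000, 4000]) {
  for (const ms of delays) {
    try {
      return await call()
    } catch (e) {
      // Every concurrent failure waits the same amount of time...
      await delay(ms)
    }
  }
  // ...so the surviving calls all hit the RPC again at roughly the same moment.
  return call()
}
```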
Some time measurements: