Handle LogPoller edge case involving backfill followed by an error #11298
Conversation
I see that you haven't updated any CHANGELOG files. Would it make sense to do so?
// Tests an edge case where a re-org happens when the number of blocks in the db so far
// is less than the re-org depth:
//
// Orig: 1 <- 2 <- 3 <- 4 <- 5 <- (6) ( lp starts on block 6 )
lp should never start on an unfinalized block though?
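For context, here is a rough sketch of the scenario the quoted test comment describes, using hypothetical types and helpers (storedBlock, findRewindTarget) rather than anything from the LogPoller itself: when the re-org goes deeper than the handful of blocks the db holds, the rewind can only go back as far as the earliest stored block.

```go
// Illustration only, not code from this PR. The db knows just blocks 5 and 6
// (lp started on block 6), but the re-org replaced the chain further back than
// that, so no stored block matches and the rewind bottoms out at block 5.
package main

import "fmt"

type storedBlock struct {
	Number int64
	Hash   string
}

// findRewindTarget walks back through the blocks we have saved until one of them
// still matches the (post-re-org) chain. If none match, the best we can do is the
// earliest block we ever stored.
func findRewindTarget(saved []storedBlock, chainHashAt func(int64) string) int64 {
	for i := len(saved) - 1; i >= 0; i-- {
		if chainHashAt(saved[i].Number) == saved[i].Hash {
			return saved[i].Number // deepest block we can still trust
		}
	}
	return saved[0].Number // re-org deeper than everything we have stored
}

func main() {
	saved := []storedBlock{{5, "old5"}, {6, "old6"}}
	chain := func(n int64) string { return fmt.Sprintf("new%d", n) } // every hash changed
	fmt.Println(findRewindTarget(saved, chain))                      // prints 5
}
```

The real test asserts the poller's behavior in this situation; the sketch only shows why the db's shallow history bounds how far back it can verify.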
@@ -679,7 +679,7 @@ func (lp *logPoller) backfill(ctx context.Context, start, end int64) error {
 	}

 	lp.lggr.Debugw("Backfill found logs", "from", from, "to", to, "logs", len(gethLogs), "blocks", blocks)
-	err = lp.orm.InsertLogs(convertLogs(gethLogs, blocks, lp.lggr, lp.ec.ConfiguredChainID()), pg.WithParentCtx(ctx))
+	err = lp.orm.InsertLogsWithBlock(convertLogs(gethLogs, blocks, lp.lggr, lp.ec.ConfiguredChainID()), blocks[len(blocks)-1], pg.WithParentCtx(ctx))
👍
If there is a backfill followed by an error, logs get written to the db but no blocks. This will make the logpoller (or backup log poller) rely on the chain next time, instead of the db, to determine lastFinalizedBlock, which could result in a gap in the logs processed. Fix this by writing the last block of the backfill to the db along with the logs. Its lastFinalizedBlock field will be set to its own block number + 1, so after a crash it should start by pulling the logs from that block.
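A minimal sketch of the idea behind the change, using simplified stand-in types (Log, Block, orm) rather than the actual chainlink ORM: the backfilled logs and the last block of the range are written in a single transaction, so an error immediately afterwards cannot leave logs in the db without a corresponding block.

```go
// Package backfillsketch illustrates writing logs and the last backfilled block
// atomically. Everything here is a simplified stand-in, not chainlink code.
package backfillsketch

import (
	"context"
	"database/sql"
)

// Log and Block are cut-down stand-ins for the real log poller types.
type Log struct {
	BlockNumber int64
	Data        []byte
}

type Block struct {
	Number               int64
	FinalizedBlockNumber int64
}

type orm struct{ db *sql.DB }

// insertLogsWithBlock writes the logs and the block in one transaction, so either
// both land in the db or neither does. An error right after the backfill can then
// no longer leave logs behind without a block to resume from.
func (o *orm) insertLogsWithBlock(ctx context.Context, logs []Log, block Block) error {
	tx, err := o.db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op after a successful Commit

	for _, l := range logs {
		if _, err := tx.ExecContext(ctx,
			`INSERT INTO logs (block_number, data) VALUES ($1, $2)`,
			l.BlockNumber, l.Data); err != nil {
			return err
		}
	}
	if _, err := tx.ExecContext(ctx,
		`INSERT INTO blocks (number, finalized_block_number) VALUES ($1, $2)`,
		block.Number, block.FinalizedBlockNumber); err != nil {
		return err
	}
	return tx.Commit()
}
```

The key design point is atomicity: if the block insert went into a separate transaction, the original failure mode (logs without blocks) would still be possible.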
* develop: (56 commits)
  [TT-367] [TT-745] Quick and Dirty OCRv2 Soak Test (#11487)
  [FUN-990] s4 observability improvements (#11512)
  fix health monitoring (#11558)
  Removes Optimism Goerli from Scheduled Tests (#11559)
  bump Foundry to the December release (#11540)
  Standardize LP filter logging (#11515)
  Change keepers to use the default contract transmitter (#11308)
  bump toml/v2 and prometheus to latest patch (#11541)
  Remove big from core utils (#11511)
  Handle edge case involving blocks not being found in the db (#11298)
  [DEPLOY-178]: Adds Scroll L2EP Contracts (#11405)
  disable kaniko fallback, increase deploy wait timeout (#11548)
  Use multiple EL clients with ocrv2 median smoke test (#11399)
  Remove core utils dependencies from common (#11425)
  [BCF-2760] Flakey test detection improvements (#11470)
  go.mods: rm libp2p; rm btcd replace (#11502)
  wrap devspace commands (#11530)
  small improvements based on comments (#11491)
  (test): Remove unnecessary fuzzing from Functions OnTokenTransfer tests (#11517)
  core/scripts/common: rm ava-labs/coreth; lint (#11451)
  ...
When there is a backfill on a brand new chain (no blocks yet in the db) followed by an error, logs get written to the db but no blocks. This will make the logpoller (or backup log poller) rely on the chain next time, instead of the db, to determine lastFinalizedBlock, which could result in a gap in the logs processed. Fix this by writing the last block of the backfill to the db along with the logs. Its lastFinalizedBlock field will be set to its own block number + 1, so after a crash it should start by pulling the logs from that block.
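For illustration, a hedged sketch of the resume logic this description is getting at, with hypothetical helpers (latestDBBlock, latestFinalizedOnChain) standing in for the real queries: the poller prefers the last block saved in the db and only falls back to the chain when the db has no blocks at all, as on a brand new chain. Because the fix saves the last backfilled block together with its logs, the db path now covers everything already processed.

```go
// Sketch only: shows why having a block saved alongside the backfilled logs
// matters for picking the restart point. Not the actual logpoller code.
package main

import (
	"context"
	"errors"
	"fmt"
)

var errNoRows = errors.New("no blocks in db")

// resumeFrom returns the block number the poller should restart from: right after
// the last block in the db if there is one, otherwise whatever the chain reports.
func resumeFrom(
	ctx context.Context,
	latestDBBlock func(context.Context) (int64, error),
	latestFinalizedOnChain func(context.Context) (int64, error),
) (int64, error) {
	if n, err := latestDBBlock(ctx); err == nil {
		// The backfill saved its last block with the logs, so everything up to
		// n has already been processed; resume just after it.
		return n + 1, nil
	} else if !errors.Is(err, errNoRows) {
		return 0, err
	}
	// Nothing in the db yet (brand new chain): fall back to the chain.
	return latestFinalizedOnChain(ctx)
}

func main() {
	ctx := context.Background()
	fromDB := func(context.Context) (int64, error) { return 1000, nil }    // last saved block
	emptyDB := func(context.Context) (int64, error) { return 0, errNoRows } // brand new chain
	onChain := func(context.Context) (int64, error) { return 950, nil }    // chain-reported block

	n, _ := resumeFrom(ctx, fromDB, onChain)
	fmt.Println(n) // 1001: db wins, no gap
	n, _ = resumeFrom(ctx, emptyDB, onChain)
	fmt.Println(n) // 950: only here does the chain decide
}
```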