example: add factory mode and pglite persistence #123
Conversation
646bf15 to 674a348
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (3)
examples/cli/indexers/2-starknet.indexer.ts (3)
Line range hint 51-56: Remove commented code

The commented writer code should either be implemented or removed. Commented code can lead to confusion and maintenance issues.

- // const { writer } = useSink({ context });
-
- // writer.insert([{
- //   number: header?.blockNumber.toString(),
- //   hash: header?.blockHash,
- // }])
+ // TODO: Implement data writing logic
Line range hint 65-82: Implement proper migration handling

Direct table creation in the run:before hook is not ideal for production. Consider implementing proper migration handling using Drizzle's migration tools.

Consider moving these table definitions to proper migration files and using Drizzle's migration system. This will provide better version control and safer schema updates.

Example structure:

migrations/
  001_initial_schema.sql
  002_add_indexes.sql
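For reference, a minimal sketch of what the migration-based setup could look like, assuming the better-sqlite3 driver already listed in the CLI's runtime dependencies; the database path and migrations folder are illustrative, not taken from this PR:

import Database from "better-sqlite3";
import { drizzle } from "drizzle-orm/better-sqlite3";
import { migrate } from "drizzle-orm/better-sqlite3/migrator";

// Open (or create) the local database file used by the indexer.
const sqlite = new Database("./.data/indexer.db");
const db = drizzle(sqlite);

// Apply the SQL migrations generated by drizzle-kit, replacing the ad-hoc
// CREATE TABLE statements currently issued in the run:before hook.
migrate(db, { migrationsFolder: "./migrations" });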
Line range hint 74-82: Add indexes for better query performance

The filters table would benefit from additional indexes for common query patterns.

  CREATE TABLE IF NOT EXISTS filters (
    id TEXT NOT NULL,
    filter TEXT NOT NULL,
    from_block INTEGER NOT NULL,
    to_block INTEGER,
-   PRIMARY KEY (id, from_block)
+   PRIMARY KEY (id, from_block),
+   INDEX idx_from_block (from_block),
+   INDEX idx_to_block (to_block)
  );
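Note that the inline INDEX clause in the suggestion above is MySQL syntax; SQLite and Postgres only accept indexes created with separate CREATE INDEX statements. A hedged sketch of a portable variant, assuming a better-sqlite3 store and an illustrative database path:

import Database from "better-sqlite3";

const db = new Database("./.data/indexer.db");

// Create the table with its composite primary key, then add the two
// secondary indexes as separate statements so the SQL stays portable.
db.exec(`
  CREATE TABLE IF NOT EXISTS filters (
    id TEXT NOT NULL,
    filter TEXT NOT NULL,
    from_block INTEGER NOT NULL,
    to_block INTEGER,
    PRIMARY KEY (id, from_block)
  );
  CREATE INDEX IF NOT EXISTS idx_filters_from_block ON filters (from_block);
  CREATE INDEX IF NOT EXISTS idx_filters_to_block ON filters (to_block);
`);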
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (5)
.gitignore (1 hunks)
change/apibara-5146fc27-9558-48ae-819b-7000820431b8.json (1 hunks)
examples/cli/indexers/2-starknet.indexer.ts (4 hunks)
examples/cli/indexers/3-starknet-factory.indexer.ts (1 hunks)
packages/cli/src/rollup/config.ts (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- change/apibara-5146fc27-9558-48ae-819b-7000820431b8.json
- .gitignore
🔇 Additional comments (1)
examples/cli/indexers/3-starknet-factory.indexer.ts (1)
21-24: Verify contract address and event selector
The hardcoded contract address and event selector should be verified to ensure they match the intended factory contract.
✅ Verification successful
Contract address and event selector implementation is correct
The implementation is verified to be correct because:
- The contract address is used consistently and exclusively in this factory indexer
- The PAIR_CREATED selector is properly generated using Starknet's hash utility: hash.getSelectorFromName("PairCreated")
- The event selector is used appropriately in the filter configuration
- The factory pattern correctly processes these events to create new filters for swap events
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any other usage of this contract address to verify consistency
rg "0x00dad44c139a476c7a17fc8141e6db680e9abc9f56fe249a105094c44382c2fd"
Length of output: 205
Script:
#!/bin/bash
# Search for any event selector constants or related configuration
rg "PAIR_CREATED" -A 2
Length of output: 533
Script:
#!/bin/bash
# Look for any factory-related constants or configuration files
fd -e ts -e js -e json "factory" --exec cat {}
Length of output: 1553
  async transform({ block, endCursor }) {
    const logger = useLogger();
    logger.log("Transforming ", endCursor?.orderKey);
  },
});
Implement data transformation logic
The transform function currently only logs the cursor without processing the events. Consider implementing the actual transformation logic for the swap events.
Would you like help implementing the transformation logic for swap events?
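A minimal sketch of what that transformation could look like, kept separate from the PR's code; the event shape and row fields below are assumptions and must be checked against the actual Swap event ABI:

// Hypothetical minimal event shape; the real type comes from the indexer SDK.
type EventLike = {
  address?: string;
  transactionHash?: string;
  data?: string[];
};

// Map raw swap events to plain rows ready for insertion into a sink.
export function toSwapRows(events: EventLike[]) {
  return events.flatMap((event) => {
    if (!event.address || !event.data?.length) {
      return []; // skip malformed events instead of throwing
    }
    return [
      {
        pair: event.address,
        txHash: event.transactionHash ?? null,
        rawData: event.data, // decode per the Swap ABI before persisting
      },
    ];
  });
}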
const poolEvents = (events ?? []).flatMap((event) => {
  const pairAddress = event.data?.[2];

  logger.log(`Factory: PairAddress - ${pairAddress}`);
  return {
    address: pairAddress,
    keys: [SWAP],
    includeReceipt: false,
  };
});
Add error handling for missing event data
The code assumes event.data[2] exists without validation. Consider adding null checks and error handling.
const poolEvents = (events ?? []).flatMap((event) => {
+ if (!event.data || event.data.length < 3) {
+ logger.warn('Invalid event data structure');
+ return [];
+ }
const pairAddress = event.data[2];
Committable suggestion skipped: line range outside the PR's diff.
@@ -12,7 +12,7 @@ import { appConfig } from "./plugins/config";
 import { esmShim } from "./plugins/esm-shim";
 import { indexers } from "./plugins/indexers";

-const runtimeDependencies = ["better-sqlite3"];
+const runtimeDependencies = ["better-sqlite3", "@electric-sql/pglite"];
💡 Codebase verification
Missing required dependency @electric-sql/pglite
The dependency @electric-sql/pglite is listed in runtimeDependencies but is not declared in the package.json dependencies. This needs to be added to ensure proper package installation and functionality.
- Add @electric-sql/pglite to the dependencies section in packages/cli/package.json
🔗 Analysis chain
Verify pglite dependency declaration
The addition of @electric-sql/pglite looks correct, but ensure it's properly declared in package.json.
Let me check the package.json in the cli package directory since the file under review is in packages/cli.
Let me check if it might be in devDependencies or if there are any other references to this package in package.json files.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if dependency is declared in package.json
jq '.dependencies["@electric-sql/pglite"]' package.json
Length of output: 62
Script:
#!/bin/bash
# Check if dependency is declared in cli package.json
jq '.dependencies["@electric-sql/pglite"]' packages/cli/package.json
Length of output: 75
Script:
#!/bin/bash
# Check both dependencies and devDependencies in cli package.json
echo "CLI package.json contents:"
cat packages/cli/package.json
# Search for any package.json files referencing pglite
echo -e "\nSearching for pglite in all package.json files:"
rg -l "@electric-sql/pglite" "**/package.json"
Length of output: 3396
const persistDatabase = drizzle("./.persistence", {
  logger: true,
🛠️ Refactor suggestion
Avoid hardcoding persistence path
The persistence path should be configurable through runtime configuration rather than hardcoded.
- const persistDatabase = drizzle("./.persistence", {
+ const persistDatabase = drizzle(runtimeConfig.persistencePath || "./.persistence", {
Committable suggestion skipped: line range outside the PR's diff.
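One hedged way to make this configurable without touching the call site much; PERSISTENCE_PATH and resolvePersistencePath are hypothetical names introduced here for illustration:

// Resolve the persistence directory from the environment, falling back to
// the default currently hardcoded in the example.
export function resolvePersistencePath(fallback = "./.persistence"): string {
  return process.env.PERSISTENCE_PATH ?? fallback;
}

// Usage with the call under review:
// const persistDatabase = drizzle(resolvePersistencePath(), { logger: true });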
return {
  address: pairAddress,
  keys: [SWAP],
  includeReceipt: false,
Not needed anymore.
includeReceipt: false, |
});
return {
  filter: {
    header: "always",
header: "always", |
Leave this to the default value or the stream will receive all headers, even if there is no data.
},
async transform({ block, endCursor }) {
  const logger = useLogger();
  logger.log("Transforming ", endCursor?.orderKey);
Maybe add a for loop over the events to print the contract address and the tx hash? That way we get better output.
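A sketch of that suggestion; the shortAddress helper mirrors the one defined in the example, and the event field names are assumptions here:

const shortAddress = (addr?: string) =>
  addr ? `${addr.slice(0, 6)}...${addr.slice(-4)}` : "";

type LoggedEvent = { address?: string; transactionHash?: string };

// Print the emitting contract and transaction hash for each event so the
// example produces more informative output than the cursor alone.
export function logEvents(
  events: LoggedEvent[],
  log: (...args: unknown[]) => void = console.log,
) {
  for (const event of events) {
    log(
      "Event Address : ",
      shortAddress(event.address),
      "| Txn hash :",
      shortAddress(event.transactionHash),
    );
  }
}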
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
examples/cli/indexers/3-starknet-factory.indexer.ts (3)
9-10: Add input validation for shortAddress utility

The utility function should validate the input address format to ensure it meets the expected hexadecimal string pattern.

  const shortAddress = (addr?: string) =>
-   addr ? `${addr.slice(0, 6)}...${addr.slice(-4)}` : "";
+   addr && addr.match(/^0x[a-fA-F0-9]+$/)
+     ? `${addr.slice(0, 6)}...${addr.slice(-4)}`
+     : "";
14-18: Consider externalizing configuration values

The streamUrl, finality, and startingCursor values should be configurable through environment variables or the runtime config to support different environments.

- streamUrl: "https://starknet.preview.apibara.org",
- finality: "accepted",
- startingCursor: {
-   orderKey: 800_000n,
- },
+ streamUrl: runtimeConfig.streamUrl ?? "https://starknet.preview.apibara.org",
+ finality: runtimeConfig.finality ?? "accepted",
+ startingCursor: runtimeConfig.startingCursor ?? {
+   orderKey: 800_000n,
+ },
12-65: Consider implementing retry mechanisms and circuit breakers

The indexer should be resilient to network issues and API failures. Consider implementing:
- Retry mechanisms for failed API calls
- Circuit breakers to prevent overwhelming the Starknet API
- Backoff strategies for rate limiting
This will improve the indexer's reliability in production environments. Would you like help implementing these resilience patterns?
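A generic sketch of the retry part, not wired into the indexer runtime; attempt counts and delays are illustrative:

// Retry an async operation with exponential backoff before giving up.
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Backoff: 500ms, 1s, 2s, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}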
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
examples/cli/indexers/3-starknet-factory.indexer.ts (1 hunks)
🔇 Additional comments (3)
examples/cli/indexers/3-starknet-factory.indexer.ts (3)
20-20: Remove unnecessary header configuration

As noted in previous reviews, setting header: "always" is unnecessary as it will cause the stream to receive all headers, even without data.

- header: "always",
32-43: Enhance error handling and data validation
The factory function should implement comprehensive error handling and data validation:
- Validate event structure
- Handle potential undefined values
- Log errors appropriately
const poolEvents = (events ?? []).flatMap((event) => {
+ if (!event || !event.data) {
+ logger.warn('Invalid event structure received');
+ return [];
+ }
+
const pairAddress = event.data?.[2];
+ if (!pairAddress || typeof pairAddress !== 'string') {
+ logger.warn('Invalid pair address in event data');
+ return [];
+ }
logger.log(
"Factory: PairAddress : ",
`\x1b[35m${pairAddress}\x1b[0m`,
);
return {
address: pairAddress,
keys: [SWAP],
};
});
Likely invalid or redundant comment.
50-63: Implement event data transformation and persistence
The transform function currently only logs events without processing them. Consider:
- Implementing actual event data transformation
- Adding persistence logic
- Including error handling
Here's a suggested structure:
async transform({ block, endCursor }) {
const logger = useLogger();
const { events } = block;
+ try {
+ // Process events in batches
+ const transformedEvents = await Promise.all(events.map(async (event) => {
+ if (!event || !event.data) {
+ logger.warn('Invalid event data structure');
+ return null;
+ }
+
+ // Log event details
+ logger.log(
+ "Event Address : ",
+ shortAddress(event.address),
+ "| Txn hash :",
+ shortAddress(event.transactionHash),
+ );
+
+ // Transform event data based on event type
+ if (event.keys.includes(SWAP)) {
+ return transformSwapEvent(event);
+ } else if (event.keys.includes(PAIR_CREATED)) {
+ return transformPairCreatedEvent(event);
+ }
+ return null;
+ }));
+
+ // Filter out null values and persist transformed events
+ const validEvents = transformedEvents.filter(Boolean);
+ if (validEvents.length > 0) {
+ await persistEvents(validEvents);
+ }
+ } catch (error) {
+ logger.error('Error processing events:', error);
+ throw error;
+ }
}
Would you like me to provide implementations for the transformSwapEvent, transformPairCreatedEvent, and persistEvents functions?
Likely invalid or redundant comment.
@electric-sql/pglite as external dependency in rollup for cli