Horizontally scalable #1459
Comments
Is there an outline of how this works anywhere online? In particular, how would shape IDs be made consistent across servers? Do I understand correctly that shape IDs are currently unique to each server? Is the (future) intent to base shape IDs and offsets off of the Postgres global txid? Also, is there a general roadmap for load balancing & failover support? Thanks!
You'd need to make resolving shapes sticky to one instance; that's the main trick to keep the shapes consistent. You can now set up multiple Electric instances against a single DB. Load balancing shapes across multiple instances with stickiness and failover is now the job of an HTTP proxy in front of them (a sketch follows below). We're renaming …
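To make the stickiness idea concrete, here is a minimal sketch of a routing proxy in TypeScript (Node). It assumes two Electric instances on ports 3001 and 3002 and that a shape is identified by `table` and `where` query parameters; the port numbers, parameter names, and hash choice are illustrative assumptions, not the official setup. The point is just that every request for the same shape deterministically resolves to the same upstream instance:

```ts
// Minimal sticky shape-routing proxy sketch (not the official proxy).
// Assumes two Electric instances and `table`/`where` query params
// identifying the shape.
import http from "node:http";
import { createHash } from "node:crypto";

const upstreams = ["http://localhost:3001", "http://localhost:3002"];

// Derive a stable key from the parameters that define the shape, so
// every request for the same shape hashes to the same upstream.
function pickUpstream(url: URL): string {
  const shapeKey = `${url.searchParams.get("table") ?? ""}|${url.searchParams.get("where") ?? ""}`;
  const digest = createHash("sha256").update(shapeKey).digest();
  return upstreams[digest.readUInt32BE(0) % upstreams.length];
}

http
  .createServer((req, res) => {
    const url = new URL(req.url ?? "/", "http://proxy");
    const target = new URL(url.pathname + url.search, pickUpstream(url));

    // Forward the request and stream the upstream response back.
    const proxied = http.request(
      target,
      { method: req.method, headers: req.headers },
      (upstream) => {
        res.writeHead(upstream.statusCode ?? 502, upstream.headers);
        upstream.pipe(res);
      },
    );
    proxied.on("error", () => {
      // On upstream failure a real proxy would fail over to another
      // instance; clients may then need to re-sync (see below).
      res.writeHead(502).end();
    });
    req.pipe(proxied);
  })
  .listen(8080);
```

In production you would get the same effect from nginx/HAProxy-style consistent hashing on the shape-defining parameters, plus health checks for failover.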
Thank you for the quick reply (and for the cool platform). I saw those, and I've been able to start multiple servers. Sticky connections make sense. But in the case of failover, with the current system, a client would get a must-refetch and would have to resync from scratch. This is probably a dumb question, but I'll ask anyway: why does the shape handle include the current timestamp, and why can't the offset just be the current Postgres txid? Would something like that allow for load balancing without stickiness, and failover without requiring a resync?
Right now we query the source Postgres to populate shape logs on the server, and we can't query Postgres at an arbitrary transaction. This does mean that if a shape log disappears, clients need to re-sync. It's a trade-off: we gain simplicity in the initial implementation at the cost of re-syncing, with the caveat that re-syncing can be very fast (the client-side flow is sketched below). We are aware of different implementation strategies that could allow a fresh Electric instance to take over serving a shape without necessitating client re-sync. But these are more complex, so they're not something we're actively working on right now.
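For reference, here is a hedged TypeScript sketch of the client-side consequence being described: when the server no longer has the shape log (for example after failover to a fresh instance), the client drops its local copy and restarts the sync from the beginning. The endpoint path, query parameter names, and header names (`electric-handle`, `electric-offset`, `electric-up-to-date`, 409 for must-refetch) are assumptions for illustration and may not match the exact Electric HTTP API:

```ts
// Sketch of a shape sync loop that handles must-refetch by resyncing
// from scratch. Endpoint, params, and headers are assumed, not exact.
const BASE = "http://localhost:8080/v1/shape"; // hypothetical proxy URL

async function syncShape(table: string): Promise<unknown[]> {
  let offset = "-1"; // "-1" = request a fresh snapshot from the start
  let handle: string | null = null;
  let rows: unknown[] = [];

  while (true) {
    const params = new URLSearchParams({ table, offset });
    if (handle) params.set("handle", handle);

    const res = await fetch(`${BASE}?${params}`);

    if (res.status === 409) {
      // must-refetch: the instance serving us no longer has this shape
      // log, so discard local state and resync from the beginning.
      offset = "-1";
      handle = null;
      rows = [];
      continue;
    }

    handle = res.headers.get("electric-handle") ?? handle;
    offset = res.headers.get("electric-offset") ?? offset;
    rows.push(...(await res.json()));

    // A real client would switch to long-polling (live mode) here;
    // this sketch simply stops once it has caught up.
    if (res.headers.get("electric-up-to-date") !== null) return rows;
  }
}
```

The timestamp in the shape handle is what makes a freshly-created log on a new instance distinguishable from the old one, which is exactly why the 409/must-refetch path exists.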
Very cool you got that working! And yeah, resyncing is a bit annoying, but you can think of Electric as a fancy cache: our aim is to maximize cache hits, and there are diminishing returns in handling edge cases like automatic failover when a server crashes.
- Can run multiple Electric instances against a single PG.
- Can fail over between and load balance across them.
- Document and demonstrate.