Alco/acquire lock in replication client #1850
Conversation
…nager

By using the tuple {:shutdown, reason} we avoid logging scary errors about each of the connection processes linked to ConnectionManager dying. Also, it doesn't make much sense to tag the shutdown reason, given that this reason is propagated to the linked processes as an exit signal, which would result in each of the connection processes logging its exit reason using the same tag.
This partially reverts c886f86.
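For context, a minimal sketch (not from this PR) of why a {:shutdown, reason} stop reason keeps logs quiet: OTP treats :normal, :shutdown and {:shutdown, term} as graceful exit reasons, so neither the stopping GenServer nor the processes linked to it emit crash reports. The message name below is hypothetical.

```elixir
defmodule QuietStopSketch do
  use GenServer

  def start_link(arg), do: GenServer.start_link(__MODULE__, arg)

  @impl true
  def init(arg), do: {:ok, arg}

  @impl true
  def handle_info({:connection_error, reason}, state) do
    # The hypothetical {:connection_error, reason} message stands in for
    # whatever triggers the shutdown. Stopping with {:shutdown, reason}
    # propagates the untagged reason to linked processes as an exit signal,
    # so there is no crash report and no need for an extra tag.
    {:stop, {:shutdown, reason}, state}
  end
end
```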
…etup phase

We already have a state machine defined for the replication connection and there's no need to check out the whole connection from the DB pool (twice) just to look up some static PG info.
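A hedged sketch (not the PR's actual code) of doing the static PG info lookup inside the replication connection's own state machine instead of checking a connection out of the DB pool. The module name, step names and the exact query are illustrative assumptions; only the Postgrex.ReplicationConnection callbacks are real API.

```elixir
defmodule ReplicationSetupSketch do
  use Postgrex.ReplicationConnection

  @impl true
  def init(_opts), do: {:ok, %{step: :connecting, pg_version: nil}}

  @impl true
  def handle_connect(state) do
    # First setup step: ask the server for static info over this same connection.
    {:query, "SELECT current_setting('server_version_num')", %{state | step: :query_pg_info}}
  end

  @impl true
  def handle_result([%Postgrex.Result{rows: [[version]]}], %{step: :query_pg_info} = state) do
    # Later steps (acquiring the lock, starting streaming, ...) would follow here.
    {:noreply, %{state | step: :ready, pg_version: version}}
  end

  # Replication data handling is omitted from this sketch.
  @impl true
  def handle_data(_data, state), do: {:noreply, state}
end
```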
This change reduces coupling between Timeline and ShapeCache modules, resulting in easier comprehension and maintenance of this small part of the code. ConnectionManager serves as the supervisor of the connection startup procedure and that's where we keep the knowledge about how a change in PG timeline affects the shape cache. As a bonus, we don't have to first start all shape consumers just to bring them all down afterwards, as we've been doing up until this change.
Nice and clean! 🚀 🔒
{:query, query, state}
end

defp acquire_lock_result([%Postgrex.Result{columns: ["pg_advisory_lock"]}], state) do
Would we ever need to handle a Postgrex.Error from the advisory lock query? I imagine disconnections etc. are handled by the replication client and connection manager, but I don't know if this query can ever fail.
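If such handling turned out to be needed, one option (purely a sketch, not part of the PR) would be an extra clause for acquire_lock_result/2, since Postgrex.ReplicationConnection.handle_result/2 receives a %Postgrex.Error{} struct when a query fails. The logging and exit strategy here are assumptions.

```elixir
# Hypothetical clause; assumes `require Logger` in the surrounding module.
defp acquire_lock_result(%Postgrex.Error{} = error, state) do
  Logger.error("Failed to acquire advisory lock: #{inspect(error)}")

  # Exit gracefully so the supervising ConnectionManager can restart the
  # replication client and retry lock acquisition.
  exit({:shutdown, {:lock_acquisition_failed, error}})
end
```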
@@ -6,6 +6,7 @@ defmodule Electric.Postgres.LockConnectionTest do
alias Electric.Postgres.LockConnection

@moduletag :capture_log
@moduletag :skip
I assume you intend to "move" the lock tests to the replication client tests later?
ae60ac5 to 8381a60 (compare)
After working a bit more on connection process life cycle management in #1841 I've decided to scratch this change. Even though moving the advisory lock acquisition to the replication connection would help us save one idle PG connection, it would rob us of the ability to restart the replication client independently of the other connections, namely, the lock connection and the DB pool. If we merged this PR, restarting the replication client process (which we do to ensure consistent storage of streamed transactions by shape consumers) would always necessitate restarting the whole …
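For illustration, a hypothetical supervision layout showing the trade-off described above. Module names and options are placeholders, not Electric's actual tree; only the Supervisor and Postgrex calls are real API.

```elixir
defmodule MyApp.ConnectionSupervisor do
  use Supervisor

  def start_link(opts), do: Supervisor.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(opts) do
    children = [
      # Holds the advisory lock for as long as it is alive (placeholder module).
      {MyApp.LockConnection, opts[:lock]},
      # Streams logical replication; safe to restart on its own because the
      # lock is held elsewhere (placeholder module).
      {MyApp.ReplicationClient, opts[:replication]},
      # Regular query pool.
      {Postgrex, opts[:pool]}
    ]

    # With :one_for_one, restarting the replication client leaves the lock and
    # the pool untouched. Had the lock moved into the replication client (as
    # this PR proposed), each restart would also drop and reacquire the lock.
    Supervisor.init(children, strategy: :one_for_one)
  end
end
```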
PR chain: #1842 ← #1841 ← #1845 ← #1846 ← #1850
Fixes #1792.