Add EventProvider API #35
For the connection pool in R2DBC, I believe the current plan is to use the object pool mechanism in Reactor. FYI, for callbacks in general, the datasource-proxy-r2dbc project provides before/after callbacks for the R2DBC SPIs.
What triggers the callback, @robertroeser? Do you mean unexpected termination, as from the server? We have Connection.close().
@mp911de either - Connection.close() closes the connection when it's subscribed to; it doesn't actually provide a callback for when the connection is closed. Ideally there would be an onClose method that returns a Publisher you could subscribe to, which would emit when the connection is closed.

@ttddyy I don't recommend using that pool - it doesn't work very well, and you end up spending most of your time queuing jobs in the ScheduledExecutor. That pool was created with the assumption that what you're calling is blocking. Your code also ends up executing on a different set of threads than the Netty event loop if you're using Netty. I have another pool, built for non-blocking applications, that keeps your work scheduled on the Netty event loops; I'll share it when I can clean the code up a bit.

Regarding the lifecycle proxy - you'd still need what I'm suggesting, because there is no way to tell that the client's connection to the database has closed without a callback. The connection doesn't expose anywhere that the underlying TCP connection has closed, short of running a SELECT query.
@robertroeser
@ttddyy I mean a method like Publisher onClose() that emits when the connection is closed. The close could come from Connection.close() being called, from the database closing the connection, or from a network failure. Something like this: if the connection closes right now, there is no way for a pool or anyone else interested to do anything about it unless they run a query - which is also not very reactive.
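(The snippet referenced by "Something like this" didn't survive; below is a hedged reconstruction of what such a method could look like. The interface name ObservableConnection is made up, and nothing here is part of the actual SPI.)

```java
import org.reactivestreams.Publisher;

import io.r2dbc.spi.Connection;

// Hypothetical: an onClose() signal layered on the R2DBC Connection SPI.
public interface ObservableConnection extends Connection {

    /**
     * Returns a Publisher that completes when the connection is closed,
     * whether via Connection.close(), by the database closing it, or
     * through a network failure.
     */
    Publisher<Void> onClose();
}
```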
I guess the goal here is to communicate a state change that's specific to an implementation. I'd like @simonbasle and @smaldini to weigh in to be sure I'm thinking about this properly, especially within the context of the forthcoming pool work.
On a related note: although we named our connection a Connection…
I think @robertroeser was referring to reactor/reactor#651 (comment), which is a Reactor port of rxjava-jdbc and is a pool of blocking resources - therefore a pool requiring a Scheduler to work in a non-blocking fashion. We are working on an alternative, as explained, that would be a pool of non-blocking resources (which you might derive into a pool of blocking resources given a Scheduler). Specifics are yet to be announced, but stay tuned, as it's a top priority we're working on at this very moment!

Regarding Connection#onClose, reactor-netty already offers an onClose on its own Connection contract. I've yet to see if we need something else that extends to release, since reactor-netty can keep TCP connections alive in its pool too, but I think one high-level pool at the R2DBC level will be enough (it doesn't matter whether the connection is kept "alive" downstream).
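(For reference, a minimal sketch of the reactor-netty close signal mentioned above, assuming reactor-netty 0.8+, where the method on the Connection contract is onDispose(). Host and port are placeholders.)

```java
import reactor.netty.Connection;
import reactor.netty.tcp.TcpClient;

public class OnDisposeExample {

    public static void main(String[] args) {
        // Placeholder endpoint; any TCP server would do for this sketch.
        Connection connection = TcpClient.create()
                .host("localhost")
                .port(5432)
                .connectNow();

        // onDispose() completes when the underlying channel is closed,
        // whether locally or by the remote peer.
        connection.onDispose()
                .doOnTerminate(() -> System.out.println("transport connection closed"))
                .subscribe();

        connection.dispose();
    }
}
```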
Right now, we have a fully reactive object pool and an r2dbc-pool component. During our implementation, we found that onClose isn't necessary for us, as we perform a validation before handing out a connection and optionally when returning an object back. We cannot reconnect transparently, as we would lose transactional state. A connection can become invalid not only through a broken TCP connection but also through protocol errors, where we figure it's better to discard/close the connection than to continue with an invalid state. Also, the server state can become invalid, and that is something we do not get a push about; rather, we need to issue a validation query (or a PING packet, when speaking about MySQL). I fail to see in which case it might be useful to notify a component when a connection is not in use and it breaks. If it is in use and it breaks, then we propagate error signals.
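(In hindsight, the validate-before-hand-out approach described here is roughly what later shipped as Connection#validate(ValidationDepth) in R2DBC SPI 0.8. A minimal sketch, with the pool plumbing omitted:)

```java
import io.r2dbc.spi.Connection;
import io.r2dbc.spi.ValidationDepth;
import reactor.core.publisher.Mono;

class ValidatingHandout {

    // Validate a pooled connection before handing it out; if validation
    // fails, close it and emit nothing so the caller can fetch a fresh one.
    Mono<Connection> handOut(Connection candidate) {
        return Mono.from(candidate.validate(ValidationDepth.REMOTE))
                .filter(Boolean::booleanValue)
                .map(valid -> candidate)
                .switchIfEmpty(Mono.from(candidate.close())
                        .then(Mono.<Connection>empty()));
    }
}
```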
That is not the reactive way of doing things - i.e., forcing everyone to check instead of message passing. The API is basically broken without an onClose event - you're just swallowing events.
You’re probably right for drivers that hold on to a connection concept. This isn’t necessarily true for all drivers: some databases don’t maintain a persistent connection, some follow a session concept by multiplexing a single connection, and others use multiple transport channels. I don’t want to generalize an implementation detail onto an API that isn’t tied to communication methods. I’d rather propose handling this feature within the driver-specific configuration, in drivers for which it applies. Currently, I’m aware of at least five vendors where this concept does not apply.
You're rat-holing on the "TCP connection" - I said the connection is closed; I never said TCP connection. It's the state of being closed, whatever that is. It doesn't matter if one connection equals 200 different database connections, or if it fails some check. It's a shame you have to check the connection by other means and can't be notified via message passing.
This is at least true for databases that do not maintain a persistent duplex connection - this isn't limited to TCP; it can be UDP- or Unix Domain Socket-based. Some databases open a transport connection on a per-command basis when starting a conversation (e.g. executing a query). When the remote peer dies or a session times out on the remote side, it's not possible to discover this without activity on the connection. This is the implementation detail I'm referring to.
Great - and when you find out that it's not connected, you issue an onClose event so that your pool (or whatever) receives an event and removes it from the pool. Do I need to maintain my fork because you won't add an event for when a connection closes?
Let's align our discussion on what you want to achieve before going straight to a solution. This ticket started with clean-up for connection pooling. With r2dbc-pool in place, we no longer face such a requirement, at least not for r2dbc-pool. I primarily want to understand what use case we're trying to support and whether the R2DBC SPI is the right place for it. Otherwise, we just keep throwing methods at the SPI without actually knowing how and when they are intended to be used.

What I've understood so far is that you want to catch dysfunctional connections. From that perspective, an onClose event for session closed/connection closed covers just a fraction of the error cases that may happen; in consequence, onClose isn't useful from a general perspective. There is far more that can go wrong (e.g. an expired user account) and that can be considered dysfunctional for the purpose of obtaining a new connection. Some of these things can be recovered (with external ops) while a connection is still connected; some can't.
From our discussion, it would make sense to rather provide an EventProvider API.
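(A hedged sketch of one shape such an EventProvider API could take - all names here are illustrative, and none of this exists in the SPI.)

```java
import org.reactivestreams.Publisher;

// Hypothetical EventProvider: a stream of lifecycle events that covers more
// than just "closed", addressing the concern above that onClose alone is
// only a fraction of the possible failure cases.
public interface EventProvider {

    enum ConnectionEvent {
        CLOSED, BROKEN, PROTOCOL_ERROR, CREDENTIALS_EXPIRED
    }

    Publisher<ConnectionEvent> events();
}
```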
Since we haven't made any progress here, removing this ticket from the 0.9 SPI target. |
Right now there isn't a way to signal when a connection is closed. There needs to be a callback that is invoked when a connection is closed so that cleanup actions can be performed in things like a connection pool.
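(A hedged sketch of the cleanup use case described here: a pool subscribing to a hypothetical onClose() signal and evicting the connection when it fires. PooledConnection, onClose() and evict() are illustrative names only.)

```java
import org.reactivestreams.Publisher;
import reactor.core.publisher.Mono;

// A pool registering cleanup against a hypothetical close signal.
class PoolCleanup {

    interface PooledConnection {
        Publisher<Void> onClose(); // hypothetical: completes when the connection closes
    }

    void register(PooledConnection connection) {
        Mono.from(connection.onClose())
                .doFinally(signal -> evict(connection)) // evict on completion or error
                .subscribe();
    }

    void evict(PooledConnection connection) {
        // remove the connection from the pool's idle set
    }
}
```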