Allow sharing assigns between live navigation #3482
base: main
Conversation
Force-pushed from cb51076 to 5d0b028:
This commit relies on a yet-to-be-finalized feature in Phoenix Channels to perform a custom handover between channel rejoins. It changes the LV code to not leave the old channel before rejoining and also instructs Phoenix not to kill the old process before starting the new channel process. After the new channel is joined, the old one is killed. Any `assign_new` calls in the LV mount will try to fetch assigns from the old LV. This is a backwards-compatible optimization, so if a version of Phoenix is used that does not support handover, it simply falls back to calling the function supplied to `assign_new` as usual. Closes #3357.
Force-pushed from 5d0b028 to 8518e61.
@SteffenDE I originally thought about another way to implement this, which is to let the server say "I have spawned a new channel" and have the client assume it has taken over control. The benefit of doing it this way is that you don't need to keep the old process around: you immediately spawn the new process, passing it the assigns it needs, and that's it. Do you have thoughts about this route?
I'll actually need to handle `push_navigate` as well, forgot about that. I need to think about the consequences of your proposal (e.g. we'll still want to start the new process under the channel's supervisor). Currently, I cannot really imagine what an API for this would look like. We cannot really shorten the time that the process needs to wait though, I think, as we don't know the keys from `assign_new` upfront, at least without looking through the LV mount's AST or just passing all assigns, which could be rather expensive. So we need to at least wait for the initial mount of the new LV, don't we?
We will wait for the initial mount, but because we are also the one who called the initial mount, we will know whether it succeeded or failed. We don't need a timeout that says "in case I am not pinged in 15 seconds, terminate".
We don’t wait in the current implementation :)
So yeah, I think the current implementation is fine for client-initiated live navigation. What you described is what we need for `push_navigate`.
Oh, I see, we don't wait because it is client-initiated! @SteffenDE if we need the version I proposed for `push_navigate`, it can then be used for both `push_navigate` and `live_navigate`, right? 🤔 So it should be more general?
Another downside of the current implementation is that it only works if you rejoin the same channel. The other version should be a bit more general... but it may also be (much?) harder to implement. :(
Sorry to disrupt the conversation, but I couldn't understand the end goal of this issue.
@greven yes, the end goal is to reduce things like querying the current user from the DB on each live navigation by expanding `assign_new` to also work across live navigations.
TODO: check flash handling
TODO: refactor JS to remove duplication between replaceMain and handover
TODO: check whether we should keep the existing view class or replace it completely, passing the old channel to the new one
@josevalim I implemented a proof of concept for server-initiated handovers in 2ec6c6a and phoenixframework/phoenix@cd99fcf. It breaks a couple of tests, so it's very much not finished, but it shows the concept. I did not change the client handovers to use this yet, but we could do that by having the client send a message instead of issuing a rejoin, triggering the same path.
It would be amazing if this could also cover the dead render, so assigns wouldn't need to be computed twice across the double mount.
@bcardarella it would, but it's imho probably nothing LiveView will ever try to do itself. There have been lots of discussions around the double mount in the past. The main issue is that you simply cannot assume that a live render will happen at all, as a dead render can be triggered by any crawler, bot, etc. without ever connecting to the WebSocket. Therefore, you'd always need a timeout on how long things would be kept in memory.

If people want to do this, it should be possible to implement something like it right now by spawning a globally registered process under a key that is put into the session on the dead render, then requesting the data from it during the live render. This could be implemented as a separate library and shared with the community if there is lots of demand for it. I'm not sure it is really worth it though. Anyway, it's definitely out of scope for this issue.
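For illustration, a minimal sketch of what such a separate library could look like, assuming the key is generated and put into the session during the dead render (e.g. in a plug); the module name, registration scheme, and timeout below are assumptions, not part of this PR:

```elixir
defmodule AssignStash do
  use GenServer

  # If the live render never happens (crawler, bot, plain curl, ...),
  # the stash expires after this timeout and the data is dropped.
  @timeout :timer.seconds(30)

  # Called during the dead render; the same `key` must also be put into
  # the session so the connected mount can find the stash again.
  def stash(key, assigns) do
    GenServer.start(__MODULE__, assigns, name: {:global, {:assign_stash, key}})
  end

  # Called from the connected mount; returns {:ok, assigns} or :error.
  def take(key) do
    case GenServer.whereis({:global, {:assign_stash, key}}) do
      nil -> :error
      pid -> GenServer.call(pid, :take)
    end
  end

  @impl true
  def init(assigns), do: {:ok, assigns, @timeout}

  @impl true
  def handle_call(:take, _from, assigns),
    do: {:stop, :normal, {:ok, assigns}, assigns}

  @impl true
  def handle_info(:timeout, assigns), do: {:stop, :normal, assigns}
end
```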
Considerations
There are multiple ways we could implement sharing assigns between live navigations:
1. We could have the transport own an ETS table, or provide an API to assign values to it, and have LV store assigns that use `assign_new` in this storage (a rough sketch of this option follows the list below). Drawbacks: `assign_new` would have a messaging overhead, sending (potentially big) data to another process even if it is never needed again, and if an assign is updated later we'd either need to sync those updates (keeping track of which assigned keys used `assign_new`) or live with potentially stale data.
2. We could hand the old LiveView's assigns over to the new channel process on rejoin. This is the solution that this PR implements (together with a PR in Phoenix). It solves all the drawbacks mentioned above: it only copies the data that is actually used for the optimization and it always gets the most recent data from the old LV's assigns. The only new drawback I can come up with is that the old LV needs to stay in memory a little bit longer than usual, but I think that's fine.
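For reference, a rough sketch of what option 1's transport-owned storage could look like; the table name and API are assumptions for illustration only, and this is not what the PR implements:

```elixir
# Hypothetical storage for option 1: the transport process would own this
# ETS table so cached assigns outlive individual LiveView channel processes.
defmodule TransportAssignCache do
  @table :lv_nav_assign_cache

  # Called once by the owning transport process.
  def init do
    :ets.new(@table, [:named_table, :public, :set])
  end

  # assign_new would write through to the cache, copying the value into the
  # ETS table even if it is never read again (the messaging/copy overhead).
  def put(session_key, assign_key, value) do
    :ets.insert(@table, {{session_key, assign_key}, value})
  end

  # A later mount would try the cache before running the assign_new fun,
  # possibly seeing stale data if the assign was updated in the meantime.
  def fetch(session_key, assign_key) do
    case :ets.lookup(@table, {session_key, assign_key}) do
      [{_key, value}] -> {:ok, value}
      [] -> :error
    end
  end
end
```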
Changes
This PR relies on a yet-to-be-finalized feature in Phoenix Channels (phoenixframework/phoenix#5959) to perform a custom handover between channel rejoins.

It changes the LV code to not leave the old channel before rejoining and also instructs Phoenix not to kill the old process before starting the new channel process. After the new channel is joined, the old one is killed. Any `assign_new` calls in the LV mount will try to fetch assigns from the old LV.

This is a backwards-compatible optimization: if a version of Phoenix is used that does not support handover, it simply falls back to calling the function supplied to `assign_new` as usual.
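From the application's point of view nothing changes. For example, a mount like the sketch below (module, context function, and session key are made up for illustration) keeps working as before, but on live navigation the `:current_user` can be taken over from the old LiveView's assigns instead of re-running the function, provided the Phoenix version in use supports handover:

```elixir
defmodule MyAppWeb.DashboardLive do
  use Phoenix.LiveView

  # On live navigation to this LiveView, assign_new can reuse :current_user
  # from the previous LiveView's assigns; otherwise (initial mount, or an
  # older Phoenix without handover) the anonymous function runs as usual.
  def mount(_params, session, socket) do
    {:ok,
     assign_new(socket, :current_user, fn ->
       MyApp.Accounts.get_user!(session["user_id"])
     end)}
  end

  def render(assigns) do
    ~H"""
    <p>Hello, <%= @current_user.name %>!</p>
    """
  end
end
```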
Closes #3357.
cc @josevalim