Attempting to modify a fetch after it has completed / aborted / failed #448
The problem with a synchronous abort is that "a fetch" isn't necessarily on the same thread as JavaScript. So I guess you could keep some local state, but there's some state propagation that needs to happen. This becomes more problematic if we ever introduce request coalescing. And then there's the question of whether such propagation happens when a service worker takes over a request from a fetch (which results in a nested fetch of sorts).
As I said in the DM, I don't think we should decide anything until we know the mechanism for aborting.
So, when I initially wrote this, it's not that I wasn't considering fetch on a separate thread from JavaScript - I was, if maybe a bit simplistically. Here's an example of a "normal" fetch being aborted in my model, where the JS thread keeps a local state of the fetch, as Anne described (here represented by the pink "Sync-fetch-aborted zone", where the JS thread's local state reflects the fetch having been aborted):

[diagram]

The idea here was that aborting a fetch would be synchronous, as nothing coming after the line calling `abort()` would see the fetch as anything but aborted.

The aspect of all this that I hadn't fully been considering is that the event for completion of a fetch is something that gets scheduled, not visited, as the JS thread runs (the blue arrow running right to left in these diagrams). As I knew, a fetch that completes while the JS thread isn't idle will be scheduled to run once the current sync block cedes back to the event loop:

[diagram]

What I'd been overlooking, though, was that this event can be scheduled for execution at the completion of a fetch that arrives before the JS thread aborts the fetch:

[diagram]

Because, the fetch having already completed in this scenario, this event is already "in the pipe" at the point where we call `abort()`.
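For concreteness, here's a minimal sketch of that race using a plain Promise in place of a real fetch (no particular fetch-control API is assumed; the flag stands in for a hypothetical `abort()` call):

```js
let aborted = false;

// Stands in for a fetch whose network work has already completed before
// the synchronous block below runs.
const fakeFetch = Promise.resolve("response");

fakeFetch.then(() => {
  // This callback was queued the moment the promise settled; the
  // synchronous block can't pull it back out of the queue, so it still
  // runs even though we "aborted" in the meantime.
  if (aborted) {
    console.log("completion handler fired for a fetch we already aborted");
  }
});

// Synchronous block: the completion is already "in the pipe" when we
// decide to abort (standing in for a hypothetical abort() call).
aborted = true;
```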
Honoring a synchronous abort in that scenario would mean making the already-scheduled completion event "un-happen", but there's never been, as far as I know, any need for such a facility in JS (or for complicating event dispatch like this), and I don't want to be the asshead who obliviously introduced a cause for one (especially as the async-abort model is less wasteful of the surrounding state in these circumstances). So, yeah: I'm coming around to the idea that aborting a fetch that has already finished should be an errorless no-op.

Furthermore, I think it's important that we note that this is a pattern that will likely be repeated for other kinds of fetch control: if aborting a fetch that's already ended is an errorless no-op, then modifying its priority should be treated as errorless as well, for the same reasons (i.e. because it can happen as a routine part of normal operation).
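As a rough illustration of that errorless-no-op model (the class and method names here are hypothetical, not anything spec'd), a single settled check turns both operations into no-ops:

```js
// A hedged sketch (not any spec'd API) of the "errorless no-op" model:
// once the fetch has settled, abort() and a hypothetical setPriority()
// simply do nothing instead of throwing.
class HypotheticalFetchController {
  #settled = false;

  markSettled() {
    // Called internally when the fetch completes, fails, or is aborted.
    this.#settled = true;
  }

  abort() {
    if (this.#settled) return; // errorless no-op after completion
    this.#settled = true;
    // ...tear down the network activity here...
  }

  setPriority(priority) {
    if (this.#settled) return; // priority changes get the same treatment
    // ...propagate the new priority to the in-flight fetch...
  }
}

const controller = new HypotheticalFetchController();
controller.markSettled();
controller.abort();            // no error - routine part of normal operation
controller.setPriority("low"); // likewise a no-op
```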
What you also want to take into account is that if service workers can observe the state and perhaps propagate it to any fetches they do themselves, you have up to three threads/processes that need to be kept in check, all through message passing.
@stuartpb I think we can close this now?
Uh, yeah, I'd say so, this was pretty much settled by #448 (comment), and it's certainly past relevance now that #523 is up - this issue is mostly of historical concern (hence the big post-facto diagrammitization of the issues with my original proposal). |
Thanks! |
Documenting a side-conversation with @jakearchibald via Twitter DM spun off from #447:
Whatever approach we go with for aborting a fetch, we need to answer the question of what should happen when code tries to abort a fetch that has already finished, in one manner or another (completed, timed out, server hung up, already aborted by the exact same code, whatever).
(Note that I'm using the term "a fetch" to refer to an in-progress request dispatched by a caller of `fetch()`, instead of "a request", as `Request` right now can describe the object that a Service Worker receives as part of the `fetch` event, which describes an inert request that has yet to be made, and I am taking pains to make the distinction.)

You could make the case that this is harmless, especially in a design where aborted fetches don't resolve/reject in the same way as a normal request/response (the way some of the Canceling proposals worked, for instance, especially the ones with a notion of "signaling disinterest and unsubscribing from the response").
However, my stance is that cancelling a request should be a synchronous operation that throws an error if the request has already terminated.
Note that I have since changed my mind about this.
Skip to my next comment, posted a few weeks later; it describes why going forward with the design described by the rest of this comment would be a bad idea.
It should be synchronous (not that I've really seen any contention around this, but just to be clear) because, as soon as the request has been aborted, there's no more room for any events in its lifecycle (not counting events around the lifecycle, such as rejecting any Promises etc. waiting on the fetch, which should of course be triggered async as normal). Even if there were any I/O relating to the request after we've aborted it, it would no longer be considered part of the fetch. Once it's aborted, the fetch is closed - there's no in-between. A fetch has either finished or it hasn't, and once it's finished, it can't finish again.
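Here's a minimal sketch of what that synchronous model could look like; the object shape and names (`closed`, `abort()`, `response`) are hypothetical stand-ins, not a proposed API:

```js
// abort() flips the fetch's state immediately on the calling thread;
// only the surrounding notifications (the promise rejection) are
// delivered asynchronously, as normal.
function makeHypotheticalFetch() {
  let closed = false;
  let reject;
  const response = new Promise((_resolve, rej) => { reject = rej; });

  return {
    response,
    get closed() { return closed; },
    abort() {
      if (closed) throw new Error("fetch already finished");
      closed = true; // synchronous: no more room for lifecycle events
      queueMicrotask(() => reject(new Error("aborted"))); // async, as normal
    },
  };
}

const rq = makeHypotheticalFetch();
rq.response.catch((err) => console.log("rejected later:", err.message));
rq.abort();
console.log(rq.closed); // true immediately - the state change is synchronous
```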
It's my stance that attempting to abort a fetch that's already closed should throw an error (even if aborting doesn't cause the fetch to terminate, which, for what it's worth, I think would be wrong). This is the case I laid out to Jake:
It's really easy to avoid aborting a fetch that's already finished: whatever has the `.abort()` method (Request, FetchController, whatever) ought to have a `.closed` (or `.finished`, or whatever) property right next to it that fetch-aborting code can check if it might act on an already-finished fetch (i.e. `if (!rq.closed) rq.abort()` or `if (!rq.status.finished) rq.control.stop()` or whatever).

It's not hard to imagine an edge case where this should be handled but could be overlooked. If a developer mistakenly attempts to abort a finished request, they get an error they can easily modify the code to avert; if a developer mistakenly aborts a resolved fetch that should have been handled differently and it doesn't throw an error, they'll get no indication of what's going wrong, short of hopping into the debugger and fiddling with breakpoints for an hour.
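A self-contained sketch of that guard pattern, with a minimal stand-in object (the `.closed`/`.abort()` names are placeholders, not a real API):

```js
const rq = {
  closed: false,
  abort() {
    if (this.closed) throw new Error("fetch already finished");
    this.closed = true;
  },
};

// Fetch-aborting code that might act on an already-finished fetch checks first:
if (!rq.closed) rq.abort();

// A mistaken second abort surfaces as an error pointing at the bug,
// rather than silently doing nothing:
try {
  rq.abort();
} catch (err) {
  console.error("double abort:", err.message);
}
```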
"Double resolution" errors
Say a Service Worker is written for an endpoint that takes a long time to identify invalid records, and that has a quota for the number of invalid requests that can be initiated. Under the assumption that requests for resources it discovers to be invalid will still be in progress when it aborts them, the Service Worker decrements the "failed request" quota every time it aborts a fetch; it also handles fetches that come back as errors by decrementing the quota. If the endpoint speeds up one day, and all the requests that were previously being aborted before finishing are now being aborted after, then with silent aborts the quota would be depleted at double the normal rate, with no indication as to why. If trying to abort a request that's already finished throws an error instead, that surfaces the applicable error and potentially averts whatever wrong behavior would have ensued after the double-abort.
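A hedged sketch of that failure mode; everything here is a hypothetical stand-in (a fake controller whose `abort()` silently no-ops after completion - the behavior being argued against - and a simple numeric quota):

```js
let quota = 100;

function makeFinishedFetch() {
  // Stands in for a fetch that completed (as an error) before the
  // Service Worker's aborting code ran.
  return {
    finished: true,
    abort() { if (this.finished) return; this.finished = true; }, // silent no-op
    response: Promise.reject(new Error("invalid record")),
  };
}

const controller = makeFinishedFetch();

// The worker assumes the fetch is still in flight:
controller.abort(); // silently does nothing - no hint the assumption is stale
quota -= 1;         // quota charged for the abort

controller.response.catch(() => {
  quota -= 1;       // quota charged again on the error path: double-counted
  console.log("quota now", quota); // drains at double the normal rate
});
```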