
HEAD requests cause "unexpected end of file" from gzip decoder #70

Open
colinbendell opened this issue Dec 11, 2019 · 4 comments

@colinbendell
Contributor

It appears HEAD requests cause the gzip decoder to throw an 'unexpected end of file' error. This is likely because the HEAD response has a Content-Length header but no body frame.

@grantila
Owner

Is this on HTTP 2?

@colinbendell
Contributor Author

Yes. HEAD requests don't have a body, but the StreamResponse() constructor still calls stream.pipe() on the response stream. When the h2 stream completes, that pipe is closed; however, because the HEAD response carries a Content-Length header, Response.setBody() assumes there is more stream to decode, and so trips over zlib's inability to handle the truncated chunks without resorting to createGunzip({ finishFlush: zlib.Z_SYNC_FLUSH }). (see: https://github.com/nodejs/node/blob/e66a2acc4cb9fc09fc32d1833b89ae56468a0931/src/node_zlib.cc#L874 )

There are a few solutions as I see it:

  • use createGunzip({ finishFlush: zlib.Z_SYNC_FLUSH }) so that errors related to truncated gzip streams are suppressed. The only major side effect is that a user expecting a gzip error on a partial stream won't get one
  • add a special case for HEAD responses in fetch-http* to pass a null stream to the constructor, and add a null test to handleEncoding(). This is a bit of a hack, since there are other scenarios where you could expect an empty body stream, such as OPTIONS, PROPFIND, and theoretically 304 Not Modified (though that might depend on your position on header metadata vs. cacheable metadata headers)
  • alternatively, add a test on stream.readable.length. This will be 0 at the time of the constructor, but it is potentially misleading/racy since the stream buffer might not have been flushed yet
  • simply catch gzip.on('error') and swallow the thrown error. This is probably the least desirable, since it potentially ignores other errors

With this in mind, I'd suggest adding createGunzip({ finishFlush: zlib.Z_SYNC_FLUSH }), since it has the fewest adverse effects and is probably the most future-proof.

@pseudosavant

I am experiencing this issue as well. Was there any decision on the path forward for a fix? I'd gladly write it up and send a pull request but I want to make sure I don't go down a dead end.

@colinbendell
Contributor Author

I have a PR for this here: #72
