
vcl_vrt: Skip VCL execution if the client is gone #3998

Merged
merged 4 commits into varnishcache:master from vcl_h2_reset on Oct 18, 2023

Conversation

dridi (Member) commented Oct 13, 2023

This is the main commit message:

Once a client is reportedly gone, processing its VCL task(s) is just a
waste of resources. The execution of client-facing VCL is intercepted
and an artificial return(fail) is returned in that scenario.

Thanks to the introduction of the universal return(fail), proper error
handling and resource teardown are already in place, which makes this
change safe modulo unknown bugs. This adds a circuit breaker anywhere
in the client state machine where there is VCL execution.

A new Reset time stamp is logged to convey when a task does not complete
because the client is gone. This is a good complement to the walk away
feature and its original circuit breaker for the waiting list, but this
has not been integrated yet.

While the request is technically failed, it won't increase the vcl_fail
counter; a new req_reset counter is incremented instead.

Refs #3835
Refs 61a15cb
Refs e5efc2c
Refs ba54dc9
Refs 6f50a00
Refs b881699
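
For readers who want to picture the mechanism, here is a minimal, self-contained C sketch of the circuit-breaker idea from the commit message above. All names (client_is_gone, run_vcl_method, the req_reset field) are hypothetical illustrations, not the actual Varnish internals.

/*
 * Illustrative sketch only, with hypothetical names: before running a
 * client-facing VCL method, ask the transport whether the client is
 * still there. If it is gone, skip the VCL code entirely and behave as
 * if the method had ended with return(fail), logging a Reset timestamp
 * and counting the event separately from vcl_fail.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

enum vcl_ret { VCL_RET_OK, VCL_RET_FAIL };

struct client_task {
	bool		(*client_is_gone)(const struct client_task *);
	unsigned	req_reset;	/* counter sketch, not the real VSC counter */
};

static enum vcl_ret
run_vcl_method(struct client_task *task, enum vcl_ret (*method)(void))
{

	if (task->client_is_gone != NULL && task->client_is_gone(task)) {
		task->req_reset++;
		printf("Timestamp: Reset (client gone, VCL skipped)\n");
		return (VCL_RET_FAIL);	/* artificial return(fail) */
	}
	return (method());
}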

This change is limited to h2 resets, understood in a broader sense than just receiving an RST_STREAM frame. It is conceivable to mount a similar attack on HTTP/1 sessions with a rapid TCP reset, but it would be less effective, and it should also be easier to detect and mitigate with firewall rules.

However, implementing client polling for HTTP/1 clients is not as straightforward as for h2, so an HTTP/1 client can effectively trigger a workload without sticking around, and its VCL will still be processed to completion. We could probably use the waiter facility, but this needs some research.
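
As a rough, Linux-only illustration of what such HTTP/1 client polling could look like (this is not what the pull request implements), a zero-timeout poll(2) with POLLRDHUP can reveal a peer that has already closed or reset the connection. It cannot distinguish a vanished client from one that legitimately half-closed after sending its request, which is part of why this needs research.

/*
 * Rough illustration only: probe an HTTP/1 client socket for a
 * disconnect without consuming pending data. POLLRDHUP is
 * Linux-specific; a portable version would need another approach.
 */
#define _GNU_SOURCE
#include <poll.h>
#include <stdbool.h>

static bool
http1_client_gone(int fd)
{
	struct pollfd pfd;

	pfd.fd = fd;
	pfd.events = POLLRDHUP;
	pfd.revents = 0;
	if (poll(&pfd, 1, 0) <= 0)
		return (false);	/* no event (or poll error): assume the client is present */
	return ((pfd.revents & (POLLRDHUP | POLLHUP | POLLERR)) != 0);
}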

dridi (Member, Author) commented Oct 13, 2023

I intend to submit a port to the 6.0 branch after this change is approved, if it is.

CHECK_OBJ_NOTNULL(req, REQ_MAGIC);
CAST_OBJ_NOTNULL(r2, req->transport_priv, H2_REQ_MAGIC);	/* h2 stream state */

Lck_Lock(&r2->h2sess->sess->mtx);	/* session lock taken before checking the stream */
Member

I am not convinced that we should take the lock here. Yes, if we don't, we might return a false positive, but only intermittently and only for bad requests, while the lock overhead hits everyone.
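
To make the trade-off concrete, here is a hedged sketch with hypothetical types and names rather than the actual Varnish code: checking the stream error under the session lock gives an exact answer but contends with every other stream on the session, while an unlocked check avoids that contention and merely risks reporting the reset one poll later, which is what the final commit settles on (see its message further below).

/* Sketch of the locking trade-off; hypothetical types and names. */
#include <pthread.h>
#include <stdbool.h>

struct stream {
	pthread_mutex_t	*sess_mtx;	/* lock shared by all streams of a session */
	int		error;		/* set by the rx thread when the stream is reset */
};

static bool
stream_reset_locked(struct stream *s)
{
	bool r;

	pthread_mutex_lock(s->sess_mtx);	/* exact, but serializes every poll */
	r = (s->error != 0);
	pthread_mutex_unlock(s->sess_mtx);
	return (r);
}

static bool
stream_reset_relaxed(const struct stream *s)
{

	/* No lock: a concurrent reset may only be seen on the next poll.
	 * A production version would use an atomic load or otherwise
	 * accept the benign race. */
	return (s->error != 0);
}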

Member Author

bugwash: +1

@TomasKorbar

@dridi The memory situation improved even further after applying this change, and the CPU load improved significantly. I am trying to backport this to 6.6.2, but unfortunately the 02014 test is failing for me.

*** c3 rx: stream: 0, type: WINDOW_UPDATE (8), flags: 0x00, size: 4
*** c3.0 winup->size: 1048576
*** c3 rx: stream: 1, type: WINDOW_UPDATE (8), flags: 0x00, size: 4
*** c3.1 winup->size: 1048576
---- c3.1 Wrong frame type WINDOW_UPDATE (8) wanted RST_STREAM
** c3 Waiting for stream 0

It seems the test expects RST_STREAM but for some reason is not getting it. @dridi, any idea what could be wrong?

dridi (Member, Author) commented Oct 16, 2023

@dridi The memory situation improved even further after applying this change, and the CPU load improved significantly.

Thank you very much for your testing. Your observations match my expectations, and this change was approved.

It seems the test expects RST_STREAM but for some reason is not getting it. @dridi, any idea what could be wrong?

I had similar problems while working on this patch series: changing the behavior sometimes breaks test cases without making them irrelevant, and t02014.vtc took the brunt of it.

I suspect you are witnessing the lack of an h2 rxbuf for request bodies (#3661), and I don't think it will be easy to make this test case stable. Just ignore it and you should be fine.

@TomasKorbar

Okay, will do. Thanks a lot for the suggestion.

It was particularly hard to follow once we reach client c3.
The goal is for top-level transports to report whether the client is
still present or not.
Once a client is reportedly gone, processing its VCL task(s) is just a
waste of resources. The execution of client-facing VCL is intercepted
and an artificial return(fail) is returned in that scenario.

Thanks to the introduction of the universal return(fail), proper error
handling and resource teardown are already in place, which makes this
change safe modulo unknown bugs. This adds a circuit breaker anywhere
in the client state machine where there is VCL execution.

A new Reset time stamp is logged to convey when a task does not complete
because the client is gone. This is a good complement to the walk away
feature and its original circuit breaker for the waiting list, but this
has not been integrated yet.

While the request is technically failed, it won't increase the vcl_fail
counter; a new req_reset counter is incremented instead. This new
behavior is guarded by a new vcl_req_reset feature flag, enabled by
default.

Refs varnishcache#3835
Refs 61a15cb
Refs e5efc2c
Refs ba54dc9
Refs 6f50a00
Refs b881699
The error check is not performed in a critical section to avoid
contention, at the risk of not seeing the error until the next
transport poll.
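
The vcl_req_reset feature flag mentioned above gates the whole behavior. The following is a simplified, self-contained model of that guard with made-up names, not the actual parameter plumbing, showing how a reset is counted separately from a VCL failure.

/* Simplified model of the feature-flag guard; hypothetical names. */
#include <stdbool.h>

struct req_counters {
	unsigned	vcl_fail;	/* untouched by a client reset */
	unsigned	req_reset;	/* incremented instead */
};

static bool
skip_vcl_for_gone_client(bool vcl_req_reset_enabled, bool client_gone,
    struct req_counters *c)
{

	if (!vcl_req_reset_enabled || !client_gone)
		return (false);	/* run the VCL method as usual */
	c->req_reset++;		/* counted separately from vcl_fail */
	return (true);		/* treated as return(fail) by the caller */
}
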
dridi merged commit 05a6869 into varnishcache:master on Oct 18, 2023
1 check passed
dridi deleted the vcl_h2_reset branch on October 18, 2023 at 08:15
dridi added a commit that referenced this pull request Oct 18, 2023
With #3998 we need to ensure streams are not going to skip vcl_recv if
they are reset before the request task reaches that step.

The alternative to prevent the vcl_req_reset feature from interfering
is to simply disable it.
dridi added a commit that referenced this pull request Oct 18, 2023
Noticed while porting #3998 to the 6.0 branch with a varnishtest more
sensitive to timing.
dridi added a commit to dridi/varnish-cache that referenced this pull request Oct 18, 2023
With varnishcache#3998 we need to ensure streams are not going to skip
vcl_recv if they are reset before the request task reaches that step.

The alternative to prevent the vcl_req_reset feature from interfering
is to simply disable it.
dridi added a commit to dridi/varnish-cache that referenced this pull request Oct 18, 2023
Noticed while porting varnishcache#3998 to the 6.0 branch with a varnishtest more
sensitive to timing.
dridi added a commit that referenced this pull request Oct 24, 2023
With #3998 we need to ensure streams are not going to skip vcl_recv if
they are reset before the request task reaches that step.

The alternative to prevent the vcl_req_reset feature from interfering
is to simply disable it.
dridi added a commit that referenced this pull request Oct 24, 2023
Noticed while porting #3998 to the 6.0 branch with a varnishtest more
sensitive to timing.