fix: set timeout on sendmsg to avoid memory leak #324
Conversation
mihivagyok commented on Feb 10, 2022
- This PR tries to fix "Konnectivity server leaks memory and free sockets" #255
- I have found that if the server-agent tunnel hangs, new requests cause a memory leak in the server
- I made the change based on the following discussion: "Stream.Write() waits indefinitely if stream is full. Cancellations apply to entire stream, not messages." grpc/grpc-go#1229 (comment)
- once this change is in place, the memory leak no longer happens
- note: this change does not solve the underlying issue that makes the tunnel hang
- it only helps to avoid socket leaks when the channel is full (a simplified sketch of the pattern follows below)
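For context, here is a minimal, self-contained sketch of the pattern this change follows (the `sender` interface, `slowSender`, and all other names are illustrative placeholders, not the actual types in this repository): the blocking `Send`/`SendMsg` runs in its own goroutine that reports into a buffered channel, and the caller waits on that channel or a timer.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// sender stands in for the gRPC stream's Send/SendMsg; in the real proxy
// server this would be the agent tunnel stream.
type sender interface {
	Send(pkt []byte) error
}

// sendWithTimeout runs the potentially blocking Send in its own goroutine and
// waits on a buffered channel or a timer, so a full or hung stream cannot
// block the caller (and any locks it holds) forever.
func sendWithTimeout(s sender, pkt []byte, timeout time.Duration) error {
	errChan := make(chan error, 1) // buffered so the goroutine can always finish
	go func() {
		errChan <- s.Send(pkt)
	}()

	t := time.NewTimer(timeout)
	defer t.Stop()

	select {
	case err := <-errChan:
		return err
	case <-t.C:
		return fmt.Errorf("send timed out after %v", timeout)
	}
}

// slowSender simulates a stream whose send blocks longer than the timeout.
type slowSender struct{}

func (slowSender) Send(pkt []byte) error {
	time.Sleep(time.Second)
	return errors.New("stream closed")
}

func main() {
	err := sendWithTimeout(slowSender{}, []byte("packet"), 100*time.Millisecond)
	fmt.Println(err) // prints: send timed out after 100ms
}
```

Note that the timeout only unblocks the caller; the goroutine running `Send` lives until `Send` itself returns, which is exactly what the review discussion below digs into.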
Hi @mihivagyok. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
@mihivagyok are you able to share the steps you used to reproduce the memory leak and test your fix? I'd like to test/validate your patch against one of my development clusters.
@andrewsykim I will share the steps, but let me re-validate the scenario first. I'll let you know once I'm ready! Thanks,
Configuration:
Test:
off topic:
We have two agent instances (one for the DestHost backend manager, one for the Default backend manager), but with the same … Thanks,
@mihivagyok I think I was able to reproduce this issue, will try to test your patch to see if it resolves the issue.
Are you able to check the …
I'm not able to reproduce the fix described in #261 (comment), but I also was not able to test this using multiple backend strategies. I think the patch makes sense though given grpc/grpc-go#1229.
Overall LGTM, left some minor comments
- simplify the anonymous func()
@mihivagyok I think I'm able to reproduce this issue now, and it was possible while just using the default proxy strategy. From the goroutine stacktrace, this was the semaphore lock that was blocking goroutines:
My steps to reproduce were to mimic some level of backend unavailability.
@andrewsykim That's great news. I think my concern regarding multiple proxy strategies / backend managers is still valid here. To fix this, I think a single backend manager is needed which could be configured with multiple strategies, so each connection would be used in only one manager. That one backend manager would select from its agents based on the strategies. I think the code could be changed fairly easily to achieve this. Do you think this is feasible or not?
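For illustration only, a rough sketch of that proposal (none of these types exist in the repository as written; the names and strategy selection are made up to show the shape of the idea): a single backend manager owns the agent connections and walks an ordered list of strategies when picking a backend.

```go
package main

import (
	"errors"
	"fmt"
)

// ProxyStrategy is an ordered preference for picking a backend.
type ProxyStrategy int

const (
	StrategyDestHost ProxyStrategy = iota
	StrategyDefault
)

// Backend is a placeholder for an agent connection.
type Backend struct{ agentID string }

// BackendManager keeps a single registry of agent connections and applies an
// ordered list of strategies when picking one, instead of keeping a separate
// manager (and a separate view of the connections) per strategy.
type BackendManager struct {
	strategies []ProxyStrategy
	byAgent    map[string][]*Backend // keyed by agent ID
	byHost     map[string][]*Backend // keyed by host the agent can reach
}

// Backend tries each configured strategy in order: destHost routing first,
// then any available agent as the default.
func (m *BackendManager) Backend(destHost string) (*Backend, error) {
	for _, s := range m.strategies {
		switch s {
		case StrategyDestHost:
			if bs := m.byHost[destHost]; len(bs) > 0 {
				return bs[0], nil
			}
		case StrategyDefault:
			for _, bs := range m.byAgent {
				if len(bs) > 0 {
					return bs[0], nil
				}
			}
		}
	}
	return nil, errors.New("no backend available")
}

func main() {
	m := &BackendManager{
		strategies: []ProxyStrategy{StrategyDestHost, StrategyDefault},
		byAgent:    map[string][]*Backend{"agent-1": {{agentID: "agent-1"}}},
		byHost:     map[string][]*Backend{},
	}
	b, err := m.Backend("10.0.0.5") // no destHost match, falls back to the default strategy
	fmt.Println(b, err)
}
```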
I'm still getting familiar with the codebase so I'm not 100% sure yet. But if you're willing to open the PR it would be helpful for me to understand your proposal better.
Btw -- it seems like goroutines leaking due to the backend connection mutex can happen for multiple reasons. In my specific test, it was due to the write quota on the stream; I created a separate issue for that here: #335
/cc @cheftako
// wrap a timer around SendMsg to avoid a blocking grpc call
// (e.g. when the stream is full)
errChan := make(chan error, 1)
go func() {
	errChan <- b.conn.Send(pkt) // pkt: the packet being forwarded (variable name illustrative)
}()
Is it possible for this goroutine to start leaking if b.conn.Send can block forever?
My thinking is no, because when the stream eventually closes, Send will return an io.EOF, but I'm not 100% confident about it.
As we use a buffered channel, I think you are right. When the stream closes, it shall free up the goroutine and the channel. At least this is how I understand what I'm reading in the docs/internet:
https://www.ardanlabs.com/blog/2018/11/goroutine-leaks-the-forgotten-sender.html
Thanks!
Although I kinda agree that this change just masks the real problem. Thanks!
> As we use a buffered channel, I think you are right. When the stream closes, it shall free up the goroutine and the channel.

The buffered channel is non-blocking assuming b.conn.Send() returns a value, but b.conn.Send() itself can still block, right?
Yes.
I mention the buffered channel only because it is needed to be able to free up the goroutine and the channel. Thanks!
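To make the buffered-vs-unbuffered point concrete, here is a small, self-contained experiment (all names are made up; `lateSend` stands in for a `Send` that returns only once the stream is eventually closed, well after the timeout). With capacity 0 every timed-out send leaves a goroutine blocked on the channel write; with capacity 1 the goroutine deposits its result and exits, so only the separate case of a `Send` that never returns remains a leak.

```go
package main

import (
	"errors"
	"fmt"
	"runtime"
	"time"
)

// lateSend stands in for a Send call that returns only after the stream is
// eventually closed, i.e. well after our timeout has fired.
func lateSend() error {
	time.Sleep(100 * time.Millisecond)
	return errors.New("io.EOF (stream closed)")
}

// sendWithTimeout runs lateSend in a goroutine and gives up after 10ms.
// bufSize is the capacity of the result channel.
func sendWithTimeout(bufSize int) {
	ch := make(chan error, bufSize)
	go func() {
		ch <- lateSend() // with bufSize == 0 this blocks forever once the timeout wins
	}()
	select {
	case <-ch:
	case <-time.After(10 * time.Millisecond):
	}
}

func main() {
	for _, buf := range []int{0, 1} {
		before := runtime.NumGoroutine()
		for i := 0; i < 20; i++ {
			sendWithTimeout(buf)
		}
		time.Sleep(300 * time.Millisecond) // give the late sends time to finish
		fmt.Printf("buffer=%d, goroutines still blocked: %d\n", buf, runtime.NumGoroutine()-before)
	}
}
```

On a typical run this reports roughly 20 blocked goroutines for the unbuffered case and 0 for the buffered one.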
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: mihivagyok. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
@mihivagyok: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.