
Using efs-utils v2 in aws-efs-driver leads to 6X more memory usage #261

Open
sherifabdlnaby opened this issue Dec 11, 2024 · 2 comments

@sherifabdlnaby

Hello Team 👋🏻
I wanted to bring this issue to your attention!
kubernetes-sigs/aws-efs-csi-driver#1523

What happened?

After upgrading from v1.7.7 to v2.1.0, we noticed OOMs in the DaemonSet's efs-csi-node pods. Before the upgrade, we had set memory requests/limits of 150Mi and never hit them. After the upgrade, we consistently hit the memory limit, and the OOMs didn't stop until we increased the requests/limits to 500Mi.
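
For reference, the bump was applied through the Helm chart's node resources, roughly like this (a minimal sketch; it assumes the chart's standard `node.resources` value):

```yaml
# values.yaml for the aws-efs-csi-driver Helm chart (sketch; assumes the
# chart's node.resources passthrough for the efs-csi-node DaemonSet)
node:
  resources:
    requests:
      memory: 500Mi   # raised from 150Mi after the v2.1.0 upgrade
    limits:
      memory: 500Mi
```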

Our load and the distribution of pods with EFS mounts across nodes didn't change. We use EFS mounts with encryption enabled. There are no configuration overrides; everything uses the default configuration as installed by the chart.

Given that this is a DaemonSet pod, any increase in memory is multiplied by the node count, so in our case this is a significant increase in total memory requests.

What you expected to happen?

Average memory consumption not to triple after the upgrade.

How to reproduce it (as minimally and precisely as possible)?

In our case, simply upgrading reproduces it. The increase is consistent across the 9 clusters my team operates.

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version): v1.29.10-eks-7f9249a
  • Driver version: v2.1.0

Below is the average memory usage per DaemonSet pod across our 9 different clusters (~600 pods in total). The load and density of pods writing to EFS didn't change. The graph shows that upgrading to v2 with the new efs-utils uses at least 600% more memory.

[Screenshot: graph of average memory usage per efs-csi-node DaemonSet pod across the 9 clusters, before and after the upgrade]
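
To spot-check the same numbers live on a cluster, something along these lines works (a sketch; the `app=efs-csi-node` label is assumed from the chart's defaults, and metrics-server must be installed):

```sh
# Per-container memory usage of the node DaemonSet pods
kubectl top pod -n kube-system -l app=efs-csi-node --containers
```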

@tdachille-dev

Copying the answer from the original issue:

Hi, this increased memory footprint is the result of replacing stunnel with an in-house AWS component, efs-proxy, in efs-csi-driver v2.0+. efs-proxy is designed to optimize throughput by using more aggressive caching, and at higher throughput levels it employs TCP multiplexing, which can also increase memory usage.
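
One way to confirm which component is serving the TLS mounts on a given node is to list processes inside the node pod (a sketch; the `efs-plugin` container name and the presence of `ps` in the image are assumptions):

```sh
# On driver v2.0+ expect efs-proxy processes (one per encrypted mount)
# where older versions showed stunnel
kubectl exec -n kube-system <efs-csi-node-pod> -c efs-plugin -- ps aux \
  | grep -E 'efs-proxy|stunnel'
```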

@azagarelz

I have the same issue; I've seen the EFS CSI container consistently grow past 800 MB of RAM.
Is there a way to limit the memory reserved for caching or to configure the eviction strategy?
I don't think caching is useful at all for the apps I'm running, so I would at least like to limit it.
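
For anyone who needs a knob here: one option that has come up in the driver's documentation and issues is opting back into stunnel via the `stunnel` mount option, which sidesteps efs-proxy's caching entirely; treat the option name as an assumption and verify it against your driver version before relying on it. On a StorageClass that might look like:

```yaml
# Sketch: revert from efs-proxy to stunnel for encrypted mounts
# (the `stunnel` mount option is an assumption; check the aws-efs-csi-driver
# docs for your version)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc-stunnel
provisioner: efs.csi.aws.com
mountOptions:
  - tls
  - stunnel
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-xxxxxxxx   # placeholder
  directoryPerms: "700"
```

Note this trades efs-proxy's throughput optimizations for the older stunnel behavior.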
