After upgrading from v1.7.7 to v2.1.0 we noticed OOM kills in the efs-csi-node DaemonSet pods.
Before the upgrade we had memory requests/limits set to 150Mi and never hit them. After the upgrade we consistently hit the memory limit, and the OOM kills didn't stop until we raised the requests/limits to 500Mi.
Our load and the distribution of pods with EFS mounts across nodes didn't change. We use EFS mounts with encryption enabled. There are no configuration overrides; everything uses the default configuration installed by the chart.
Because this is a DaemonSet pod, any increase in memory is multiplied by the node count, so for us this is a significant increase in total memory requests.
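As a stopgap, this is roughly how we raised the node DaemonSet's memory requests/limits through the Helm chart. This is a minimal sketch only: the release/repo names and the `node.resources` values path are assumptions based on a typical install from the kubernetes-sigs Helm repo, so check your chart version's values.yaml before using it.

```bash
# Sketch: bump memory requests/limits for the efs-csi-node DaemonSet via Helm.
# The release name, repo name, and "node.resources" values path are assumptions;
# verify them against your installation and the chart's values.yaml.
helm upgrade aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system \
  --reuse-values \
  --set node.resources.requests.memory=500Mi \
  --set node.resources.limits.memory=500Mi
```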
What you expected to happen?
Average memory consumption not to triple after the upgrade.
How to reproduce it (as minimally and precisely as possible)?
In our case, it's simply performing the upgrade. The increase is consistent across the 9 clusters my team operates.
Anything else we need to know?:
Environment
Kubernetes version (use kubectl version): v1.29.10-eks-7f9249a
Driver version: v2.1.0
Below is the average memory usage per DaemonSet pod across the 9 clusters we operate (roughly 600 pods in total).
The load and density of pods writing to EFS didn't change. The graph shows that upgrading to v2, with the new efs-utils, uses at least 600% more memory.
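If you want to spot-check the same numbers on a single cluster, something like the following works. It assumes metrics-server is installed and that the node pods carry the chart's default `app=efs-csi-node` label; adjust the namespace and selector if you've customized the install.

```bash
# Sketch: current memory usage of the efs-csi-node DaemonSet pods, per container.
# The app=efs-csi-node label is the chart's default; adjust if you've overridden labels.
kubectl top pods -n kube-system -l app=efs-csi-node --containers
```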
sherifabdlnaby changed the title from "Using efs-utils in aws-efs-driver use to 6X more memory" to "Using efs-utils v2 in aws-efs-driver use to 6X more memory" on Dec 11, 2024.
Hi, this increased memory footprint is the result of replacing stunnel with an in-house AWS component, efs-proxy, in efs-csi-driver v2.0+. efs-proxy is designed to optimize throughput by using more aggressive caching, and at higher throughput levels it employs TCP multiplexing to achieve higher throughput, which can also increase memory usage.
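To see what is actually running on a given node, you can inspect the processes inside that node's efs-csi-node pod. This is only a sketch: the container name `efs-plugin`, the process name `efs-proxy`, and the availability of `ps` inside the image are assumptions based on the v2.x driver, so adapt it to your environment.

```bash
# Sketch: list the proxy processes inside one efs-csi-node pod with their memory (RSS, in KiB).
# Container name "efs-plugin", process name "efs-proxy", and the presence of ps are assumptions.
POD=$(kubectl get pods -n kube-system -l app=efs-csi-node -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n kube-system "$POD" -c efs-plugin -- \
  sh -c 'ps -eo pid,rss,comm | grep -E "efs-proxy|stunnel"'
```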
I have the same issue; I've seen the EFS CSI container grow consistently to over 800 MB of RAM.
Is there a way to limit the memory reserved for caching, or to configure the eviction strategy?
I don't think caching is useful at all for the apps I'm running, so I would at least like to limit it.
Hello Team 👋🏻
I wanted to bring this issue to your attention!
kubernetes-sigs/aws-efs-csi-driver#1523