Memory spikes when using plugins with @graphql-hive/gateway@^1.0.8
#2
Hey there! Thanks for reporting. For me to debug this in depth I'd need to know a bit more about the test env you have and create a benchmark that replicates the behaviour in order to pinpoint the issue. Can you tell me:
@enisdenjo Thank you for the quick reply!
We're running the gateway in a Docker container in a Kubernetes pod. The container is running Node 20.14.0.
This is with real traffic. It's overall constant, but we generally get lower traffic overnight. The spikes are consistently every 15-20 minutes, but not exactly.
We had some fights with Node and memory spikes in the past. I'm wondering whether that's the case here too. Can we start by updating Node in the container to the upcoming LTS, v22.10.0 (releasing tomorrow)?
Are the spikes also happening during lower traffic?
Upgrading node to …
We recently upgraded from `graphql-mesh` v0 to `@graphql-mesh/compose-cli` v1 + Hive Gateway, as recommended by the migration guide. Here are our relevant dependencies and versions:
Here's our config:
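(For orientation, it follows the standard gateway.config.ts shape with defineConfig from @graphql-hive/gateway; the supergraph path and plugin list below are illustrative stand-ins, not our exact values.)

```ts
// gateway.config.ts — illustrative shape only; paths and plugins are assumptions
import { defineConfig } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  // supergraph produced by @graphql-mesh/compose-cli
  supergraph: './supergraph.graphql',
  // custom plugins are registered here (Datadog tracing, graphql-armor, etc.)
  plugins: () => [
    // ...homegrown and vendor plugins
  ],
})
```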
Before v1, memory usage was stable and plateaued. After upgrading to Hive Gateway, we immediately observed unstable memory utilization.
Zooming in, every 15 to 30 minutes, there is a sharp spike in memory.
Our only clue was this release note in mesh v0.98.7 that referenced memory leaks from plugins.
We're using a mix of homegrown plugins that perform various functions (such as Datadog tracing) and graphql-armor vendor plugins. We ran a short experiment turning them all off and didn't observe any memory spikes:
To rule out the possibility that the content of our plugins was the issue, we created a barebones empty plugin to see if we still saw a memory spike with just that, and we did. Using the below config, with just a plugin that hooks into `onFetch` and `onExecute`, we still saw a memory spike after around 20 minutes. This made it clear to us that there must be some memory leakage within the plugin infrastructure, possibly similar to the issue referenced in the graphql-mesh release notes.