
hibernate to free memory #33

Open
wants to merge 1 commit into master

Conversation

SteffenDE

So I was debugging large binary memory usage in my application. Using :recon I found that two geolix processes were holding onto large binary references. I'm using the MMDB2 adapter. Before these changes, my BEAM instance was using ~800MB of memory.
After modifying the loader processes to hibernate and trigger a garbage collection, the memory usage went down to ~270MB.
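
For reference, this is roughly the shape of the change. A minimal sketch, assuming a GenServer-based loader; the module and message names here are hypothetical, not the actual geolix internals:

defmodule Example.Loader do
  use GenServer

  def init(state), do: {:ok, state}

  # Hypothetical load message, only to illustrate the :hibernate option.
  def handle_call({:load_database, path}, _from, state) do
    data = File.read!(path)

    # Returning :hibernate makes the process run a full-sweep garbage
    # collection and shrink its heap after the reply is sent, dropping
    # references to intermediate sub-binaries created while loading.
    {:reply, :ok, Map.put(state, :data, data), :hibernate}
  end
end

The same :hibernate element can be appended to the {:noreply, state} tuples returned from handle_cast/handle_info as well.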

This is the snippet using recon to find processes holding onto binaries:

Process.list()
|> Enum.map(fn pid ->
  {pid, :recon.info(pid, :binary_memory) |> elem(1) |> then(fn x -> x / 1024 / 1024 end)}
end)
|> Enum.sort_by(fn {_pid, mem} -> mem end, :desc)
|> Enum.take(4)

Resulting in:

[
  {#PID<0.766.0>, 531.5929861068726},
  {#PID<0.774.0>, 234.94092750549316},
  {#PID<0.590.0>, 4.768865585327148},
  {#PID<0.1678.0>, 4.103562355041504}
]

The geolix loader (in this case #PID<0.766.0>) was holding onto ~530MB of binaries, while the mmdb2 loader was holding onto ~234MB.

There is a similar PR for the mmdb2 adapter. If you know the proper places where the mmdb2 adapter creates the binary references, another fix would be to call :binary.copy there instead. I don't think hibernating here has any bad consequences, though.
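
In case it helps, this is what I mean with :binary.copy: matching a small piece out of a large refc binary gives you a sub-binary that keeps the whole large binary alive, and :binary.copy/1 detaches it. A generic illustration, not the actual mmdb2 code paths:

# A large binary, standing in for the raw MMDB file contents.
large = :binary.copy(<<0>>, 100_000_000)

# Pattern matching produces a sub-binary that still references `large`.
<<prefix::binary-size(16), _rest::binary>> = large

# Copying the 16 bytes detaches them from `large`, so the big binary
# can be collected once nothing else refers to it.
prefix = :binary.copy(prefix)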


hansihe commented Oct 29, 2024

@mneudert This would be really useful for us. Is there anything I could help with to get this and the corresponding PR in the mmdb2 adapter merged?
