How much GPU memory is needed in general? #8

Great job! Thanks for sharing the tool.
Do you have a recommendation for how much GPU memory is needed to run a prediction? I was trying to run the example prediction on a 4090 (24GB), but it failed with an out-of-memory error. Is it possible to run a prediction with boltz-1 on such a GPU?
Thanks.

Comments
I failed with an RTX 3090 in my lab, but surprisingly succeeded in CPU mode.
I also failed on 24GB (RTX 4090), but succeeded with an RTX A6000 (48GB). Peak memory consumption during the example run: (screenshot omitted).
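For anyone comparing numbers: if you are driving the model from Python with PyTorch, the peak GPU memory of a run can be read back with `torch.cuda.max_memory_allocated()` (or watched live with `nvidia-smi`). A minimal sketch, with a dummy workload standing in for the actual prediction call:

```python
import torch

def run_workload():
    # Placeholder for the actual prediction call; here just a large matmul on the GPU.
    x = torch.randn(4096, 4096, device="cuda")
    return x @ x

torch.cuda.reset_peak_memory_stats()
run_workload()
torch.cuda.synchronize()

peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak CUDA memory allocated: {peak_gib:.2f} GiB")
```

Note that this only counts PyTorch's own allocations; `nvidia-smi` will usually report a somewhat higher total for the process.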
Hi all, yes, the example file is actually fairly large. I'll make a smaller one. We'll also be adding an option today to lower memory consumption, with a bit of slowdown as the tradeoff. Will report back here when it's on the main branch!
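For context, a common way to trade a bit of speed for a lower memory peak in models like this is to process large activation tensors in chunks instead of all at once. The sketch below is only a generic PyTorch illustration of that idea (attention chunked over the query dimension), not the actual option being added here; the shapes and chunk size are made up.

```python
import torch

def chunked_attention(q, k, v, chunk_size=256):
    # Scaled dot-product attention computed one query chunk at a time, so the
    # full (Lq, Lk) score matrix is never materialized at once. Peak memory
    # drops roughly by a factor of Lq / chunk_size, at the cost of a Python loop.
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for start in range(0, q.shape[0], chunk_size):
        end = min(start + chunk_size, q.shape[0])
        scores = (q[start:end] @ k.transpose(-1, -2)) * scale  # (chunk, Lk)
        out[start:end] = torch.softmax(scores, dim=-1) @ v
    return out

# Made-up sizes: 4096 tokens, 64-dim heads.
q, k, v = (torch.randn(4096, 64) for _ in range(3))
y = chunked_attention(q, k, v, chunk_size=256)
```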
Thanks for the prompt feedback. It would be great to have an option to adjust memory consumption for GPUs with less memory!
On my RTX 3090 it takes about 11GB and works fine.
My RTX 4090 with 16GB seems to handle this fine (run details and memory-usage screenshot omitted). This is only running a single protein sequence and ligand, but I imagine it would be fine with multiple chains and ligands.
The example failed on a 24GB machine for me as well: https://instances.vantage.sh/aws/ec2/g5.2xlarge
What was the input and how long did it take?
The weights are only 6.5GB, but when predicting on CPU it uses up to 30GB of RAM. What is taking so much memory?
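If it helps to pin down where the host memory goes, the process's peak resident set size can be read from Python's standard `resource` module on Linux. A small sketch, with a dummy NumPy workload standing in for the actual CPU prediction:

```python
import resource
import numpy as np

def run_workload():
    # Placeholder for the actual CPU prediction; here just a large matmul.
    a = np.random.rand(4000, 4000)
    return a @ a

run_workload()

# ru_maxrss is reported in kilobytes on Linux (in bytes on macOS).
peak_gib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024**2
print(f"Peak resident set size: {peak_gib:.2f} GiB")
```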
1 receptor and 1 ligand; maybe a quarter of an hour, or a few?