Invalid Buffer Size/Mac #14

Open
jadolfbr opened this issue Nov 18, 2024 · 5 comments

Comments

@jadolfbr

I am attempting to run this on a Mac (M1), since I couldn't tell whether it works out of the box. I am also installing on Linux, but figured I would try it here first. The error is below. Has this been tested on a Mac? It does seem to find the mps 'gpu'.

File "/Users/jadolfbr/miniforge3/envs/pytorch_m1/lib/python3.10/site-packages/boltz/model/layers/triangular_attention/attention.py", line 143, in forward
x = self.mha(
File "/Users/jadolfbr/miniforge3/envs/pytorch_m1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/jadolfbr/miniforge3/envs/pytorch_m1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/jadolfbr/miniforge3/envs/pytorch_m1/lib/python3.10/site-packages/boltz/model/layers/triangular_attention/primitives.py", line 502, in forward
o = _attention(q, k, v, biases)
File "/Users/jadolfbr/miniforge3/envs/pytorch_m1/lib/python3.10/site-packages/boltz/model/layers/triangular_attention/primitives.py", line 213, in _attention
a = torch.matmul(query, key)
RuntimeError: Invalid buffer size: 9.02 GB

@jwohlwend
Owner

Hi! Thanks for reporting this. We have not tested on M1 GPUs, but is it possible it's running out of GPU memory here? We'll try to run some tests on our side!

@jadolfbr
Author

I think it was a memory issue as well (ChatGPT thought so too). I have 16 GB of memory, so it is a bit strange, since this was just the example input. I also ran it on our cluster on this instance type, https://instances.vantage.sh/aws/ec2/g5.2xlarge (24 GB GPU memory), and it ran out of memory as well. I will try something larger in the meantime.
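For a rough sense of scale (an illustrative sketch only; the head count, dtype, and shapes are assumptions, not the exact Boltz configuration), the score tensor produced by the `torch.matmul(query, key)` call in the traceback grows roughly cubically with the number of tokens, so inputs of a few hundred residues can already reach the multi-GB range:

```python
# Illustrative back-of-the-envelope estimate of the triangular-attention score
# tensor, roughly n_tokens * n_heads * n_tokens * n_tokens entries. The head
# count and dtype size are assumptions, not the exact Boltz settings.
def score_tensor_gb(n_tokens, n_heads=4, dtype_bytes=4):
    return n_tokens * n_heads * n_tokens * n_tokens * dtype_bytes / 1e9

for n in (256, 512, 768, 1024):
    print(f"{n} tokens -> ~{score_tensor_gb(n):.1f} GB for the attention scores alone")
```

On Apple-silicon GPUs a single allocation also has to fit inside one Metal buffer, which is likely why this surfaces as "Invalid buffer size" rather than a plain out-of-memory error.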

@jwohlwend
Owner

Which input are you trying? We'll be releasing a low memory mode soon!

@jadolfbr
Author

The input is the example. It's probably memory - I couldn't run the example on a 24 GB GPU or on a 32 GB CPU machine. I'm running it on a bigger CPU machine now and will try the new memory mode when it's released.

@gcorso
Collaborator

gcorso commented Nov 29, 2024

The chunking code is now live in version 0.3.0 @jadolfbr! Let us know if this works better for you!
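For anyone curious what the chunking buys, below is a generic sketch of the idea (illustrative only, not the actual Boltz 0.3.0 implementation): process the query rows in slices so the full pairwise score matrix is never materialized at once.

```python
import torch

def chunked_attention(query, key, value, chunk_size=128):
    """Generic chunked-attention sketch (illustrative, not Boltz's code).

    query: (..., N, d), key: (..., d, N), value: (..., N, d).
    Only a (chunk_size x N) slice of the score matrix exists at any time,
    so peak memory for the scores drops from O(N^2) to O(chunk_size * N).
    """
    outputs = []
    scale = query.shape[-1] ** 0.5
    for start in range(0, query.shape[-2], chunk_size):
        q_chunk = query[..., start:start + chunk_size, :]
        scores = torch.matmul(q_chunk, key) / scale      # (..., chunk, N)
        weights = torch.softmax(scores, dim=-1)
        outputs.append(torch.matmul(weights, value))     # (..., chunk, d)
    return torch.cat(outputs, dim=-2)

# Sanity check against the unchunked computation (small, CPU-friendly shapes).
q = torch.randn(2, 4, 256, 32)
k = torch.randn(2, 4, 32, 256)
v = torch.randn(2, 4, 256, 32)
ref = torch.matmul(torch.softmax(torch.matmul(q, k) / 32 ** 0.5, dim=-1), v)
assert torch.allclose(chunked_attention(q, k, v, chunk_size=64), ref, atol=1e-5)
```

The trade-off is a Python-level loop over chunks, so smaller chunk sizes lower peak memory at the cost of some speed.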
