The non-flash-attention modules in this repository do not seem to be installable on AMD cards. I would be happy to help address this but need some guidance.
Errors:
import rotary_emb on its own returns ImportError: libc10.so: cannot open shared object file: No such file or directory.
If you import torch first and then import rotary_emb, the result is ImportError: /.../x86_miniconda3/envs/flash2/lib/python3.11/site-packages/rotary_emb.cpython-311-x86_64-linux-gnu.so: undefined symbol: _Z17apply_rotary_cudaN2at6TensorES0_S0_S0_S0_S0_b.
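For reference, a minimal script that reproduces both import orders in one process (the install path above is elided, so the exact location will differ on other machines):

```python
# Case 1: importing the extension on its own. libc10.so ships with PyTorch,
# so the dynamic loader cannot find it until torch itself has been imported.
try:
    import rotary_emb
except ImportError as e:
    print(e)  # libc10.so: cannot open shared object file: No such file or directory

# Case 2: importing torch first resolves libc10.so, but the ROCm build of the
# extension is then missing the compiled kernel symbol.
import torch  # noqa: F401
try:
    import rotary_emb
except ImportError as e:
    print(e)  # undefined symbol: _Z17apply_rotary_cudaN2at6TensorES0_S0_S0_S0_S0_b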
Configuration:
PyTorch 2.1.2
ROCM 5.6
MI250 GPU
It looks like there are some CUDA operators in the rotary lib which do not work on AMD GPUs. Therefore you cannot use it directly without kernel support, even if you can build it.
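In case it helps anyone hitting this on ROCm in the meantime, below is a rough pure-PyTorch stand-in for the rotated update. It assumes the apply_rotary(x1, x2, cos, sin, out1, out2, conj) signature suggested by the undefined symbol (six tensors and a bool); this is a sketch under that assumption, not a validated drop-in replacement for the compiled kernel.

```python
import torch

def apply_rotary_torch(x1, x2, cos, sin, out1, out2, conj=False):
    # Rotate each (x1, x2) pair by the angle encoded in (cos, sin),
    # writing the results into the provided output tensors.
    # conj=True applies the inverse rotation (as used in the backward pass).
    if conj:
        out1.copy_(x1 * cos + x2 * sin)
        out2.copy_(-x1 * sin + x2 * cos)
    else:
        out1.copy_(x1 * cos - x2 * sin)
        out2.copy_(x1 * sin + x2 * cos)
    return out1, out2
```

This runs entirely through standard PyTorch ops, so it should work on ROCm builds of PyTorch, at the cost of the fusion the CUDA kernel provides.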