Phi3Transformer does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. #7
Comments
This happens because of an old version of Python, PyTorch, or CUDA. Update the custom node; it will automatically select between SDPA and eager now.
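The automatic selection mentioned above can be sketched as a simple feature check: use SDPA when the runtime exposes `torch.nn.functional.scaled_dot_product_attention`, otherwise fall back to eager attention. This is a hedged illustration, not the custom node's actual code; the function name is hypothetical.

```python
# Hedged sketch (assumption): choose the attention implementation by checking
# whether the runtime module exposes scaled_dot_product_attention (added in
# PyTorch 2.0). The function name is illustrative, not from the custom node.

def pick_attn_implementation(torch_functional) -> str:
    """Return "sdpa" when scaled_dot_product_attention is available,
    otherwise fall back to "eager" (older PyTorch builds)."""
    if hasattr(torch_functional, "scaled_dot_product_attention"):
        return "sdpa"
    return "eager"

# Usage (requires torch; shown for illustration, not executed here):
# import torch.nn.functional as F
# attn = pick_attn_implementation(F)
```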
Updated the whole of Comfy and restarted; updated all custom nodes and restarted; updated just the OmniGen node and restarted. Still the same error :( Using normal Comfy (not the portable version), Windows 10.
@123LiVo321 Same here. Updated everything and am getting the same error: File "E:\pinokio\api\comfy.git\app\env\lib\site-packages\transformers\modeling_utils.py", line 1565, in _autoset_attn_implementation
In case someone finds this issue: there is a newer one with more info, including a suggested workaround, here:
Phi3Transformer does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: huggingface/transformers#28005. If you believe this error is a bug, please open an issue in the Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")