
Phi3Transformer does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. #7

Closed
zxtmzxtm opened this issue Nov 6, 2024 · 4 comments

Comments

zxtmzxtm commented Nov 6, 2024

Phi3Transformer does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: huggingface/transformers#28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")
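
For reference, the workaround the error message itself suggests looks like the sketch below: load the model with the eager attention implementation instead of SDPA. This is a minimal sketch; the model id is the placeholder from the error text, not the OmniGen/Phi-3 checkpoint this thread is about.

```python
# Minimal sketch of the workaround suggested by the error message:
# force the "eager" attention implementation instead of SDPA.
from transformers import AutoModel

# "openai/whisper-tiny" is the placeholder from the error text, not the
# OmniGen/Phi-3 checkpoint this thread is about.
model = AutoModel.from_pretrained(
    "openai/whisper-tiny",
    attn_implementation="eager",
)
```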

1038lab (Owner) commented Nov 7, 2024

This happens when you're using an old version of Python, PyTorch, or CUDA. Update the custom node; it now automatically selects between SDPA and eager attention.
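
Such an automatic selection could look roughly like the sketch below. This is an assumption about the approach, not the node's actual code: try SDPA first and fall back to eager when the installed transformers version rejects it for the architecture.

```python
# Hypothetical sketch of auto-selecting the attention implementation:
# prefer SDPA, fall back to "eager" when transformers raises the
# ValueError seen in this thread. Not the custom node's actual code.
from transformers import AutoModel

def load_with_best_attn(model_id: str):
    try:
        return AutoModel.from_pretrained(model_id, attn_implementation="sdpa")
    except ValueError:
        # Raised when the architecture lacks SDPA support in this version.
        return AutoModel.from_pretrained(model_id, attn_implementation="eager")
```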

1038lab closed this as completed Nov 7, 2024
123LiVo321 commented Nov 11, 2024

Updated the whole ComfyUI install, restarted; updated all custom nodes, restarted; updated just the OmniGen node, restarted >> still the same error :(

Using the regular ComfyUI install (not the portable one), Windows 10.

Jams63 commented Nov 11, 2024

@123LiVo321 same here. Updated everything and I'm still getting the same error:

  File "E:\pinokio\api\comfy.git\app\env\lib\site-packages\transformers\modeling_utils.py", line 1565, in _autoset_attn_implementation
    config = cls._check_and_enable_sdpa(
  File "E:\pinokio\api\comfy.git\app\env\lib\site-packages\transformers\modeling_utils.py", line 1731, in _check_and_enable_sdpa
    raise ValueError(
ValueError: Phi3Transformer does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: huggingface/transformers#28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")

dstults commented Nov 24, 2024

In case someone finds this issue: there's a newer one with more info, including a suggested workaround, here:
#23 (comment)
