What is this problem? #6

Phi3Transformer does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: huggingface/transformers#28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")

What is this problem?

Comments
I have the same issue on both available OmniGen nodes. Not sure how to fix it?
What versions of Python, PyTorch, and CUDA are you using?
I found the fix: go to \ComfyUI_windows_portable\ComfyUI\models\LLM\OmniGen-v1. That worked for me; hope it helps.
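The comment above doesn't spell out what to change in that folder. A minimal sketch of one plausible reading, assuming the intended edit is forcing eager attention in the model's local config.json (both the file and the key name are assumptions, not confirmed by this thread):

```python
import json
from pathlib import Path

# Hypothetical sketch: assumes a Hugging Face-style config.json lives in the
# OmniGen-v1 model folder. Back the file up before modifying it.
cfg_path = Path(r"ComfyUI_windows_portable\ComfyUI\models\LLM\OmniGen-v1\config.json")
cfg = json.loads(cfg_path.read_text())
cfg["_attn_implementation"] = "eager"  # assumed key name, not confirmed by the thread
cfg_path.write_text(json.dumps(cfg, indent=2))
```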
I've updated the custom node to support the eager attention implementation, so simply updating the custom node in comfyui-manager will resolve the issue. This issue occurred because older versions of Python, PyTorch, or CUDA don't support the newer scaled_dot_product_attention (SDPA). With the eager implementation now in place, your setup should work without further issues. For optimal performance, and to fully benefit from SDPA, I recommend updating your Python, PyTorch, and CUDA versions when possible.
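For reference, a minimal sketch of that kind of fallback: pick SDPA when the running PyTorch exposes scaled_dot_product_attention and drop back to eager otherwise. The checkpoint name here is a stand-in for illustration, not the node's actual loading code:

```python
import torch
from transformers import AutoModelForCausalLM

# SDPA ships with PyTorch >= 2.0; older builds need the eager implementation.
attn_impl = (
    "sdpa"
    if hasattr(torch.nn.functional, "scaled_dot_product_attention")
    else "eager"
)

# Stand-in checkpoint for illustration; OmniGen loads its own Phi-3-based weights.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    attn_implementation=attn_impl,
)
```

Environments that already meet the SDPA requirements keep the faster path; only older setups take the eager fallback.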
Thank you! Will update all.