It's not working for me #14
Update the custom node.
I get the same error. I just installed it today and tried both the Manager install and a manual git install. I have the latest version of ComfyUI and everything is updated, but it still doesn't work.
I got rid of that error by running update_comfyui_and_python_dependencies.bat from the update folder in ComfyUI. Now I get a torch.OutOfMemoryError: Allocation on device because I don't have enough VRAM on this PC, but I will try on another one.
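To confirm how much VRAM is actually free before retrying on another machine, a minimal sketch (not part of the custom node; `torch.cuda.mem_get_info` returns free and total device memory in bytes):

```python
import torch

# Report free vs. total VRAM on the default CUDA device.
if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()
    print(f"free VRAM: {free / 1024**3:.2f} GiB of {total / 1024**3:.2f} GiB")
```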
What's the error message you got?
This model is somewhat VRAM-intensive, so it is recommended to limit the input image size to 512 or 768 pixels to reduce VRAM usage. Setting the output resolution to 1024x1024 should work fine and produce the expected results. Additionally, there is a significant difference in VRAM consumption when loading one image versus two, which may require further optimization.
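One way to apply that input-size advice before the image ever enters the graph is to pre-shrink it. A minimal sketch using Pillow (the file names are placeholders, and 768 matches the suggestion above):

```python
from PIL import Image

img = Image.open("input.png")   # placeholder path
max_side = 768                  # or 512 for a tighter VRAM budget

# Downscale only if the longest side exceeds the cap, preserving aspect ratio.
scale = max_side / max(img.size)
if scale < 1:
    img = img.resize((round(img.width * scale), round(img.height * scale)),
                     Image.LANCZOS)
img.save("input_resized.png")
```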
```
Phi3Transformer does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: huggingface/transformers#28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")
```
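The workaround the message suggests is to pass `attn_implementation="eager"` when loading the model. A minimal sketch, assuming the model is loaded through transformers' `AutoModel.from_pretrained` (the model id below is the placeholder from the error text, not this project's checkpoint):

```python
from transformers import AutoModel

# Fall back to the eager attention implementation instead of the
# unsupported scaled_dot_product_attention path.
model = AutoModel.from_pretrained(
    "openai/whisper-tiny",        # placeholder id from the error message
    attn_implementation="eager",
)
```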