[Feature] Adopt FluxPipeline to support Flux models #114
Comments
If you want to save VRAM anyway, wouldn't it be better to use a model like this one that uses bitsandbytes? https://huggingface.co/lllyasviel/flux1-dev-bnb-nf4/blob/main/flux1-dev-bnb-nf4-v2.safetensors
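For reference, here is a minimal sketch of how NF4 quantization with bitsandbytes could be wired up through diffusers. It assumes a recent diffusers release with bitsandbytes quantization support and the standard black-forest-labs/FLUX.1-dev repo; the linked flux1-dev-bnb-nf4-v2.safetensors file is a single-file checkpoint aimed at Forge/ComfyUI, so on-the-fly quantization is the closer diffusers-side equivalent:

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# NF4 (4-bit) quantization for the Flux transformer, the largest component.
# Assumes diffusers with bitsandbytes quantization support is installed.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
# Offload idle sub-models (text encoders, VAE) to CPU to reduce peak VRAM.
pipe.enable_model_cpu_offload()

image = pipe(
    "a cozy wooden cabin in a snowy forest at dusk",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_nf4.png")
```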
Jesus, just check the response rate. Isn't it obvious that this is a dead horse?
Omost may be a frozen project, but it is a perfect fit for Flux because of how Flux handles the complex prompts that Omost produces. @lllyasviel thanks for your amazing work; it would be great to see something like Omost with Flux.
Check out https://github.com/instantX-research/Regional-Prompting-FLUX. We have implemented Regional Prompting for FLUX, and it should be usable together with Omost!
There is a promising new model:
https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux
It would be good to take this pipeline from the diffusers package and, if possible, add the canvas and attention/transformer additions from StableDiffusionXLOmostPipeline (see the sketch below).
Also, please note that there is a reduced T5 text encoder available to save some VRAM:
https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main
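To make the request concrete, here is a rough, hypothetical sketch of what an Omost-style wrapper around diffusers' FluxPipeline could look like. The class name FluxOmostPipeline, the encode_canvas helper, and the assumed structure of Omost's canvas output (bag_of_conditions entries with prefixes, suffixes, and a mask) are all assumptions for illustration; the hard part, injecting per-region attention masks into FluxTransformer2DModel's joint-attention blocks, is only indicated by a comment:

```python
import torch
from diffusers import FluxPipeline


class FluxOmostPipeline(FluxPipeline):
    """Hypothetical Flux counterpart of StableDiffusionXLOmostPipeline."""

    @torch.no_grad()
    def encode_canvas(self, canvas_outputs):
        # `canvas_outputs` is assumed to follow Omost's canvas format:
        # a dict with a "bag_of_conditions" list, where each entry has
        # "prefixes"/"suffixes" (prompt pieces) and a binary "mask" over the canvas.
        region_conds = []
        for cond in canvas_outputs["bag_of_conditions"]:
            prompt = ", ".join(cond["prefixes"] + cond["suffixes"])
            # _get_t5_prompt_embeds is a private helper of FluxPipeline in current
            # diffusers; relying on it here is an assumption, not a stable API.
            t5_embeds = self._get_t5_prompt_embeds(
                prompt=prompt, num_images_per_prompt=1, device=self._execution_device
            )
            region_conds.append({"embeds": t5_embeds, "mask": cond["mask"]})
        # The remaining work is to turn these per-region embeddings and masks into
        # masked joint attention inside FluxTransformer2DModel, analogous to what
        # StableDiffusionXLOmostPipeline does for the SDXL UNet.
        return region_conds


pipe = FluxOmostPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
```

Whether the reduced single-file T5 checkpoints linked above can be loaded directly as the pipeline's text_encoder_2 would still need checking, since they are packaged for ComfyUI rather than for transformers/diffusers.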