Running with local optimized installation #15
First of all, great work with the plugin! It really makes the process of working with img2img more frictionless and enjoyable. Thanks for the effort!

I'd like to ask whether you think an option to run against a local installation (not diffusers-based, e.g. the original CompVis codebase) would be feasible in the current state of the plugin.

I ask because my local setup is pretty limited (a GTX 1050 Ti), but I've managed to run the optimized version without issues on my machine. It is basically the original one but with inference at half precision; some options along these lines are basujindal's optimized version, Waifu Diffusion, or hlky's fork. I'd like to keep using it as the backend for koi. Since it also offers other sampling methods, I think it would solve issue #6 as well, and would make it possible to work offline.

If you think that's possible, even with some work from my side, I'm willing to try. Just let me know :)
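For context on those optimized forks: the main trick is to run inference at half precision and to keep parts of the model off the GPU except while they are actually running. A minimal sketch of the idea (`run_on_gpu` is a hypothetical helper, not code from any of those forks):

```python
import torch

def run_on_gpu(module, x, device="cuda"):
    # fp16 weights roughly halve memory use, and keeping the submodule
    # on the CPU except during its forward pass frees VRAM for the rest
    # of the model
    module.half().to(device)
    with torch.no_grad():
        y = module(x.half().to(device))
    module.to("cpu")  # make room for the next stage of the model
    torch.cuda.empty_cache()
    return y
```

The trade-off is less peak VRAM in exchange for extra host-to-device copies, which is why these forks are slower per step.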
It should totally be possible! There is a plan to eventually move to CompVis-based support, and like you said, this should also allow different samplers. If you are interested in tackling a migration, I recommend starting a branch and going with CompVis support, as it will be the basis for anything else. From there we can branch out and look to support more optimized methods, or even start our own optimizations. If you end up starting this, feel free to open a draft PR with some basic work and we can discuss details further!
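If it helps as a starting point, loading a CompVis-style checkpoint usually follows the pattern in that repo's own scripts. A rough sketch, assuming the `ldm` package from the CompVis stable-diffusion repo is importable (`load_compvis_model` is just an illustrative name):

```python
import torch
from omegaconf import OmegaConf
from ldm.util import instantiate_from_config

def load_compvis_model(config_path, ckpt_path, device="cuda"):
    # e.g. configs/stable-diffusion/v1-inference.yaml from the CompVis repo
    config = OmegaConf.load(config_path)
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # SD checkpoints wrap the weights in a "state_dict" key
    state_dict = ckpt.get("state_dict", ckpt)
    model = instantiate_from_config(config.model)
    model.load_state_dict(state_dict, strict=False)
    # half precision roughly halves VRAM, which matters on small GPUs
    return model.half().to(device).eval()
```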
Let me know if you would like me to assign you to this task 👍
Sure, I can try that. Let me know and I can put something together for us to start :)
Currently, the most useful memory-optimized version is this one:
Ohh? 👀👀👀
With --lowvram --opt-split-attention, it fits into about 1.2 GB of VRAM for a 512x512 render. I think it can even run on Kepler cards if you fiddle with older PyTorch versions. But inference in lowvram mode is several times slower and underutilizes the GPU, so --medvram --opt-split-attention is advised instead. Having something like this in Krita would be great.
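For the curious, the idea behind --opt-split-attention-style optimizations is to compute attention in slices along the query dimension, so the full token-by-token score matrix never has to sit in VRAM at once. A simplified sketch of the technique (not the webui's actual implementation):

```python
import torch

def sliced_attention(q, k, v, slice_size=1024):
    # q, k, v: (batch, tokens, dim). A smaller slice_size means less
    # peak VRAM but more kernel launches, matching the lowvram trade-off.
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for i in range(0, q.shape[1], slice_size):
        s = slice(i, i + slice_size)
        # only a (slice, tokens) score matrix is materialized at a time
        attn = torch.softmax((q[:, s] @ k.transpose(1, 2)) * scale, dim=-1)
        out[:, s] = attn @ v
    return out
```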
That other Krita plugin sadly doesn't work with the original AUTOMATIC1111 webui through its gradio API. So it would be absolutely fantastic to have a Krita plugin that can hook into the gradio API of sd-webui and/or the AUTOMATIC1111 webui.
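As a sketch of what such a hook could look like: assuming the webui is launched with the --api flag, a minimal txt2img request could be something like the following (endpoint paths and payload fields may differ between webui versions):

```python
import base64, io
import requests
from PIL import Image

# assumes a webui instance listening on the default port with --api enabled
url = "http://127.0.0.1:7860/sdapi/v1/txt2img"
payload = {"prompt": "a watercolor landscape", "steps": 20,
           "width": 512, "height": 512}
resp = requests.post(url, json=payload)
resp.raise_for_status()
# generated images come back as base64-encoded PNG strings
png = base64.b64decode(resp.json()["images"][0])
Image.open(io.BytesIO(png)).save("result.png")
```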