Full model always needed? #29
Is the full model needed before adding the quantization? It would be nice if it wasn't, but maybe it's hard to avoid.
At the moment the full model is downloaded while the pipeline is loading, even though I have already prepared the quantized model locally.
Comments
It is not necessary. We will avoid loading it in the next release.
Can we delete the original BFL flux models after the pipeline has loaded? Or are they required for some part of the quantization setup/execution?
I could close this, but it might be good to leave it up for others who are wondering, until the next release? :)
Probably not for now. Every time you load the model, it will re-download it again.
Confirmed, yes it does :S Great, thanks for your work! It's quite impressive!
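For anyone hitting the re-download described above, here is a minimal sketch of how to check whether the full BFL base weights are already in the local Hugging Face cache without triggering a download. This is not part of this project's API; it only uses `huggingface_hub`, and it assumes the base model is the standard `black-forest-labs/FLUX.1-dev` repo (adjust the repo id to whatever base your setup actually uses).

```python
from huggingface_hub import snapshot_download

# Assumed repo id for the full BFL base model; change it to match your setup.
BASE_REPO = "black-forest-labs/FLUX.1-dev"

try:
    # local_files_only=True resolves the snapshot purely from the local cache
    # and never contacts the Hub, so it cannot trigger a (re)download.
    path = snapshot_download(BASE_REPO, local_files_only=True)
    print(f"Full base weights are already cached at: {path}")
except Exception:
    # huggingface_hub raises LocalEntryNotFoundError when the snapshot is
    # missing or incomplete; in that case loading the pipeline would pull
    # the full model again.
    print("Full base weights are not in the local cache; "
          "loading the pipeline would download them.")
```

Until the fix lands in the next release, this at least tells you up front whether a pipeline load is about to pull the full model.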