New Release #3427
vladmandic announced in Announcements
SD.Next Release 2024-09-13
Just under two weeks since the last SD.Next release, here's another update!
Highlights
- Major refactor of FLUX.1 support: try enabling NNCF for on-the-fly quantization/compression!
- A few image-related goodies...
- A few video-related goodies, with support for text-to-video and video-to-video: create videos that travel between different prompts, even at long video lengths!
- And a few other updates, plus tons of other items and fixes!
For more details see: Changelog | ReadMe | Wiki | Discord
Details
Major refactor of FLUX.1 support:
- model load will load selected components first and then initialize the model using the pre-loaded components
- components that were not pre-loaded will be downloaded and initialized as needed
- as usual, components can also be loaded after the initial model load
- note: use of transformer/unet is recommended as those are FLUX.1 finetunes
- note: manually selecting the vae and text-encoder is not recommended
- note: mixing and matching different quantizations for different components can lead to unexpected errors
- support for InstantX/Shakker-Labs ControlNet models, including Union-Pro
  - note that FLUX ControlNet models are large, up to 6.6GB on top of the already large base model!
  - as such, you may need to use offloading:sequential, which is not as fast but uses far less memory
  - when using a union model, you must also select the control mode in the control unit
  - FLUX does not yet support img2img, so to use ControlNet you need to set the ControlNet input via the control unit override
- not recommended due to massive duplication of components, but added due to popular demand: each such model is 20-32GB in size vs ~11GB for a typical unet fine-tune
- FLUX.1 can be used as a refiner for other models such as SD/SDXL
  - simply load an SD/SDXL model as base and a FLUX model as refiner, then use the usual refiner workflow
  - note: FLUX may require higher denoising strength than typical SD/SDXL models
  - note: img2img is not yet supported with ControlNet
- this brings supported quants to: nf4/fp8/fp4/qint8/qint4
- fused projections can speed up generation; enable via settings -> compute -> fused projections
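To illustrate what the fused-projections setting does, here is a minimal NumPy sketch (not SD.Next's actual code) of the underlying idea: an attention layer's separate Q/K/V projection matrices are concatenated into one weight so three matmuls become a single larger one, which is usually friendlier to the GPU.

```python
import numpy as np

# Illustrative sketch only: fuse separate Q/K/V projections into one matmul.
rng = np.random.default_rng(0)
d = 64
x = rng.standard_normal((8, d))               # 8 tokens, d-dim hidden states
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))

# unfused: three separate projections
q, k, v = x @ wq, x @ wk, x @ wv

# fused: one concatenated weight, one matmul, then split the result
w_fused = np.concatenate([wq, wk, wv], axis=1)  # shape (d, 3d)
qkv = x @ w_fused
q2, k2, v2 = np.split(qkv, 3, axis=1)

assert np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2)
```

The outputs are numerically identical; only the number of kernel launches changes, which is why it shows up as a speed toggle rather than a quality one.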
Other improvements & Fixes:
- create videos that travel between different prompts at different steps!
- allows for creation of much longer videos; automatically enabled if frames > 16
- new resize method based on seam-carving: allows img2img/inpaint even at massively different aspect ratios without distortions!
  - simply select it as the resize method in the img2img or control tabs
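For readers unfamiliar with seam-carving, here is a minimal sketch of the idea (illustrative only; SD.Next's implementation differs): instead of squashing the whole image, repeatedly remove the lowest-energy vertical seam so content-rich regions are preserved while the image narrows.

```python
import numpy as np

# Toy seam-carving: remove one low-energy vertical seam from a grayscale image.
def energy(img):
    # gradient-magnitude energy map
    gy, gx = np.gradient(img.astype(float))
    return np.abs(gx) + np.abs(gy)

def find_seam(e):
    # dynamic programming: cumulative minimum energy, top row to bottom row
    h, w = e.shape
    cost = e.copy()
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    # backtrack the cheapest seam from the bottom
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam

def remove_seam(img, seam):
    h, w = img.shape
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1)

img = np.arange(48, dtype=float).reshape(6, 8)
narrower = remove_seam(img, find_seam(energy(img)))
print(narrower.shape)  # (6, 7)
```

Repeating this until the target width is reached changes aspect ratio without the uniform-stretch distortion a plain resize would introduce.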
- create HDR images from multiple exposures via latent-space modifications during generation
  - use via scripts -> hdr
  - the option "save hdr images" creates images in standard 8bit/channel (hdr-effect) and 16bit/channel (full-hdr) PNG formats
  - the UI result is always the 8bit/channel hdr-effect image plus a grid of the original images used to create the HDR
  - the grid image can be disabled via settings -> user interface -> show grid
  - the actual full-hdr image is not displayed in the UI, only optionally saved to disk
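As a rough intuition for the merge step, here is a hedged NumPy sketch (not SD.Next's actual code) of fusing several 8-bit exposures into a single 16-bit/channel array: each pixel is weighted by how well-exposed it is, in the spirit of exposure-fusion methods, then the weighted average is written out at 16-bit depth.

```python
import numpy as np

# Illustrative exposure fusion: weight pixels near mid-gray highest, then
# normalize and quantize to 16 bits per channel (the "full-hdr" style output).
def fuse_exposures(exposures, sigma=0.2):
    stack = np.stack([e.astype(float) / 255.0 for e in exposures])
    # well-exposedness weight: Gaussian centered on mid-gray (0.5)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-8
    fused = (weights * stack).sum(axis=0) / weights.sum(axis=0)
    return (fused * 65535).astype(np.uint16)

dark = np.full((4, 4), 32, dtype=np.uint8)     # under-exposed frame
mid = np.full((4, 4), 128, dtype=np.uint8)     # normal exposure
bright = np.full((4, 4), 224, dtype=np.uint8)  # over-exposed frame
hdr = fuse_exposures([dark, mid, bright])
print(hdr.dtype, hdr.shape)  # uint16 (4, 4)
```

The under- and over-exposed frames contribute shadow and highlight detail, while the normal exposure dominates the mid-tones.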
- color grading using industry-standard .cube LUTs!
  - enable via scripts -> color-grading
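For context, a `.cube` LUT is just a text file: a `LUT_3D_SIZE` header followed by rows of RGB triples, with red varying fastest. Here is a hedged sketch (names are mine, not SD.Next's) that parses one and applies it with nearest-neighbor lookup; production code would interpolate trilinearly.

```python
import numpy as np

def parse_cube(text):
    # Minimal .cube parser: read LUT_3D_SIZE, collect the RGB rows.
    size, rows = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("TITLE"):
            continue
        if line.startswith("LUT_3D_SIZE"):
            size = int(line.split()[1])
        elif line[0].isdigit() or line[0] == "-":
            rows.append([float(v) for v in line.split()])
    # .cube data order: red varies fastest, then green, then blue
    return np.array(rows).reshape(size, size, size, 3), size

def apply_lut(rgb, lut, size):
    # nearest-neighbor lookup (a real implementation interpolates trilinearly)
    idx = np.clip(np.rint(rgb * (size - 1)).astype(int), 0, size - 1)
    return lut[idx[..., 2], idx[..., 1], idx[..., 0]]  # indexed as [b, g, r]

# tiny 2x2x2 identity LUT for demonstration
cube_text = """LUT_3D_SIZE 2
0 0 0
1 0 0
0 1 0
1 1 0
0 0 1
1 0 1
0 1 1
1 1 1
"""
lut, size = parse_cube(cube_text)
pixels = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
print(apply_lut(pixels, lut, size))  # identity LUT leaves colors unchanged
```

Real grading LUTs simply fill the same table with non-identity values, so the lookup maps each input color to the graded one.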
- not just limited to width/height/scale
- simply select in scripts -> prompt enhance
  - uses the gokaygokay/Flux-Prompt-Enhance model
- can be used to speed up TAESD decoding by reducing the number of ops
  - e.g. if generating a 1024px image, reducing layers by 1 will result in the preview being 512px
  - set via settings -> live preview -> taesd decode layers
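The resolution arithmetic follows from the decoder's structure. Here is a toy illustration (not the real TAESD decoder): each upsampling stage doubles resolution, so skipping one stage halves the decoded preview, e.g. 1024px down to 512px.

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbor 2x upsample of an (h, w) array
    return x.repeat(2, axis=0).repeat(2, axis=1)

def decode(latent, stages=3):
    # toy decoder: each stage doubles spatial resolution
    x = latent
    for _ in range(stages):
        x = upsample2x(x)
    return x

latent = np.zeros((128, 128))          # latent for a 1024px image (8x factor)
print(decode(latent, stages=3).shape)  # (1024, 1024) full-size preview
print(decode(latent, stages=2).shape)  # (512, 512) smaller, faster preview
```

Dropping a layer trades preview sharpness for fewer operations per live-preview update.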
- applies to all upscale options, including the refine workflow
- option: extra_networks_fetch; enable/disable in settings -> networks
- used by flow-matching samplers to adjust between structure and details
- improves quality of the flow-matching samplers
- applies to all models that use the t5 transformer
- diffusers from the main branch, no longer tied to a release
- requirements