FLUX.1 Fill [dev] is a 12 billion parameter rectified flow transformer capable of filling areas in existing images based on a text description.
Here is how to call it through the diffusers API:
```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

img = load_image("/raid/yiyi/flux-new/assets/cup.png")
mask = load_image("/raid/yiyi/flux-new/assets/cup_mask.png")

repo_id = "diffusers-internal-dev/dummy-fill"
pipe = FluxFillPipeline.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to CPU; remove this if you have enough GPU memory

image = pipe(
    prompt="a white paper cup",
    image=img,
    mask_image=mask,
    height=1632,
    width=1232,
    guidance_scale=30,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("yiyi_test_2_out.png")
```
If quantization of FLUX.1 Fill could be supported, it would be of considerable help.
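In the meantime, one stopgap that diffusers itself already offers (in versions >= 0.31, with bitsandbytes installed) is 4-bit NF4 quantization of the Fill transformer. This is not the SVDQuant/deepcompressor path this issue is asking for, just a rough sketch of how a quantized transformer would plug into `FluxFillPipeline`; the repo id is the official FLUX.1-Fill-dev checkpoint linked below.

```python
import torch
from diffusers import BitsAndBytesConfig, FluxFillPipeline, FluxTransformer2DModel

repo_id = "black-forest-labs/FLUX.1-Fill-dev"

# Quantize only the 12B transformer to 4-bit NF4; the VAE and text encoders stay in bf16.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    repo_id,
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxFillPipeline.from_pretrained(
    repo_id, transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # still useful, since the other components are unquantized
```

The rest of the call (prompt, image, mask_image, etc.) is the same as in the example above; bitsandbytes mainly saves memory rather than latency, which is why a proper SVDQuant/deepcompressor path would still be valuable.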
It seems that to support diffusers-style formats, deepcompressor would have to run smooth quantization on this transformer model. However, because the control conditions differ, the existing code written for prompt-conditioned image generation cannot be reused directly, so would deepcompressor need to be modified to support FLUX Fill? What are your specific plans? Is there anything I can do to help?
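For context on why the existing calibration code cannot be reused as-is: the Fill transformer's input concatenates the image latents with the masked-image latents and the packed mask, so its input width differs from the text-to-image model's. A minimal sketch that compares only the published configs (no weights downloaded); the 64 vs. 384 values are my reading of the configs and should be double-checked:

```python
from diffusers import FluxTransformer2DModel

# Load only the transformer configs to compare input widths.
t2i_cfg = FluxTransformer2DModel.load_config(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer"
)
fill_cfg = FluxTransformer2DModel.load_config(
    "black-forest-labs/FLUX.1-Fill-dev", subfolder="transformer"
)

# Expected: 64 for the text-to-image model, 384 for Fill
# (packed latents + packed masked-image latents + packed mask).
print(t2i_cfg["in_channels"], fill_cfg["in_channels"])
```

So any calibration/export code that assumes the base model's input layout would need to be extended to feed the extra mask and masked-image channels.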
huggingface/diffusers#9985
https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev