Tensor shape mismatch for SD3 basic pipeline with xformers #9681
-
Hi, I'm new to this. Here is my code:

```python
import torch
from diffusers import StableDiffusion3Pipeline

path = "/mnt/data"
pipe = StableDiffusion3Pipeline.from_pretrained(path, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention()

image = pipe(
    "A cat holding a sign that says Hello World",
    negative_prompt="",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("./output/x.png")
```

The error msg:
After removing the line `pipe.enable_xformers_memory_efficient_attention()`, everything works perfectly as expected. I'm not sure if …
-
I think we will be removing xformers support going forward in a future release, so support for it is not guaranteed to work out-of-the-box with newer models. This does look xformers-specific, so I suspect it is due to us not having an SD3-specific xformers attention processor. It should be quite simple to create your own and use it by modifying the code here. Once you have created the attention processor, you can set it using:

```python
pipe.transformer.set_attn_processor(MyCustomXformersSD3AttnProcessor())
```
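
For reference, here is a minimal sketch of what such a processor could look like. It is modeled on the default SD3 joint attention processor (`JointAttnProcessor2_0` in diffusers' `attention_processor.py`), with the `F.scaled_dot_product_attention` call swapped for `xformers.ops.memory_efficient_attention`. The class name `XFormersJointAttnProcessor` is just a placeholder standing in for the `MyCustomXformersSD3AttnProcessor` above; the attribute names (`add_q_proj`, `to_add_out`, `context_pre_only`) and the optional QK-norm handling vary between diffusers versions, so treat this as a starting point to check against your installed version, not an official implementation.

```python
from typing import Optional

import torch
import xformers.ops
from diffusers.models.attention_processor import Attention


class XFormersJointAttnProcessor:
    """Sketch of an xformers-backed joint attention processor for SD3.

    Modeled on diffusers' JointAttnProcessor2_0; verify the attribute
    names against your installed diffusers version before relying on it.
    QK-norm handling present in newer diffusers versions is omitted here.
    """

    def __call__(
        self,
        attn: Attention,
        hidden_states: torch.Tensor,
        encoder_hidden_states: Optional[torch.Tensor] = None,
        attention_mask: Optional[torch.Tensor] = None,
        *args,
        **kwargs,
    ) -> torch.Tensor:
        residual = hidden_states
        batch_size = hidden_states.shape[0]

        # Projections for the image (latent) tokens.
        query = attn.to_q(hidden_states)
        key = attn.to_k(hidden_states)
        value = attn.to_v(hidden_states)

        # Projections for the text (context) tokens, concatenated along the
        # sequence dimension so both streams attend jointly.
        if encoder_hidden_states is not None:
            query = torch.cat([query, attn.add_q_proj(encoder_hidden_states)], dim=1)
            key = torch.cat([key, attn.add_k_proj(encoder_hidden_states)], dim=1)
            value = torch.cat([value, attn.add_v_proj(encoder_hidden_states)], dim=1)

        inner_dim = key.shape[-1]
        head_dim = inner_dim // attn.heads

        # xformers expects (batch, seq_len, num_heads, head_dim), so no
        # transpose(1, 2) as in the scaled_dot_product_attention path.
        query = query.view(batch_size, -1, attn.heads, head_dim)
        key = key.view(batch_size, -1, attn.heads, head_dim)
        value = value.view(batch_size, -1, attn.heads, head_dim)

        hidden_states = xformers.ops.memory_efficient_attention(query, key, value)
        hidden_states = hidden_states.reshape(batch_size, -1, attn.heads * head_dim)
        hidden_states = hidden_states.to(query.dtype)

        if encoder_hidden_states is not None:
            # Split the joint output back into image and text streams.
            hidden_states, encoder_hidden_states = (
                hidden_states[:, : residual.shape[1]],
                hidden_states[:, residual.shape[1]:],
            )
            if not attn.context_pre_only:
                encoder_hidden_states = attn.to_add_out(encoder_hidden_states)

        # Output projection and dropout for the image stream.
        hidden_states = attn.to_out[0](hidden_states)
        hidden_states = attn.to_out[1](hidden_states)

        if encoder_hidden_states is not None:
            return hidden_states, encoder_hidden_states
        return hidden_states


# Hypothetical usage, assuming the `pipe` from the snippet above:
pipe.transformer.set_attn_processor(XFormersJointAttnProcessor())
```

The main difference from the stock processor is the tensor layout: `F.scaled_dot_product_attention` takes `(batch, heads, seq, head_dim)`, while `xformers.ops.memory_efficient_attention` takes `(batch, seq, heads, head_dim)`, which is likely where the shape mismatch comes from when the generic xformers processor is applied to SD3's joint attention.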