
Cannot generate in Latent + Prompt mode #329

Open
elistys opened this issue May 26, 2024 · 3 comments

Comments

@elistys

elistys commented May 26, 2024

In Prompt mode, an image is generated when using Attention, but with Latent the following error occurs after the final step and no image is generated. This happens with both base prompts and common prompts.

```
RuntimeError: Input type (c10::Half) and bias type (float) should be the same.
```
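This is PyTorch's standard complaint when a convolution is called with mixed-precision tensors. A minimal sketch that reproduces the same message (assuming the mechanism is that the LoRA path ends up with a float16 input and weight while the layer's bias stays float32):

```python
import torch
import torch.nn.functional as F

# Input and weight in float16, bias left in float32: the dtypes disagree.
x = torch.randn(1, 4, 8, 8, dtype=torch.float16)
weight = torch.randn(4, 4, 1, 1, dtype=torch.float16)
bias = torch.randn(4, dtype=torch.float32)

F.conv2d(x, weight, bias)
# RuntimeError: Input type (c10::Half) and bias type (float) should be the same
```

Casting all three tensors to one dtype makes the call succeed, which fits the observation below that removing the LoRAs avoids the error.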

However, if I delete all LoRAs (18 items) from the prompt field, generation succeeds even with Latent.

The launch options are as follows:

```
# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--skip-torch-cuda-test --upcast-sampling --no-half-vae
```
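A commonly suggested stopgap for half/float mismatches, offered here as an assumption rather than a confirmed fix for this issue, is to run the whole pipeline in full precision with --no-half, at the cost of speed and memory:

```
export COMMANDLINE_ARGS="--skip-torch-cuda-test --no-half --no-half-vae"
```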

I'm a non-technical end user, so please let me know if there is any solution.

Environment
Web-UI version: A1111 v1.7.0
SD Version: 1.5
Regional Prompter: 59d68e6
M2 Mac

@cerega66

I also hit this problem.
I tried to reproduce the example from #221 (comment) and got the same error.

Screenshot of settings: [attached image]

My log:

```
*** Error completing request  1.10s/it]
*** Arguments: ('task(g0lxpzh0snqqk4n)', <gradio.routes.Request object at 0x000001F733DD57B0>, 'anime illustration of masterpiece, beautiful 2girls, full-body,BREAK\nKonosuba aqua <lora:aqua_xl_v1:0.5> BREAK\nKusanagi Motoko ghost in the shell <lora:kusanagi_motoko:0.5>', '', [], 1, 1, 7, 1024, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 30, 'Euler a', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, True, False, 'Latent', [False], '0', '0', '0.4', None, '0', '0', False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, [], 30, '', 4, [], 1, '', '', '', '') {}
    Traceback (most recent call last):
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\modules\processing.py", line 845, in process_images
        res = process_images_inner(p)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\modules\processing.py", line 993, in process_images_inner
        x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\modules\processing.py", line 633, in decode_latent_batch
        sample = decode_first_stage(model, batch[i:i + 1])[0]
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\modules\sd_samplers_common.py", line 76, in decode_first_stage
        return samples_to_images_tensor(x, approx_index, model)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\modules\sd_samplers_common.py", line 58, in samples_to_images_tensor
        x_sample = model.decode_first_stage(sample.to(model.first_stage_model.dtype))
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\repositories\generative-models\sgm\models\diffusion.py", line 121, in decode_first_stage
        out = self.first_stage_model.decode(z)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\repositories\generative-models\sgm\models\autoencoder.py", line 314, in decode
        z = self.post_quant_conv(z)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\extensions-builtin\Lora\networks.py", line 514, in network_Conv2d_forward
        return network_forward(self, input, originals.Conv2d_forward)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\extensions-builtin\Lora\networks.py", line 478, in network_forward
        y = original_forward(org_module, input)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "R:\SD_2_ui\stable-diffusion-webui-1.9.3\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Input type (struct c10::Half) and bias type (float) should be the same
```
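The trace shows the mismatch surfacing inside the LoRA extension's network_Conv2d_forward wrapper around the VAE's post_quant_conv, i.e. a half-precision latent meets a float32 bias at that wrapped layer. The usual defensive workaround is to cast the layer's parameters to the input's dtype before the convolution; a rough sketch of that idea (a hypothetical helper, not the actual networks.py code):

```python
import torch
import torch.nn.functional as F
from torch import nn

def conv2d_in_input_dtype(conv: nn.Conv2d, x: torch.Tensor) -> torch.Tensor:
    """Run a Conv2d with its weight and bias cast to the input's dtype,
    sidestepping 'Input type (...) and bias type (...) should be the same'."""
    weight = conv.weight.to(x.dtype)
    bias = conv.bias.to(x.dtype) if conv.bias is not None else None
    return F.conv2d(x, weight, bias, conv.stride, conv.padding,
                    conv.dilation, conv.groups)
```

Casting per call keeps the stored parameters untouched, unlike conv.half(), which would permanently downcast the VAE's weights.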

---

Environment
Web-UI version: A1111 v1.9.3
COMMANDLINE_ARGS=--xformers --opt-sdp-attention --upcast-sampling --opt-channelslast --api --no-half-vae --ckpt-dir
SD Version: SDXL
Regional Prompter: 4802fac
Windows 10
Ryzen 5600X
RAM: 64 GB
RTX 3090

@Pevernow

+1

@titlestad

Any chance of a fix?
