Checklist
- The issue exists after disabling all extensions
- The issue exists on a clean installation of webui
- The issue is caused by an extension, but I believe it is caused by a bug in the webui
- The issue exists in the current version of the webui
- The issue has not been reported before recently
- The issue has been reported before but has not been fixed yet
What happened?
Trying to generate an image fails immediately with RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same.
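The same kind of dtype mismatch can be reproduced outside the webui. This is only an illustrative sketch in plain PyTorch (not the webui code path): a Conv2d whose parameters were converted with .half() rejects float32 input.

import torch
import torch.nn as nn

conv = nn.Conv2d(4, 4, 3).half()   # parameters in float16, like the UNet after the "apply half()" step
x = torch.randn(1, 4, 16, 16)      # activations still in float32
conv(x)                            # raises a RuntimeError about mismatched input vs. weight/bias dtypes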
Steps to reproduce the problem
- Run webui.bat
- Enter a prompt and start a generation
- The run fails with RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same
What should have happened?
The WebUI should have generated an image; instead, it raised the RuntimeError above.
What browsers do you use to access the UI?
Other, Google Chrome
Sysinfo
Console logs
venv "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-51-ge61adddd
Commit hash: e61adddd295d3438036a87460cde6f437e26b559
ROCm: agents=['gfx803']
ROCm: version=None, using agent gfx803
ZLUDA support: experimental
Failed to install ZLUDA: 'NoneType' object is not subscriptable
Using CPU-only torch
W1216 02:45:44.677986 20024 venv\Lib\site-packages\torch\distributed\elastic\multiprocessing\redirects.py:29] NOTE: Redirects are currently not supported in Windows or MacOs.
C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
rank_zero_deprecation(
Launching Web UI with arguments:
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\amp\autocast_mode.py:266: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
warnings.warn(
ONNX: version=1.23.0 provider=CPUExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Loading weights [6ce0161689] from C:\Users\Miguel\stable-diffusion-webui-amdgpu\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\Users\Miguel\stable-diffusion-webui-amdgpu\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 12.7s (prepare environment: 18.2s, initialize shared: 1.9s, load scripts: 0.5s, create ui: 0.6s, gradio launch: 0.3s).
Applying attention optimization: InvokeAI... done.
Model loaded in 17.4s (load weights from disk: 0.7s, create model: 0.7s, apply weights to model: 14.3s, apply half(): 0.4s, load VAE: 0.1s, calculate empty prompt: 1.0s).
0%| | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(t0m08iidph7n921)', <gradio.routes.Request object at 0x000001F8C4AD37F0>, 'dragon', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\processing.py", line 849, in process_images
res = process_images_inner(p)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\processing.py", line 1083, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\processing.py", line 1441, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\sd_samplers_kdiffusion.py", line 233, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\sd_samplers_common.py", line 272, in launch_sampling
return func()
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\sd_samplers_kdiffusion.py", line 233, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\sd_samplers_cfg_denoiser.py", line 249, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 34, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\sd_hijack_unet.py", line 50, in apply_model
result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 36, in __call__
return self.__orig_func(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\modules\sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
x = layer(x)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
return originals.Conv2d_forward(self, input)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\conv.py", line 554, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\Miguel\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\conv.py", line 549, in _conv_forward
return F.conv2d(
RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same
Additional information
No response
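A hedged reading of the console log above: ZLUDA failed to install and torch fell back to CPU-only mode, yet the checkpoint was still converted to half precision ("apply half(): 0.4s"), so fp16 parameters meet fp32 activations inside F.conv2d. A minimal illustrative sketch of the float32 route (plain PyTorch, not the webui's actual code path; keeping the whole model in float32 is roughly what the --no-half / --precision full launch options are intended to do):

import torch
import torch.nn as nn

conv = nn.Conv2d(4, 4, 3).half()   # fp16 parameters, as after the model's half() conversion
x = torch.randn(1, 4, 16, 16)      # fp32 activations, as produced on the CPU fallback path

# conv(x) would raise the mismatched-dtype RuntimeError shown above;
# converting the module back to float32 makes the dtypes agree again
out = conv.float()(x)
print(out.dtype)                   # torch.float32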