Checklist
- The issue exists after disabling all extensions
- The issue exists on a clean installation of webui
- The issue is caused by an extension, but I believe it is caused by a bug in the webui
- The issue exists in the current version of the webui
- The issue has not been reported before recently
- The issue has been reported before but has not been fixed yet
What happened?
Error after trying to generate an image
Steps to reproduce the problem
1. Execute webui.bat
2. Type a prompt and click Generate
3. Generation fails with RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same (a minimal repro sketch follows these steps)
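For context, the same kind of dtype mismatch can be reproduced outside the webui. The sketch below is plain PyTorch on CPU (an illustration, not webui code): a Conv2d cast to float16 receives a float32 input, and the convolution rejects the mismatch between the input and the layer's weights/bias, just like the failing call at the bottom of the traceback in the console logs.

```python
# Minimal sketch, plain PyTorch on CPU (illustration only, not webui code):
# a float32 input fed to a float16 layer triggers the same class of error as above.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3).half()  # weights and bias cast to float16, like apply half()
x = torch.randn(1, 3, 64, 64)                 # input tensor left in float32

try:
    conv(x)
except RuntimeError as e:
    # Expected to print something like:
    # "Input type (float) and bias type (c10::Half) should be the same"
    print(e)
```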
What should have happened?
WebUI should have generated an image; instead, it raised the error above.
What browsers do you use to access the UI?
Google Chrome
Sysinfo
Console logs
D:\Stable Diffusion AI\stable-diffusion-webui-directml>git pull
Already up to date.
venv "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-52-g920f4249
Commit hash: 920f42497fb7d11a7f89a70b2e9ac6604bc4d961
FATAL: No ROCm agent was found. Please make sure that graphics driver is installed and up to date.
W1222 20:19:15.903916 2080 venv\Lib\site-packages\torch\distributed\elastic\multiprocessing\redirects.py:29] NOTE: Redirects are currently not supported in Windows or MacOs.
D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
rank_zero_deprecation(
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --theme dark
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\amp\autocast_mode.py:266: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
warnings.warn(
ONNX: version=1.23.2 provider=CPUExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
Loading weights [6ce0161689] from D:\Stable Diffusion AI\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\Stable Diffusion AI\stable-diffusion-webui-directml\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 25.8s (prepare environment: 37.4s, initialize shared: 3.7s, load scripts: 1.0s, initialize extra networks: 0.4s, create ui: 1.0s, gradio launch: 0.5s).
Applying attention optimization: sub-quadratic... done.
Model loaded in 30.2s (load weights from disk: 1.6s, create model: 1.9s, apply weights to model: 21.7s, apply half(): 4.0s, hijack: 0.1s, calculate empty prompt: 0.7s).
0%| | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(esk5okrtw6keytt)', <gradio.routes.Request object at 0x0000028DDEB85E70>, 'man', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
res = process_images_inner(p)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\processing.py", line 1083, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\processing.py", line 1441, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 233, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 272, in launch_sampling
return func()
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 233, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\sd_samplers_cfg_denoiser.py", line 249, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 34, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\sd_hijack_unet.py", line 50, in apply_model
result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 36, in __call__
return self.__orig_func(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\modules\sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1857, in _call_impl
return inner()
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1805, in inner
result = forward_call(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
x = layer(x)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
return originals.Conv2d_forward(self, input)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 554, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\Stable Diffusion AI\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 549, in _conv_forward
return F.conv2d(
RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same
Additional information
I'm new to this platform and don't know much about any of this. If this problem can be solved, please explain how.
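A possible direction, inferred from the log above rather than verified on this setup: the startup lines ("FATAL: No ROCm agent was found", "Torch not compiled with CUDA enabled", ONNX using CPUExecutionProvider) suggest the run fell back to CPU, while the checkpoint was still converted to float16 ("apply half(): 4.0s"), so the UNet's Conv2d weights/bias are Half but the incoming tensor stays float32. A commonly suggested workaround in that situation is to keep the model in float32, e.g. by adding `--no-half` (optionally `--precision full`) to `COMMANDLINE_ARGS` in `webui-user.bat`, or, on this DirectML fork, launching with `--use-directml` so a GPU backend is used instead of the CPU fallback; treat these flags as assumptions to try, not a confirmed fix. In terms of the sketch above, the mismatch goes away once the layer dtype matches the input:

```python
# Continuing the earlier sketch: keeping the layer in float32 (roughly what --no-half
# does for the whole model, as I understand it) lets the float32 input pass.
conv = conv.float()
out = conv(x)
print(out.dtype)  # torch.float32
```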