[Bug] qwen image 2509 error #734

@vvhitevvizard

Description

Describe the Bug

I use ComfyUI with the Nunchaku FP4 Qwen Image Edit 2509 model. It was working before, but it recently stopped.
I get this error:

Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load WanVAE
loaded completely; 11349.59 MB usable, 242.03 MB loaded, full load: True
Found quantization metadata version 1
Using MixedPrecisionOps for text encoder
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load QwenImageTEModel_
loaded completely; 13258.65 MB usable, 7910.29 MB loaded, full load: True
Enabling CPU offload
WARNING, you accessed scaled_fp8 from the model config object which doesn't exist. Please fix your code.
model weight dtype torch.bfloat16, manual cast: None
model_type FLUX
Requested to load NunchakuQwenImage
  0%|          | 0/4 [00:00<?, ?it/s]
!!! Exception during processing !!! 'list' object has no attribute 'dtype'
Traceback (most recent call last):
  File "D:\_ai\ComfyUI\execution.py", line 516, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
  File "D:\_ai\ComfyUI\execution.py", line 330, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
  File "D:\_ai\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 165, in async_map_node_over_list_with_metadata
    results = await original_map_node_over_list(
    ...<2 lines>...
    )
  File "D:\_ai\ComfyUI\execution.py", line 304, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "D:\_ai\ComfyUI\execution.py", line 292, in process_inputs
    result = f(**inputs)
  File "D:\_ai\ComfyUI\nodes.py", line 1538, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "D:\_ai\ComfyUI\nodes.py", line 1505, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\_ai\ComfyUI\comfy\sample.py", line 60, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 1178, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 1068, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 1050, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
  File "D:\_ai\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 994, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 980, in inner_sample
    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "D:\_ai\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 752, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "D:\_ai\python_embeded\Lib\torch_cu\torch\utils\_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
  File "D:\_ai\ComfyUI\comfy\k_diffusion\sampling.py", line 199, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 401, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 953, in __call__
    return self.outer_predict_noise(*args, **kwargs)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 960, in outer_predict_noise
    ).execute(x, timestep, model_options, seed)
  File "D:\_ai\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 963, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 381, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 206, in calc_cond_batch
    return _calc_cond_batch_outer(model, conds, x_in, timestep, model_options)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 214, in _calc_cond_batch_outer
    return executor.execute(model, conds, x_in, timestep, model_options)
  File "D:\_ai\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "D:\_ai\ComfyUI\comfy\samplers.py", line 326, in calc_cond_batch
    output = model.apply_model(input_x, timestep, **c).chunk(batch_chunks)
  File "D:\_ai\ComfyUI\comfy\model_base.py", line 162, in apply_model
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
    ...<2 lines>...
    comfy.patcher_extension.get_all_wrappers(comfy.patcher_extension.WrappersMP.APPLY_MODEL, transformer_options)
    ).execute(x, t, c_concat, c_crossattn, control, transformer_options, **kwargs)
  File "D:\_ai\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "D:\_ai\ComfyUI\comfy\model_base.py", line 204, in _apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds)
  File "D:\_ai\python_embeded\Lib\torch_cu\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\_ai\python_embeded\Lib\torch_cu\torch\nn\modules\module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\_ai\ComfyUI\comfy\ldm\qwen_image\model.py", line 411, in forward
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
    ...<2 lines>...
    comfy.patcher_extension.get_all_wrappers(comfy.patcher_extension.WrappersMP.DIFFUSION_MODEL, transformer_options)
    ).execute(x, timestep, context, attention_mask, ref_latents, additional_t_cond, transformer_options, **kwargs)
  File "D:\_ai\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "D:\_ai\ComfyUI\custom_nodes\ComfyUI-nunchaku\models\qwenimage.py", line 726, in _forward
    else self.time_text_embed(timestep, guidance, hidden_states)
  File "D:\_ai\python_embeded\Lib\torch_cu\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\_ai\python_embeded\Lib\torch_cu\torch\nn\modules\module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\_ai\ComfyUI\comfy\ldm\qwen_image\model.py", line 81, in forward
    timesteps_emb = self.timestep_embedder(timesteps_proj.to(dtype=hidden_states.dtype))
AttributeError: 'list' object has no attribute 'dtype'
Prompt executed in 28.31 seconds
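The failing line in the traceback, `timesteps_proj.to(dtype=hidden_states.dtype)`, suggests that `hidden_states` reached `comfy/ldm/qwen_image/model.py` as a Python list of tensors rather than a single tensor. A minimal sketch of that mismatch (variable names are taken from the traceback; the shapes are made up for illustration):

```python
import torch

timesteps_proj = torch.randn(4, 256)

# When hidden_states is a Tensor, reading .dtype works as expected:
hidden_states = torch.randn(4, 64, dtype=torch.bfloat16)
emb = timesteps_proj.to(dtype=hidden_states.dtype)
assert emb.dtype == torch.bfloat16

# But if a caller hands over a *list* of tensors, the attribute access
# itself fails, producing exactly the reported error:
hidden_states_list = [torch.randn(4, 64, dtype=torch.bfloat16)]
try:
    timesteps_proj.to(dtype=hidden_states_list.dtype)
except AttributeError as e:
    print(e)  # 'list' object has no attribute 'dtype'
```

This points at the call in `ComfyUI-nunchaku/models/qwenimage.py` (`self.time_text_embed(timestep, guidance, hidden_states)`) passing a list where the ComfyUI embedder expects a tensor, likely due to a recent ComfyUI or Nunchaku API change.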

Environment

Windows 11, Python 3.13, PyTorch 2.9, CUDA 13.0 (cu130 build)

Reproduction Steps

The error is reproduced with Nunchaku's default workflow.
