
fix(qwenimage): support index_timestep_zero for Qwen-Image-Edit #790

Open

avan06 wants to merge 2 commits into nunchaku-ai:dev from avan06:qwen-image-edit-timestep-zero

Conversation

avan06 commented Jan 25, 2026

Motivation

When using editing-based models (specifically Qwen-Image-Edit-2511), the index_timestep_zero reference method is required for positive conditioning to maintain high output quality. Without this specialized handling, the model fails to correctly anchor the reference image at the zero-noise state, leading to significantly degraded output quality or artifacts.

Currently, the qwenimage.py implementation in ComfyUI-nunchaku lacks the internal logic to handle this method. When index_timestep_zero is enabled, ComfyUI doubles the batch size of the timestep embeddings (temb), but the Nunchaku transformer blocks do not account for this batch doubling. This results in:

  1. Incorrect Broadcasting: The hidden states (batch 1) are incorrectly broadcast across the doubled temb (batch 2).
  2. Runtime Crash: The process eventually fails with RuntimeError: shape '[1, ...]' is invalid for input of size [403200] because the final output size is twice what the model's output reshaping expects.

This 'Shape is invalid' issue has also been reported by others in the Hugging Face community; see the discussion "Shape '[1, 52, 78, 16, 2, 2]' is invalid for input of size 519168 when using index_timestep_zero to fix image quality" for more context.
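The failure mode described above can be reproduced in miniature. This is an illustrative sketch only; the shapes are made up and much smaller than the model's real dimensions, but the mechanics are the same: a doubled-batch modulation tensor silently broadcasts the batch-1 hidden states up to batch 2, and the final reshape (which still expects batch 1) then fails.

```python
import torch

# Illustrative shapes, not the real model's: batch-1 hidden states meet a
# modulation tensor whose batch was doubled by index_timestep_zero handling.
hidden = torch.randn(1, 4, 8)    # hidden states, batch 1
temb_mod = torch.randn(2, 1, 8)  # modulation derived from doubled temb, batch 2

out = hidden * temb_mod          # silently broadcasts to shape (2, 4, 8)

try:
    out.reshape(1, 4, 8)         # downstream reshape still expects batch 1
except RuntimeError as e:
    print(e)                     # shape '[1, 4, 8]' is invalid for input of size 64
```

The element count has doubled (64 instead of 32), so the batch-1 reshape cannot succeed, which mirrors the `invalid for input of size [403200]` error at full scale.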

Modifications

fix(qwenimage): support index_timestep_zero for Qwen-Image-Edit

  • Implement timestep_zero_index handling in NunchakuQwenImageTransformerBlock.
  • Update _modulate to correctly split modulation parameters for doubled batches.
  • Ensure residual connections use split gates when Kontext reference method is used.
  • Fixes RuntimeError: 'shape is invalid for input of size...' during inference.
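As a rough illustration of the second bullet, the split-modulation logic could look like the sketch below. Everything here is an assumption for illustration (the function name, argument layout, and shapes are not the PR's actual code): when the modulation parameters arrive with batch 2*B while the hidden states have batch B, each half of the parameters is applied to its own token span instead of letting broadcasting double the batch.

```python
import torch

def modulate_with_doubled_temb(x, shift, scale, num_img_tokens):
    """Hypothetical sketch of doubled-batch modulation splitting.

    When index_timestep_zero is active, shift/scale arrive with batch 2*B
    while x keeps batch B. The first half modulates the noise-image tokens
    and the second half the zero-timestep reference tokens, so the output
    keeps batch B and the final reshape stays valid.
    """
    B = x.shape[0]
    if shift.shape[0] == 2 * B:
        shift_img, shift_ref = shift.chunk(2, dim=0)
        scale_img, scale_ref = scale.chunk(2, dim=0)
        img, ref = x[:, :num_img_tokens], x[:, num_img_tokens:]
        img = img * (1 + scale_img.unsqueeze(1)) + shift_img.unsqueeze(1)
        ref = ref * (1 + scale_ref.unsqueeze(1)) + shift_ref.unsqueeze(1)
        return torch.cat([img, ref], dim=1)
    # Ordinary case: batches already match, modulate as usual.
    return x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
```

The same splitting idea applies to the residual gates: each half of the gate tensor scales only its own token span, so the residual connection never broadcasts across the doubled batch.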

Testing

This modification has been verified in ComfyUI with the FluxKontextMultiReferenceLatentMethod node set to index_timestep_zero. The image generation task now completes successfully as expected.

Checklist

  • Code is formatted using Pre-Commit hooks (run pre-commit run --all-files).
  • Relevant unit tests are added in the tests/workflows directory following the guidance in the Contribution Guide.
  • Reference images are uploaded to PR comments and URLs are added to test_cases.json.
  • Additional test data (if needed) is registered in test_data/inputs.yaml.
  • Additional models (if needed) are registered in scripts/download_models.py and test_data/models.yaml.
  • Additional custom nodes (if needed) are added to .github/workflows/pr-test.yaml.
  • For reviewers: If you're only helping merge the main branch and haven't contributed code to this PR, please remove yourself as a co-author when merging.
  • Please feel free to join our Discord or WeChat to discuss your PR.

lmxyy (Collaborator) commented Jan 25, 2026

Could you include a test, as described in the contribution guide?

avan06 (Author) commented Jan 25, 2026

Hi, since the Contribution Guide is written for Linux, I ran into some difficulties running it on Windows. I’ll find some time to check how to run it properly. For now, I’ve just run pre-commit. Thank you.

avan06 force-pushed the qwen-image-edit-timestep-zero branch from fd73baf to 47c4cd8 on January 26, 2026 at 12:44
avan06 (Author) commented Jan 26, 2026

Not sure what I'm doing wrong, but I'm getting an AttributeError: function 'EmptyProcessWorkingSet' not found during the test.


(venv) G:\src\test-workspace>pytest -v tests/ -x -vv -k "nunchaku-qwen-image-edit-2509"
==================================================================================== test session starts ====================================================================================
platform win32 -- Python 3.12.10, pytest-9.0.2, pluggy-1.6.0 -- G:\src\ComfyUI-nunchaku-avan\venv\Scripts\python.exe
cachedir: .pytest_cache
rootdir: G:\src\test-workspace
plugins: anyio-4.12.1, jaxtyping-0.3.6, asyncio-1.3.0, rerunfailures-16.1
asyncio: mode=Mode.STRICT, debug=False, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
collected 25 items / 23 deselected / 2 selected

tests/test_workflows.py::test[nunchaku-qwen-image-edit-2509_0] FAILED [ 50%]

========================================================================================= FAILURES ==========================================================================================
___________________________________________________________________________ test[nunchaku-qwen-image-edit-2509_0] ___________________________________________________________________________

case = <tests.case.Case object at 0x0000021F18B11160>, client = <comfy.client.embedded_comfy_client.Comfy object at 0x0000021F1ACD3E90>

@pytest.mark.asyncio
@pytest.mark.parametrize("case", cases, ids=ids)
async def test(case: Case, client: Comfy):
    api_file = Path(__file__).parent / "workflows" / case.workflow_name / "api.json"
    # Read and parse the workflow file
    workflow = json.loads(api_file.read_text(encoding="utf8"))
    for key, value in case.inputs.items():
        set_nested_value(workflow, key, value)
    prompt = Prompt.validate(workflow)
    outputs = await client.queue_prompt(prompt)
    save_image_node_id = next(key for key in prompt if prompt[key].class_type == "SaveImage")
    path = outputs[save_image_node_id]["images"][0]["abs_path"]
    logger.info("Generated image path: %s", path)
  clip_iqa, lpips, psnr = compute_metrics(path, case.ref_image_url)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

tests\test_workflows.py:53:


tests\utils.py:24: in compute_metrics
metric = CLIPImageQualityAssessment(model_name_or_path="openai/clip-vit-large-patch14").to("cuda")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..\ComfyUI-nunchaku-avan\venv\Lib\site-packages\torchmetrics\multimodal\clip_iqa.py:198: in __init__
anchors = _clip_iqa_get_anchor_vectors(


model_name_or_path = 'openai/clip-vit-large-patch14'
model = CLIPModel(
(text_model): CLIPTextTransformer(
(embeddings): CLIPTextEmbeddings(
(token_embedding): Embedding(49408, 768)
(position_embedding): Embedding(77, 768)
)
(encoder): CLIPEncoder(
(layers): ModuleList(
(0-11): 12 x CLIPEncoderLayer(
(self_attn): CLIPAttention(
(k_proj): Linear(in_features=768, out_features=768, bias=True)
(v_proj): Linear(in_features=768, out_features=768, bias=True)
(q_proj): Linear(in_features=768, out_features=768, bias=True)
(out_proj): Linear(in_features=768, out_features=768, bias=True)
)
(layer_norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): CLIPMLP(
(activation_fn): QuickGELUActivation()
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(fc2): Linear(in_features=3072, out_features=768, bias=True)
)
(layer_norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
)
)
)
(final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
)
(vision_model): CLIPVisionTransformer(
(embeddings): CLIPVisionEmbeddings(
(patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14), bias=False)
(position_embedding): Embedding(257, 1024)
)
(pre_layrnorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(encoder): CLIPEncoder(
(layers): ModuleList(
(0-23): 24 x CLIPEncoderLayer(
(self_attn): CLIPAttention(
(k_proj): Linear(in_features=1024, out_features=1024, bias=True)
(v_proj): Linear(in_features=1024, out_features=1024, bias=True)
(q_proj): Linear(in_features=1024, out_features=1024, bias=True)
(out_proj): Linear(in_features=1024, out_features=1024, bias=True)
)
(layer_norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(mlp): CLIPMLP(
(activation_fn): QuickGELUActivation()
(fc1): Linear(in_features=1024, out_features=4096, bias=True)
(fc2): Linear(in_features=4096, out_features=1024, bias=True)
)
(layer_norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
)
)
)
(post_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
)
(visual_projection): Linear(in_features=1024, out_features=768, bias=False)
(text_projection): Linear(in_features=768, out_features=768, bias=False)
)
processor = CLIPProcessor:

  • image_processor: CLIPImageProcessorFast {
    "crop_size": {
    "height": 224,
    "width": 224
    },
    "data_format": "channels_first",
    "do_center_crop": true,
    "do_convert_rgb": true,
    "do_normalize": true,
    "do_rescale": true,
    "do_resize": true,
    "image_mean": [
    0.48145466,
    0.4578275,
    0.40821073
    ],
    "image_processor_type": "CLIPImageProcessorFast",
    "image_std": [
    0.26862954,
    0.26130258,
    0.27577711
    ],
    "resample": 3,
    "rescale_factor": 0.00392156862745098,
    "size": {
    "shortest_edge": 224
    }
    }

  • tokenizer: CLIPTokenizer(name_or_path='openai/clip-vit-large-patch14', vocab_size=49408, model_max_length=77, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<|startoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '<|endoftext|>'}, added_tokens_decoder={
    49406: AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True, special=True),
    49407: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    }
    )

{
"image_processor": {
"crop_size": {
"height": 224,
"width": 224
},
"data_format": "channels_first",
"do_center_crop": true,
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "CLIPImageProcessorFast",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"shortest_edge": 224
}
},
"processor_class": "CLIPProcessor"
}

prompts_list = ['Good photo.', 'Bad photo.'], device = device(type='cpu')

def _clip_iqa_get_anchor_vectors(
    model_name_or_path: str,
    model: "_CLIPModel",
    processor: "_CLIPProcessor",
    prompts_list: list[str],
    device: Union[str, torch.device],
) -> Tensor:
    """Calculates the anchor vectors for the CLIP IQA metric.

    Args:
        model_name_or_path: string indicating the version of the CLIP model to use.
        model: The CLIP model
        processor: The CLIP processor
        prompts_list: A list of prompts
        device: The device to use for the calculation

    """
    if model_name_or_path == "clip_iqa":
        text_processed = processor(text=prompts_list)
        anchors_text = torch.zeros(
            len(prompts_list), processor.tokenizer.model_max_length, dtype=torch.long, device=device
        )
        for i, tp in enumerate(text_processed["input_ids"]):
            anchors_text[i, : len(tp)] = torch.tensor(tp, dtype=torch.long, device=device)

        anchors = model.encode_text(anchors_text).float()
    else:
        text_processed = processor(text=prompts_list, return_tensors="pt", padding=True)
        anchors = model.get_text_features(
            text_processed["input_ids"].to(device), text_processed["attention_mask"].to(device)
        )
  return anchors / anchors.norm(p=2, dim=-1, keepdim=True)
                     ^^^^^^^^^^^^

E AttributeError: 'BaseModelOutputWithPooling' object has no attribute 'norm'

..\ComfyUI-nunchaku-avan\venv\Lib\site-packages\torchmetrics\functional\multimodal\clip_iqa.py:178: AttributeError
----------------------------------------------------------------------------------- Captured stdout setup -----------------------------------------------------------------------------------
Downloading robot.png from https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/robot.png...
Downloading logo.png from https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/logo.png...
Downloading mushroom_depth.webp from https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/mushroom_depth.webp...
Downloading lecun.jpg from https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/lecun.jpg...
Downloading masked_strawberry.png from https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/masked_strawberry.png...
Downloading removal.png from https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/removal.png...
Downloading yarn-art-pikachu.png from https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/yarn-art-pikachu.png...
Downloading comfy_poster.png from https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/comfy_poster.png...
Downloading puppy.png from https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/puppy.png...
Downloading man.png from https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/man.png...
Downloading sofa.png from https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/sofa.png...
----------------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------------
'nunchaku_versions.json' not found. Node will start in minimal mode. Use 'update node' to fetch versions.
----------------------------------------------------------------------------------- Captured stderr call ------------------------------------------------------------------------------------
2026-01-26 22:31:34 [INFO] [comfyui_nunchaku] [__init__.py:53] ======================================== ComfyUI-nunchaku Initialization ========================================
2026-01-26 22:31:34 [INFO] [comfyui_nunchaku] [__init__.py:59] Nunchaku version: 1.2.1
2026-01-26 22:31:34 [INFO] [comfyui_nunchaku] [__init__.py:60] ComfyUI-nunchaku version: 1.2.0
2026-01-26 22:31:35 [INFO] [comfyui_nunchaku] [__init__.py:166] =================================================================================================================
[comfyui_controlnet_aux] | INFO -> Using ckpts path: G:\src\test-workspace\custom_nodes\comfyui_controlnet_aux\ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
2026-01-26 22:31:35 [INFO] [root] [attention.py:587] Using pytorch attention
2026-01-26 22:31:35 [INFO] [RES4LYF] [vanilla_node_importing.py:42] (RES4LYF) Init
2026-01-26 22:31:35 [INFO] [RES4LYF] [vanilla_node_importing.py:42] (RES4LYF) Importing beta samplers.
2026-01-26 22:31:35 [INFO] [RES4LYF] [vanilla_node_importing.py:42] (RES4LYF) Importing legacy samplers.
2026-01-26 22:31:48 [INFO] [httpx] [_client.py:1025] HTTP Request: HEAD https://huggingface.co/nunchaku-tech/nunchaku-qwen-image-edit-2509/resolve/main/svdq-int4_r32-qwen-image-edit-2509.safetensors "HTTP/1.1 307 Temporary Redirect"
2026-01-26 22:31:48 [INFO] [httpx] [_client.py:1025] HTTP Request: HEAD https://huggingface.co/nunchaku-ai/nunchaku-qwen-image-edit-2509/resolve/main/svdq-int4_r32-qwen-image-edit-2509.safetensors "HTTP/1.1 302 Found"
2026-01-26 22:31:49 [INFO] [httpx] [_client.py:1025] HTTP Request: GET https://huggingface.co/api/models/nunchaku-ai/nunchaku-qwen-image-edit-2509/xet-read-token/e93a5fb77403d02a5a73c7cc8707b292c6ebc659 "HTTP/1.1 200 OK"
2026-01-26 22:36:33 [INFO] [comfyui_nunchaku.nodes.models.qwenimage] [qwenimage.py:210] Enabling CPU offload
2026-01-26 22:36:33 [WARNING] [comfy.supported_models_base] [supported_models_base.py:127]
WARNING, you accessed scaled_fp8 from the model config object which doesn't exist. Please fix your code.

2026-01-26 22:45:29 [INFO] [tests.test_workflows] [test_workflows.py:51] Generated image path: G:\src\test-workspace\output\ComfyUI_00002_.png
2026-01-26 22:45:30 [INFO] [httpx] [_client.py:1025] HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
2026-01-26 22:45:30 [INFO] [httpx] [_client.py:1025] HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/openai/clip-vit-large-patch14/32bd64288804d66eefd0ccbe215aa642df71cc41/config.json "HTTP/1.1 200 OK"
2026-01-26 22:45:30 [INFO] [httpx] [_client.py:1025] HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
2026-01-26 22:45:30 [INFO] [httpx] [_client.py:1025] HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/openai/clip-vit-large-patch14/32bd64288804d66eefd0ccbe215aa642df71cc41/config.json "HTTP/1.1 200 OK"
2026-01-26 22:45:30 [INFO] [httpx] [_client.py:1025] HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/model.safetensors "HTTP/1.1 302 Found"
Loading weights: 100%|██████████| 590/590 [00:00<00:00, 5416.07it/s, Materializing param=visual_projection.weight]
CLIPModel LOAD REPORT from: openai/clip-vit-large-patch14
Key                                  | Status
-------------------------------------+-----------
vision_model.embeddings.position_ids | UNEXPECTED
text_model.embeddings.position_ids   | UNEXPECTED

Notes:

INFO tests.test_workflows:test_workflows.py:51 Generated image path: G:\src\test-workspace\output\ComfyUI_00002_.png
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/openai/clip-vit-large-patch14/32bd64288804d66eefd0ccbe215aa642df71cc41/config.json "HTTP/1.1 200 OK"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/openai/clip-vit-large-patch14/32bd64288804d66eefd0ccbe215aa642df71cc41/config.json "HTTP/1.1 200 OK"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/model.safetensors "HTTP/1.1 302 Found"
INFO httpx:_client.py:1025 HTTP Request: GET https://huggingface.co/api/models/openai/clip-vit-large-patch14/tree/main/additional_chat_templates?recursive=false&expand=false "HTTP/1.1 404 Not Found"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/processor_config.json "HTTP/1.1 404 Not Found"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/chat_template.json "HTTP/1.1 404 Not Found"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/chat_template.jinja "HTTP/1.1 404 Not Found"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/audio_tokenizer_config.json "HTTP/1.1 404 Not Found"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/processor_config.json "HTTP/1.1 404 Not Found"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/preprocessor_config.json "HTTP/1.1 307 Temporary Redirect"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/openai/clip-vit-large-patch14/32bd64288804d66eefd0ccbe215aa642df71cc41/preprocessor_config.json "HTTP/1.1 200 OK"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/processor_config.json "HTTP/1.1 404 Not Found"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/preprocessor_config.json "HTTP/1.1 307 Temporary Redirect"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/openai/clip-vit-large-patch14/32bd64288804d66eefd0ccbe215aa642df71cc41/preprocessor_config.json "HTTP/1.1 200 OK"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/openai/clip-vit-large-patch14/32bd64288804d66eefd0ccbe215aa642df71cc41/config.json "HTTP/1.1 200 OK"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/tokenizer_config.json "HTTP/1.1 307 Temporary Redirect"
INFO httpx:_client.py:1025 HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/openai/clip-vit-large-patch14/32bd64288804d66eefd0ccbe215aa642df71cc41/tokenizer_config.json "HTTP/1.1 200 OK"
INFO httpx:_client.py:1025 HTTP Request: GET https://huggingface.co/api/models/openai/clip-vit-large-patch14/tree/main/additional_chat_templates?recursive=false&expand=false "HTTP/1.1 404 Not Found"
INFO httpx:_client.py:1025 HTTP Request: GET https://huggingface.co/api/models/openai/clip-vit-large-patch14/tree/main?recursive=true&expand=false "HTTP/1.1 200 OK"
--------------------------------------------------------------------------------- Captured stderr teardown ----------------------------------------------------------------------------------
2026-01-26 22:45:37 [WARNING] [comfy.model_management] [model_management.py:698] failed to trim
Traceback (most recent call last):
File "G:\src\ComfyUI-nunchaku-avan\venv\Lib\site-packages\comfy\model_management.py", line 689, in trim_memory
EmptyProcessWorkingSet = kernel32.EmptyProcessWorkingSet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\avan\AppData\Local\Programs\Python\Python312\Lib\ctypes\__init__.py", line 392, in __getattr__
func = self.__getitem__(name)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\avan\AppData\Local\Programs\Python\Python312\Lib\ctypes\__init__.py", line 397, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: function 'EmptyProcessWorkingSet' not found
----------------------------------------------------------------------------------- Captured log teardown -----------------------------------------------------------------------------------
WARNING comfy.model_management:model_management.py:698 failed to trim
Traceback (most recent call last):
File "G:\src\ComfyUI-nunchaku-avan\venv\Lib\site-packages\comfy\model_management.py", line 689, in trim_memory
EmptyProcessWorkingSet = kernel32.EmptyProcessWorkingSet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\avan\AppData\Local\Programs\Python\Python312\Lib\ctypes\__init__.py", line 392, in __getattr__
func = self.__getitem__(name)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\avan\AppData\Local\Programs\Python\Python312\Lib\ctypes\__init__.py", line 397, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: function 'EmptyProcessWorkingSet' not found
===================================================================================== warnings summary ======================================================================================
:488
:488: DeprecationWarning: Type google._upb._message.MessageMapContainer uses PyType_Spec with a metaclass that has custom tp_new. This is deprecated and will no longer be allowed in Python 3.14.

:488
:488: DeprecationWarning: Type google._upb._message.ScalarMapContainer uses PyType_Spec with a metaclass that has custom tp_new. This is deprecated and will no longer be allowed in Python 3.14.

:488
:488: DeprecationWarning: builtin type SwigPyPacked has no module attribute

:488
:488: DeprecationWarning: builtin type SwigPyObject has no module attribute

tests/test_workflows.py::test[nunchaku-qwen-image-edit-2509_0]
G:\src\ComfyUI-nunchaku-avan\venv\Lib\site-packages\torch\jit\_script.py:1480: DeprecationWarning: torch.jit.script is deprecated. Please switch to torch.compile or torch.export.
warnings.warn(

tests/test_workflows.py::test[nunchaku-qwen-image-edit-2509_0]
G:\src\ComfyUI-nunchaku-avan\venv\Lib\site-packages\timm\models\layers\__init__.py:49: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {name} is deprecated, please import via timm.layers", FutureWarning)

tests/test_workflows.py::test[nunchaku-qwen-image-edit-2509_0]
G:\src\test-workspace\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\dwpose\body.py:5: DeprecationWarning: Please import gaussian_filter from the scipy.ndimage namespace; the scipy.ndimage.filters namespace is deprecated and will be removed in SciPy 2.0.0.
from scipy.ndimage.filters import gaussian_filter

tests/test_workflows.py::test[nunchaku-qwen-image-edit-2509_0]
G:\src\test-workspace\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\dwpose\hand.py:6: DeprecationWarning: Please import gaussian_filter from the scipy.ndimage namespace; the scipy.ndimage.filters namespace is deprecated and will be removed in SciPy 2.0.0.
from scipy.ndimage.filters import gaussian_filter

tests/test_workflows.py::test[nunchaku-qwen-image-edit-2509_0]
G:\src\test-workspace\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")

tests/test_workflows.py::test[nunchaku-qwen-image-edit-2509_0]
G:\src\ComfyUI-nunchaku-avan\venv\Lib\site-packages\transformers\models\qwen2\tokenization_qwen2.py:62: DeprecationWarning: Deprecated in 0.9.0: BPE.__init__ will not create from files anymore, try BPE.from_file instead
BPE(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================== short test summary info ==================================================================================
FAILED tests/test_workflows.py::test[nunchaku-qwen-image-edit-2509_0] - AttributeError: 'BaseModelOutputWithPooling' object has no attribute 'norm'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
================================================================= 1 failed, 23 deselected, 10 warnings in 870.55s (0:14:30) =================================================================
sys:1: DeprecationWarning: builtin type swigvarlink has no module attribute

- Implement timestep_zero_index handling in NunchakuQwenImageTransformerBlock.
- Update _modulate to correctly split modulation parameters for doubled batches.
- Ensure residual connections use split gates when Kontext reference method is used.
- Fixes RuntimeError: 'shape is invalid for input of size...' during inference.
- Trim trailing whitespace
- Reformat qwenimage.py with black-jupyter
avan06 force-pushed the qwen-image-edit-timestep-zero branch from 47c4cd8 to ea6f272 on February 7, 2026 at 05:36
mholtgraewe commented
Could we please get this PR merged? I've been using it for over 2 weeks without any issues, and it fixes a serious problem that is a showstopper preventing the use of Qwen-Image-Edit-2511 with Nunchaku.

zwukong commented Mar 13, 2026

It really works, thanks.
