A ComfyUI custom node for loading and applying LoRA (Low-Rank Adaptation) weights to Nunchaku Qwen Image and Z-Image-Turbo models, with diffsynth ControlNet support. Compatible with ComfyUI Nodes 2.0.
Currently under development and testing. Extensive debug logs are output; this does not affect functionality.
Latest release: v2.2.5 on GitHub Releases
⚠️ Note for v2.0+ users: If you encounter `TypeError: got multiple values for argument 'guidance'` errors, see the troubleshooting section below.
This LoRA loader was extracted and modified from GavChap's fork:
- Original Fork: GavChap/ComfyUI-nunchaku (qwen-lora-suport-standalone branch)
- Extraction: LoRA functionality was extracted from the full fork to create an independent custom node
- Integration: Modified to work with the official ComfyUI-nunchaku plugin
For detailed technical explanation, see v2.2.0 Release Notes
As of v2.0, diffsynth ControlNet is now fully supported for Nunchaku Qwen Image models, Z Image Turbo BF16.safetensors, and Nunchaku Z Image Turbo models.
A new dedicated node NunchakuQI&ZITDiffsynthControlnet enables diffsynth ControlNet functionality with Nunchaku quantized Qwen Image models, Z Image Turbo BF16.safetensors, and Nunchaku Z Image Turbo models.
- ✅ New Node: `NunchakuQI&ZITDiffsynthControlnet` – Dedicated diffsynth ControlNet loader for Nunchaku Qwen Image models, Z Image Turbo BF16.safetensors, and Nunchaku Z Image Turbo models
- ✅ Full ControlNet Support: Works with standard diffsynth ControlNet models
- ✅ Seamless Integration: Automatically applies ControlNet patches during model forward pass
- ✅ Backward Compatible: All existing LoRA functionality remains unchanged
For detailed technical explanation, see v2.0 Release Notes
For installation instructions, features, and requirements, see Installation Guide.
If you have v1.57 or earlier installed with integration code in ComfyUI-nunchaku's __init__.py, see UPGRADE_GUIDE_V1.57.md for detailed upgrade instructions.
- NunchakuQwenImageLoraLoader: Single LoRA loader
- NunchakuQwenImageLoraStack: Multi LoRA stacker with dynamic UI (Legacy)
- NunchakuQwenImageLoraStackV2: Multi LoRA stacker with dynamic UI - ComfyUI Nodes 2.0 (Beta) compatible
- NunchakuQwenImageLoraStackV3: Multi LoRA stacker with dynamic UI - ComfyUI Nodes 2.0 (Beta) compatible
- NunchakuZImageTurboLoraStackV3: Z-Image-Turbo LoRA stacker with dynamic UI - ComfyUI Nodes 2.0 (Beta) compatible
- NunchakuQI&ZITDiffsynthControlnet: Diffsynth ControlNet loader for Nunchaku Qwen Image models, Z Image Turbo BF16.safetensors, and Nunchaku Z Image Turbo models (v2.0)
For Nunchaku Qwen Image models:
- Load your Nunchaku Qwen Image model using `Nunchaku Qwen Image DiT Loader`
- Add either a `NunchakuQwenImageLoraLoader` or `NunchakuQwenImageLoraStack` node
- Select your LoRA file and set the strength
- Connect to your workflow
For Nunchaku Z-Image-Turbo models:
- Load your Nunchaku Z-Image-Turbo model using `Nunchaku Z-Image DiT Loader`
- Add a `Nunchaku Z-Image-Turbo LoRA Stack V3` node
- Select your LoRA file and set the strength
- Connect to your workflow
The NunchakuQwenImageLoraStack and NunchakuZImageTurboLoraStackV3 nodes automatically adjust the number of visible LoRA slots based on the lora_count parameter (1-10).
- Load your diffsynth ControlNet model patch using `Model Patch Loader` from ComfyUI-NunchakuFluxLoraStacker
- The `Model Patch Loader` (`ModelPatchLoaderCustom`) supports CPU offload, allowing you to load ControlNet patches into CPU memory to save VRAM
- Connect the `MODEL_PATCH` output to the `model_patch` input of the `NunchakuQI&ZITDiffsynthControlnet` node
- Connect your Nunchaku Qwen Image model, VAE, and control image
- Set the ControlNet strength and connect to your workflow
- Easy Installation: Simple git clone installation
- Independent Operation: No integration code required (v1.60+)
- Automatic Node Discovery: ComfyUI automatically loads the custom node
- Error Handling: Comprehensive error checking and user feedback
- Issue #1 Fixed: Resolved ComfyUI\custom_nodes not found error with improved path detection (thanks to @mcv1234's solution)
- Issue #2 Fixed: Fixed UTF-8 encoding error causing `SyntaxError: (unicode error)` by using a dedicated Python script for proper UTF-8 encoding (thanks to @AHEKOT's bug report)
- Issue #3 Fixed (v1.4.0): Resolved "Node break cached progress" error by implementing a proper `IS_CHANGED` method with hash-based change detection (thanks to @AHEKOT's bug report); a sketch of this pattern appears after this list
- Issue #10 Fixed: Added portable ComfyUI support with embedded Python detection (Issue #10)
- Special Thanks: This crucial feature was suggested by @vvhitevvizard, who identified the need for embedded Python support in portable ComfyUI installations. Without this suggestion, portable ComfyUI users would not have been able to use this LoRA loader.
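A minimal sketch of the hash-based `IS_CHANGED` pattern referenced above, assuming a loader that takes a LoRA filename input; the class and method signature here are illustrative and may differ from this repository's actual code:

```python
import hashlib

import folder_paths  # ComfyUI helper for resolving files in the models folder


class NunchakuQwenImageLoraLoaderSketch:
    """Illustrative only: shows the hash-based change-detection pattern."""

    @classmethod
    def IS_CHANGED(cls, lora_name, lora_strength, **kwargs):
        # Hash the LoRA file contents so ComfyUI invalidates its cache and
        # re-executes the node whenever the file on disk changes, not only
        # when widget values change.
        lora_path = folder_paths.get_full_path("loras", lora_name)
        digest = hashlib.sha256()
        with open(lora_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()
```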
- ComfyUI
- ComfyUI-nunchaku plugin (official version, no modification required)
- PyTorch
- Python 3.11+
This node is designed to work with:
- ComfyUI-nunchaku plugin (official version)
- Nunchaku Qwen Image models
- Standard ComfyUI workflows
Problem: This error occurs when ComfyUI tries to load the LoRA loader nodes but fails due to import issues.
Error Message: ValueError: attempted relative import with no known parent package
Root Cause: The error was caused by using relative imports (from ...wrappers) in the LoRA loader code. Relative imports only work when the module is loaded as part of a package. However, ComfyUI-nunchaku loads the module directly using importlib.util, which bypasses package initialization. As a result, Python cannot resolve the relative import paths.
Solution: Fixed in v1.5.0 by changing relative imports to absolute imports:
- Before: `from ...wrappers.qwenimage import ComfyQwenImageWrapper`
- After: `from wrappers.qwenimage import ComfyQwenImageWrapper`
How to Fix: This error has been fixed in v1.5.0. Simply update to the latest version and restart ComfyUI.
Technical Details:
- The installation script adds `ComfyUI-QwenImageLoraLoader` to `sys.path`
- This allows absolute imports to work correctly
- The absolute import `from wrappers.qwenimage import` resolves to `ComfyUI-QwenImageLoraLoader/wrappers/qwenimage.py`
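For illustration, a minimal sketch (assuming the default `custom_nodes` location) of how putting the loader directory on `sys.path` lets the absolute import resolve even though the module is loaded directly via `importlib.util`:

```python
import os
import sys

# Assumed install location; the installation script performs the equivalent step.
loader_dir = os.path.join("ComfyUI", "custom_nodes", "ComfyUI-QwenImageLoraLoader")
if loader_dir not in sys.path:
    sys.path.insert(0, loader_dir)

# With loader_dir on sys.path, "wrappers" is importable as a top-level package,
# so this resolves to ComfyUI-QwenImageLoraLoader/wrappers/qwenimage.py even when
# the calling module has no parent package.
from wrappers.qwenimage import ComfyQwenImageWrapper
```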
Problem: After installation, the LoRA loader nodes don't appear in ComfyUI.
Solution:
- Restart ComfyUI completely (close all instances)
- Check the ComfyUI console for error messages
- Make sure both `ComfyUI-nunchaku` and `ComfyUI-QwenImageLoraLoader` are in your `ComfyUI/custom_nodes` directory
- Check that your ComfyUI-nunchaku version is compatible
Problem: The nunchaku package is not installed.
Solution:
- Install ComfyUI-nunchaku plugin from the official repository
- Follow the nunchaku installation instructions to install the nunchaku wheel
- Restart ComfyUI
- Status: ⚠️ Environment Dependent – May require ComfyUI core fixes
For detailed information, see COMFYUI_0.4.0_MODEL_MANAGEMENT_ERRORS.md.
- Related Issues:
  - Issue #25 – `AttributeError: 'NunchakuModelPatcher' object has no attribute 'pinned'` and deepcopy errors with `model_config`
  - Issue #33 – `AttributeError: 'NoneType' object has no attribute 'to'` in the `to_safely` method (Fixed in v2.1.0)
  - ComfyUI Issue #6590: `'NoneType' object has no attribute 'shape'`
  - ComfyUI Issue #6600: `'NoneType' object is not callable` (Loader-related)
  - ComfyUI Issue #6532: Crash after referencing models after model unload
- Issue Link: Issue #30
- Status: ⚠️ May Still Occur in Some Environments – Even with v2.0.8 fixes
- Issue: `TypeError: got multiple values for argument 'guidance'` may still occur in some user environments when using v2.0+ versions with diffsynth ControlNet support, despite multiple fixes applied from v2.0.2 to v2.0.8.
- Root Cause: v2.0+ versions include diffsynth ControlNet support, which requires complex argument handling between ComfyUI's scheduler patches, external patches (e.g., ComfyUI-EulerDiscreteScheduler), and the `QwenImageTransformer2DModel.forward` signature. Even with multiple layers of defense (exclusion logic in both the `forward` and `_execute_model` methods), some edge cases in certain environments may still cause argument duplication; a rough sketch of this cleanup follows this list.
- Solution for Affected Users: If you continue to experience `TypeError: got multiple values for argument 'guidance'` errors with v2.0+ versions even after updating to v2.0.8, please use v1.72 instead, which does not include diffsynth ControlNet support and therefore avoids these argument-passing complexities.
- v1.72 Release: v1.72 Release
- Note: v1.72 is the latest v1.x release before v2.0+ diffsynth ControlNet support was added. If you don't need diffsynth ControlNet functionality, v1.72 provides stable LoRA loading without the argument-passing complexities introduced in v2.0+.
- Related Issues:
  - Issue #32 – `TypeError: got multiple values for argument 'guidance'` error when using LoRA with KSampler
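The following is a rough, simplified sketch (not the repository's actual code) of the kind of defensive kwargs cleanup described above: any name that is already passed explicitly to the transformer's forward call is removed from kwargs first, so it cannot arrive twice.

```python
def call_transformer(model, hidden_states, guidance=None, **kwargs):
    # Keys assumed to collide with explicitly passed arguments in this
    # scenario; the real exclusion lists live in the wrapper's
    # forward/_execute_model methods.
    for name in ("guidance", "ref_latents", "transformer_options", "attention_mask"):
        kwargs.pop(name, None)
    # guidance is now supplied exactly once, so a duplicate-argument
    # TypeError cannot be raised for it here.
    return model(hidden_states, guidance=guidance, **kwargs)
```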
Problem: This error occurs when using Qwen-Edit models or Z-Image-Turbo models with an outdated diffusers library version.
Error Message: ModuleNotFoundError: No module named 'diffusers.models.transformers.transformer_z_image'
Root Cause: The most likely cause is that the diffusers library version is too old and does not include the transformer_z_image module, which is required for Z-Image-Turbo model support. When ComfyUI-nunchaku's model loader tries to load Z-Image-Turbo models (or Qwen-Edit models that may be detected as Z-Image format), it attempts to import this module, but it doesn't exist in older diffusers versions. This module was added in a later version of diffusers to support Z-Image-Turbo models.
Solution: Update the diffusers library to the latest version:
If using a virtual environment (venv): `pip install --upgrade diffusers`
If using ComfyUI's embedded Python: `ComfyUI\python_embeded\python.exe -m pip install --upgrade diffusers`
How to Verify: After updating, restart ComfyUI and try loading your model again. The error should be resolved.
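As an additional quick check (assuming diffusers is importable in the same Python environment ComfyUI uses), you can confirm the installed version and whether the required module is present:

```python
import importlib.util

import diffusers

print("diffusers version:", diffusers.__version__)
spec = importlib.util.find_spec("diffusers.models.transformers.transformer_z_image")
print("transformer_z_image available:", spec is not None)
```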
Related Issues: Issue #38, Issue #40
- Status: ❌ Not Supported
- Issue: LoRAs in LoKR format (created by Lycoris) are not supported.
- Important Note: This limitation applies specifically to Nunchaku quantization models. LoKR format LoRAs may work with standard (non-quantized) Qwen Image models, but this node is designed for Nunchaku models only.
- LoKR weights are automatically skipped when detected (experimental conversion code is disabled).
- Converting to Standard LoRA using SVD approximation (via external tools or scripts) has also been tested and found to result in noise/artifacts when applied to Nunchaku quantization models.
- Conclusion: At this time, we have not found a way to successfully apply LoKR weights to Nunchaku models. Please use Standard LoRA formats.
- Supported Formats (a key-detection sketch follows this list):
  - ✅ Standard LoRA (Rank-Decomposed):
    - Supported weight keys: `lora_up.weight`/`lora_down.weight`, `lora.up.weight`/`lora.down.weight`, `lora_A.weight`/`lora_B.weight`, `lora.A.weight`/`lora.B.weight`
    - These are the standard formats produced by Kohya-ss, Diffusers, and most training scripts.
  - ❌ LoKR (Lycoris): Not supported (keys like `lokr_w1`, `lokr_w2`)
  - ❌ LoHa: Not supported (keys like `hada_w1`, `hada_w2`)
  - ❌ IA3: Not supported
- Related Issues:
- Issue #29 - LyCORIS / LoKr Qwen Image LoRA not recognized by ComfyUI
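A rough sketch of key-based format detection, using the key names listed above; the function name and structure are illustrative, not this node's actual implementation:

```python
def classify_lora_format(state_dict: dict) -> str:
    """Return a coarse format label based on the weight key names."""
    keys = list(state_dict.keys())
    if any("lokr_w1" in k or "lokr_w2" in k for k in keys):
        return "lokr"      # LyCORIS LoKR: skipped on Nunchaku models
    if any("hada_w1" in k or "hada_w2" in k for k in keys):
        return "loha"      # LoHa: not supported
    standard_suffixes = (
        "lora_up.weight", "lora_down.weight",
        "lora.up.weight", "lora.down.weight",
        "lora_A.weight", "lora_B.weight",
        "lora.A.weight", "lora.B.weight",
    )
    if any(k.endswith(standard_suffixes) for k in keys):
        return "standard"  # rank-decomposed up/down (A/B) pairs
    return "unknown"
```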
- Status: ✅ Fixed in ComfyUI-nunchaku v1.0.2
- Issue: Device mismatch errors occurred when using RES4LYF sampler with LoRA (Issue #7, Issue #8)
- Fix: The issue was fixed in ComfyUI-nunchaku v1.0.2 by @devgdovg in PR #600. This fix was implemented in ComfyUI-nunchaku's codebase, not in this LoRA loader.
- Requirement: Update to ComfyUI-nunchaku v1.0.2 or later to use RES4LYF sampler with LoRA
- Related Issues: Issue #7, Issue #8
- Fixed: Repository recovery - All updates after v2.0.8 were completely broken, and recovery work has been performed to restore all functionality. Related to Issue #39
- Recovery Details: Restored all deleted files (images/, nodes/, wrappers/, nunchaku_code/, js/, md/, LICENSE, pyproject.toml) from local backups
- Feature Verification: Verified and restored all features from v2.0.8 through v2.2.4 (NextDiT support, AWQ skip logic, toggle buttons, LoRA format detection)
- Technical Details: See v2.2.5 Release Notes for complete explanation
- Added: AWQ modulation layer detection and skip logic – `img_mod.1` and `txt_mod.1` layers are detected and LoRA application is skipped by default to prevent noise. Can be overridden with the `QWENIMAGE_LORA_APPLY_AWQ_MOD=1` environment variable (see the sketch below).
- Removed: `NunchakuZImageTurboLoraStackV2` node registration has been removed from the ComfyUI node list to avoid confusion when using the official Nunchaku Z-Image loader. The node file remains in the repository but is no longer registered. Users of the official loader should use `NunchakuZImageTurboLoraStackV3` instead. (Issue #37)
- Technical Details: See v2.2.4 Release Notes for complete explanation
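A hypothetical sketch of the described behavior; the helper name is illustrative, and only the layer names and environment variable come from this release note:

```python
import os

# Opt-in override documented above: set QWENIMAGE_LORA_APPLY_AWQ_MOD=1 to
# force LoRA application onto the AWQ-quantized modulation layers.
APPLY_AWQ_MOD = os.environ.get("QWENIMAGE_LORA_APPLY_AWQ_MOD", "0") == "1"


def should_apply_lora(layer_name: str) -> bool:
    # img_mod.1 / txt_mod.1 are skipped by default because applying LoRA to
    # these quantized modulation layers tends to produce noise.
    is_awq_mod_layer = "img_mod.1" in layer_name or "txt_mod.1" in layer_name
    if is_awq_mod_layer and not APPLY_AWQ_MOD:
        return False
    return True
```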
- Added: Toggle buttons to enable/disable individual LoRA slots and all LoRAs at once. Resolved Issue #12 and Issue #36
- ⚠️ DEVELOPMENT STATUS: These features are currently experimental implementations for the `NunchakuZImageTurboLoraStackV3` node only, and only in the ComfyUI Nodes 2.0 environment. With current technical capabilities, it is not possible to fully implement all requested features in JavaScript.
- Technical Details: See v2.2.3 Release Notes for complete explanation
- Added: Diffsynth ControlNet support for Nunchaku Z-ImageTurbo models
- Technical Details: See v2.2.2 Release Notes for complete explanation
- Added: NunchakuZImageTurboLoraStackV3 node – Z-Image-Turbo LoRA stacker with dynamic UI for official Nunchaku Z-Image loader
- Technical Details: See v2.2.0 Release Notes for complete explanation
- Fixed: ComfyUI v0.6.0+ compatibility – Migrated from the `guidance` to the `additional_t_cond` parameter in the `_execute_model` method to support ComfyUI v0.6.0+ API changes (PR #34)
- Technical Details: See v2.1.1 Release Notes for complete explanation
- Fixed: Resolved Issue #33 – Fixed `AttributeError: 'NoneType' object has no attribute 'to'` by adding None checks to the `to_safely` and `forward` methods in `ComfyQwenImageWrapper`
- Technical Details: See v2.1.0 Release Notes for complete explanation
- v2.0.8 Fixed: Resolved Issue #30 – Fixed `TypeError: got multiple values for argument 'guidance'` error by adding a final cleanup of kwargs before calling the model's forward
- Technical Details: See v2.0.8 Release Notes for complete explanation
- v2.0.7 Fixed: Enhanced the Issue #32 fix by adding exclusion processing in the `forward` method in addition to the `_execute_model` method to prevent duplicate argument errors
- Technical Details: See v2.0.7 Release Notes for complete explanation
- v2.0.6 Fixed: Excluded `ref_latents`, `transformer_options`, and `attention_mask` from kwargs to prevent duplicate argument errors
- Technical Details: See v2.0.6 Release Notes for complete explanation
- v2.0.5 Fixed: Resolved Issue #32 – Fixed `TypeError: got multiple values for argument 'guidance'` error by passing guidance as a positional argument to match the `QwenImageTransformer2DModel.forward` signature
- Technical Details: See v2.0.5 Release Notes for complete explanation
- v2.0.4 Fixed: Resolved Issue #32 – Fixed `TypeError: got multiple values for argument 'guidance'` error by removing guidance from `transformer_options`
- Technical Details: See v2.0.4 Release Notes for complete explanation
- v2.0.3 Fixed: Resolved Issue #31 – Fixed nodes not appearing when the `comfy.ldm.lumina.controlnet` module is unavailable
- Technical Details: See v2.0.3 Release Notes for complete explanation
- v2.0.2 Fixed: Resolved Issue #30 and Issue #32 – Fixed `TypeError: got multiple values for argument 'guidance'` error when using LoRA with KSampler
- Technical Details: See v2.0.2 Release Notes for complete explanation
- MAJOR UPDATE: Added diffsynth ControlNet support for Nunchaku Qwen Image models
- New Node: `NunchakuQI&ZITDiffsynthControlnet` – Enables diffsynth ControlNet to work with Nunchaku quantized Qwen Image models, Z Image Turbo BF16.safetensors, and Nunchaku Z Image Turbo models
- Features:
- Full diffsynth ControlNet functionality for Nunchaku Qwen Image models
- Automatic patch registration and application
- Technical Details: See v2.0 Release Notes for complete explanation
- Fixed: Resolved compatibility issue with kjai node updates – Added default value `"disable"` for the `cpu_offload` parameter in LoRA loader methods (PR #28)
- Reported by: @enternalsaga (PR #28)
- Technical Details: See v1.72 Release Notes for complete explanation
- Fixed: Resolved Issue #27 – Fixed indentation error on line 882 in `lora_qwen.py` causing `SyntaxError: expected an indented block after 'else' statement` (reported by @youyin400c-cpu)
- Attempted Fix: Addressed Issue #25 – `AttributeError: 'NunchakuModelPatcher' object has no attribute 'pinned'` and deepcopy errors with `model_config`
- Reported by: @LacklusterOpsec (Issue #25)
- Current Status: ⚠️ This error does not occur in our stable ComfyUI environment – The fix was implemented based on the reported issue, but we cannot guarantee it completely resolves the problem because we cannot reproduce it in our environment. If you encounter this error, please report it with your ComfyUI version and environment details.
- Technical Details: See v1.71 Release Notes for complete explanation
- Added: V2 loader with ComfyUI Nodes 2.0 (Beta) support
- New Node: `NunchakuQwenImageLoraStackV2` – V2 loader node added
- Fixed: Resolved Issue #9 – The 10th LoRA control row no longer displays when `lora_count` is set to less than 10. Dynamic UI now correctly hides unused LoRA slots and adjusts node height automatically
- Full compatibility with ComfyUI Nodes 2.0 (Beta)
- Complete feature parity with V1 implementation
- Dynamic UI for adjusting slot count
- Automatic node height adjustment
- Technical Details: See v1.70 Release Notes for complete explanation
- Fixed: Addressed Issue #21 – User-configurable CPU offload setting
- Problem: CPU offload setting was hardcoded to `"auto"`, causing unnecessary slowdowns when VRAM was sufficient
- Solution: Added a `cpu_offload` parameter to `INPUT_TYPES`, allowing users to select from `["auto", "enable", "disable"]` with default `"disable"` for performance (a sketch of the parameter definition follows this entry)
- Technical Details: See v1.63 Release Notes for complete explanation
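A minimal sketch of what that parameter can look like in a ComfyUI `INPUT_TYPES` definition; the other input names here are assumptions, not the node's exact signature:

```python
import folder_paths  # ComfyUI helper for listing files in the models folder


class NunchakuQwenImageLoraLoaderSketch:
    """Illustrative only: shows the user-selectable cpu_offload combo."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "model": ("MODEL",),
                "lora_name": (folder_paths.get_filename_list("loras"),),
                "lora_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
                # "disable" by default so users with enough VRAM avoid the
                # slowdown of automatic CPU offloading.
                "cpu_offload": (["auto", "enable", "disable"], {"default": "disable"}),
            }
        }
```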
- Attempted Fix: Addressed Issue #14 – Multi-stage workflow cache not resetting when LoRAs change
- Problem: Cache was not being reset when switching between different LoRA sets in multi-stage workflows, causing incorrect results
- Solution Attempted: Cache invalidation logic was added to reset cache when LoRAs change
- Current Status: ⚠️ Issue is still not fully resolved – The fix was implemented but the problem persists in some multi-stage workflow scenarios
- Technical Details: See v1.62 Release Notes for complete explanation
- MAJOR UPDATE: Removed ComfyUI-nunchaku integration requirement - now a fully independent custom node
- Simplified Installation: No batch scripts or manual file editing needed – just `git clone` and restart
- Cleaner Architecture: Node registration happens automatically via ComfyUI's built-in mechanism
- Backward Compatible: All existing LoRA files and workflows continue to work
- Technical Details: See v1.60 Release Notes for complete explanation
- Full release notes: https://github.com/ussoewwin/ComfyUI-QwenImageLoraLoader/releases/tag/v1.60
For detailed release notes from v1.0.0 to v1.57, please see RELEASE_NOTES_V1.0.0_TO_V1.57.md.
This document contains comprehensive information about all bug fixes, features, and technical details for earlier versions of the project.
This project is licensed under the MIT License.



