@Cor-net Cor-net commented Dec 11, 2025

Summary

Fixes LoRA weight accumulation in complex workflows (e.g. multiple NunchakuFluxLoraLoader nodes or dynamic LoRA switching in UI).

This version:

  • ✅ Removes the unsafe copy.deepcopy(model) call, fixing the __setstate__ crash (TypeError: 'NoneType' object is not callable, AttributeError: 'pinned') that occurs in ComfyUI v0.4.0+ when deepcopying a LoRA-modified ModelPatcher.
  • ✅ Introduces nunchaku_base_model — a clean clone stored on first use — to ensure all LoRA applications start from the same base.
  • ✅ Explicitly resets transformer LoRA state (reset_lora(), comfy_lora_meta_list = [], comfy_lora_sd_list = []).
  • ✅ Removes duplicate LoRA entries by path before applying new ones.
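The reset and dedupe steps above can be sketched as follows. This is an illustrative, self-contained sketch, not the actual plugin code: `FakeTransformer` is a stand-in for the Nunchaku transformer, and only the names `reset_lora`, `comfy_lora_meta_list`, and `comfy_lora_sd_list` come from the PR description; everything else is assumed for the example.

```python
class FakeTransformer:
    """Stand-in for the Nunchaku transformer (illustrative only)."""

    def __init__(self):
        self.comfy_lora_meta_list = []
        self.comfy_lora_sd_list = []
        self.applied = []  # (path, strength) pairs actually applied

    def reset_lora(self):
        """Clear any previously applied LoRA weights."""
        self.applied = []


def dedupe_by_path(entries):
    """Drop older entries that share a path, keeping the last strength."""
    merged = {}
    for path, strength in entries:
        merged.pop(path, None)   # remove any earlier entry for this path
        merged[path] = strength  # re-insert so keep-last order is preserved
    return list(merged.items())


def apply_loras(transformer, entries):
    """Reset LoRA state, dedupe by path, then apply the surviving entries."""
    transformer.reset_lora()
    transformer.comfy_lora_meta_list = []
    transformer.comfy_lora_sd_list = []
    for path, strength in dedupe_by_path(entries):
        transformer.applied.append((path, strength))
    return transformer
```

With two `turbo` entries queued, only the most recent strength survives, so re-running a workflow never accumulates stale LoRA weights:

```python
t = apply_loras(FakeTransformer(), [("turbo", 1.0), ("style", 0.6), ("turbo", 0.8)])
# t.applied is [("style", 0.6), ("turbo", 0.8)]
```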

Testing

Fully verified in real-world scenarios:

  • ✅ Single NunchakuFluxLoraLoader (e.g. Turbo LoRA),
  • ✅ Two chained loaders (Turbo → Style), with strength changes and LoRA swaps,
  • ✅ NunchakuFluxLoraStack,
  • ✅ Dynamic LoRA switching in UI → no cached generations, no noise after 5+ changes.

🔍 Note on workflow behavior: In ComfyUI’s DAG execution, changing only the second LoRA node does not re-execute the first (cached). Full isolation is guaranteed when using a single LoraStack or when the entire chain is re-triggered (e.g. via new seed). The node-level fix ensures correctness regardless of workflow topology.

Fixes #716
Tested on: ComfyUI v0.4.0, ComfyUI frontend v1.34.8, Windows 11, CUDA 13.0.

@Cor-net force-pushed the fix-flux-lora-leakage branch from 7d10207 to cf4444c on December 12, 2025 at 14:00.

