fix: prevent LoRA weight accumulation in Flux loader #718
Summary
Fixes LoRA weight accumulation in complex workflows (e.g. multiple `NunchakuFluxLoraLoader` nodes or dynamic LoRA switching in the UI). This version:

- Removes the `copy.deepcopy(model)` call, fixing the `__setstate__` crash (`TypeError: 'NoneType' object is not callable`, `AttributeError: 'pinned'`) that occurs in ComfyUI v0.4.0+ when deepcopying a LoRA-modified `ModelPatcher`.
- Introduces `nunchaku_base_model`, a clean clone stored on first use, so that all LoRA applications start from the same base.
- Resets LoRA state before each application (`reset_lora()`, `comfy_lora_meta_list = []`, `comfy_lora_sd_list = []`), as sketched below.
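A minimal sketch of the combined pattern, not the PR's actual code: the node method is written as a plain function, the attribute paths are assumptions, and `compose_lora()` is a hypothetical stand-in for whatever helper actually merges LoRA weights into the Nunchaku transformer.

```python
def load_lora(model, lora_name, lora_strength):
    # Store a clean clone the first time a loader sees this model, so every
    # LoRA application branches from the same unpatched base. Uses
    # ModelPatcher.clone() rather than copy.deepcopy(model), which triggers
    # the __setstate__ crash on ComfyUI v0.4.0+.
    if getattr(model, "nunchaku_base_model", None) is None:
        model.nunchaku_base_model = model.clone()

    # Always branch off the clean base, never the previously patched model,
    # so repeated executions cannot stack weights.
    new_model = model.nunchaku_base_model.clone()
    new_model.nunchaku_base_model = model.nunchaku_base_model

    # Reset any LoRA state left over from an earlier run.
    transformer = new_model.model.diffusion_model  # assumed attribute path
    transformer.reset_lora()
    new_model.comfy_lora_meta_list = []
    new_model.comfy_lora_sd_list = []

    # Re-apply the chain accumulated by upstream loaders, then this node's LoRA.
    for name, strength in getattr(model, "comfy_lora_meta_list", []) + [(lora_name, lora_strength)]:
        compose_lora(transformer, name, strength)  # hypothetical helper
        new_model.comfy_lora_meta_list.append((name, strength))
    return (new_model,)
```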
Testing

Fully verified in real-world scenarios:

- a single `NunchakuFluxLoraLoader` (e.g. Turbo LoRA)
- chained loaders (Turbo → Style), with strength changes and LoRA swaps
- `NunchakuFluxLoraStack`
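For illustration, a hedged sketch of the regression these scenarios guard against: re-executing the same chain must produce identical weights, not stacked ones. `load_lora` refers to the sketch above; `base_model`, the LoRA file names, and `weights_equal` are placeholders, not names from this PR.

```python
def run_chain(model):
    # Turbo -> Style chain, as in the chained-loader test scenario.
    (m,) = load_lora(model, "turbo.safetensors", lora_strength=1.0)
    (m,) = load_lora(m, "style.safetensors", lora_strength=0.8)
    return m

first = run_chain(base_model)
second = run_chain(base_model)  # simulate the UI re-executing the workflow
assert weights_equal(first, second), "LoRA weights accumulated across runs"
```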
Fixes #716

Tested on: ComfyUI v0.4.0, ComfyUI frontend v1.34.8, Windows 11, CUDA 13.0.