feat: Update qwen to support in loras version 1.0.0 (standalone) #665
base: main
Conversation
…y nunchaku update)
@GavChap How can I install it? Can you provide a tutorial?
Looking forward to the official release of the merge.
It's way easier on Linux than Windows, but the general gist is: if you already have the nunchaku ComfyUI nodes installed, pull this branch into that install. Otherwise, the easiest way is to download the source .zip from https://github.com/GavChap/ComfyUI-nunchaku/archive/refs/heads/qwen-lora-suport-standalone.zip and unzip it into ComfyUI/custom_nodes/ComfyUI-nunchaku (the exact directory name doesn't really matter, it just has to be a subdirectory of custom_nodes). You also need the latest release (1.0.0) of Nunchaku installed via a wheel, as you would for getting Qwen running anyway.
I am using version 1.0.1, but calling the LoRA does not work.
I tried multiple LoRAs, but the generated images still look the same as plain Qwen Image.
Any errors it hits while loading LoRAs should be written to the ComfyUI log. Can you link to one that doesn't work? I've tested it with Lightning LoRAs and several realism LoRAs, and they all seem to work fine.
@zhangyi90825-tech I believe I've now fixed it, please do another pull to update it.
Thanks! I am already using it and the effect is very good. But there are a couple of bugs: 1. After switching LoRAs multiple times, VRAM overflows and the graphics driver restarts. 2. When using Nunchaku Qwen Image and then switching back to Nunchaku Flux to generate images, it crashes.
Works pretty well for me and loads most keys, but it did not work when I had CPU offloading enabled. I got this error: RuntimeError: The size of tensor a (128) must match the size of tensor b (160) at non-singleton dimension 1. Gemini managed to fix this issue by disabling and re-enabling the offloader in the forward function of the ComfyQwenImageWrapper class in wrappers/qwenimage.py, along the lines of the sketch below. Now it works great with CPU offloading. I don't know if this is the best solution, but it works for me.
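A minimal sketch of the described workaround, assuming the wrapper holds the Nunchaku model and that the model exposes an offload toggle. The `offload` attribute and `set_offload` method are illustrative assumptions, not the actual ComfyUI-nunchaku API:

```python
# Sketch only: toggle CPU offloading off for the duration of the forward
# pass so LoRA-patched weights are resident on the GPU with their
# post-patch shapes. `offload` / `set_offload` are assumed names.
class ComfyQwenImageWrapper:
    def __init__(self, model):
        self.model = model  # the underlying Nunchaku Qwen-Image transformer

    def forward(self, x, timestep, context, **kwargs):
        was_offloading = getattr(self.model, "offload", False)
        if was_offloading:
            self.model.set_offload(False)  # disable offloading for this pass
        try:
            return self.model(x, timestep, context, **kwargs)
        finally:
            if was_offloading:
                self.model.set_offload(True)  # restore offloading afterwards
```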
@zhangyi90825-tech When it crashes, please paste the ComfyUI log in here.
Okay! I will send it here next time.
@MarkShark2 Thanks for that, I've included that fix.
'🔥 - 18 Nodes not included in prompt but is activated'
@GavChap This appeared for me.
Where it states "Could not find/apply LoRA to 120 modules", it doesn't seem to mean much, as the LoRA is still applied to the right number of modules with the right weights. However, I have noticed that it detects the model as Flux, which is weird. I'll look into that.
Yes, the model is correctly loaded. I tested it, and it's roughly the same as the original version with LoRA.
It seems to be certain LoRA trainers creating layers that shouldn't be there, because some of mine trained with AI Toolkit don't have any errors about layers and work perfectly.
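For context, a loader along these lines typically normalizes each LoRA key and looks it up against the model's module names, counting whatever it cannot place (the extra trainer-specific layers mentioned above). The following is a minimal illustrative sketch under those assumptions, not the PR's actual code; the key formats are examples of common trainer conventions:

```python
import re

def report_lora_coverage(module_names: set, lora_state: dict) -> None:
    """Sketch: match LoRA keys to module names and report leftovers."""
    applied, skipped = set(), set()
    for key in lora_state:
        # Strip the ".lora_down.weight" / ".lora_up.weight" / ".alpha"
        # style suffixes used by several trainer formats.
        base = re.sub(r"\.(lora_down|lora_up|lora_A|lora_B|alpha)(\.weight)?$", "", key)
        # Some trainers emit all-underscore names such as
        # "lora_unet_transformer_blocks_0_attn_to_q"; mapping them back
        # to dotted module paths is a lossy heuristic.
        if base.startswith("lora_unet_"):
            base = base[len("lora_unet_"):].replace("_", ".")
        (applied if base in module_names else skipped).add(base)
    print(f"Applied LoRA compositions to {len(applied)} modules.")
    if skipped:
        print(f"Could not find/apply LoRA to {len(skipped)} modules.")
```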
You're right. I loaded two LoRAs: one is Lightning's acceleration LoRA, which runs perfectly, and the other one I downloaded randomly.
As an aside, apparently that's correct for ComfyUI: Qwen is a "Flux" model type.
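Presumably the frontend buckets checkpoints into a model family from the state-dict layout rather than the file name; the sketch below only illustrates that general idea. The key prefix and the mapping are assumptions for the example, not ComfyUI's actual detection logic:

```python
# Illustrative only: classify a checkpoint into a model "family" by its
# state-dict key layout. DiT-style checkpoints sharing a layout would
# land in the same bucket, which is how Qwen-Image can report as "Flux".
def detect_family(state_dict_keys) -> str:
    if any(k.startswith("transformer_blocks.") for k in state_dict_keys):
        return "Flux"  # assumed prefix, purely for illustration
    return "unknown"
```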
Great to finally see a workable solution. Thanks for your effort, and we hope Dr. Li can take some time to review it!
…ra are going to cause a VRAM allocation issue and add safety margin parameter.
@GavChap "Applied LoRA compositions to 480 modules."
@cazs521 What LoRA is it? Can you link me to it?
An error is reported when batch size > 1; wrappers/qwenimage.py needs to be modified (see the sketch below for the general shape of such a fix).
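The exact change isn't shown in the thread. As a hedged illustration only: a common cause of batch-size errors in wrappers like this is a tensor prepared for batch size 1, and the usual repair looks something like the helper below (`match_batch` is a made-up name for the example):

```python
import torch

def match_batch(t: torch.Tensor, batch: int) -> torch.Tensor:
    # If a cached embedding or mask was built for batch size 1, repeat
    # it to the actual batch size instead of letting shapes mismatch.
    if t.shape[0] == 1 and batch > 1:
        return t.expand(batch, *t.shape[1:])
    return t
```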
The weak image style turned out to be due to the lack of steps, CFG, and an acceleration model.
…ones with all underscores, and stop yelling about unused blocks as it's fine.
I'm getting this error when trying to load a LoRA for Qwen using the Nunchaku FLUX LoRA Loader node. `got prompt Prompt executed in 8.72 seconds`
Yes, because you need to use the new Nunchaku Qwen Lora Loader / Lora Stack instead. |
@zhangyi90825-tech Can you link me to the LoRA so I can test? Also, if you can upload an image with your workflow (with the node disabled), that'd help. I also think you're on an older version, as your DiT loader doesn't have the extra field for VRAM auto-switching to CPU offload.
Thanks! After updating to the new version, I can use it.
https://github.com/ussoewwin/ComfyUI-QwenImageLoraLoader is also an alternative |
Force-pushed from 83c670a to e45f9b6
# Conflicts:
#	__init__.py
#	nodes/models/flux.py
#	nodes/models/qwenimage.py
EXPERIMENTAL. Expect OOM. Credit: nunchaku-tech/ComfyUI-nunchaku#665
RAM and dimension FIX:





Updates only the ComfyUI package to support Qwen LoRAs, without updating the nunchaku version from 1.0.0.
Gives the following new nodes: Nunchaku Qwen Lora Loader and Nunchaku Qwen Lora Stack.
These may be daisy-chained in order to load more (see the sketch below).
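As an illustration of the daisy-chain mechanic (not the PR's actual node classes): each stack node appends its own (lora_name, strength) entry to the incoming list and hands the grown list to the next node, so the final consumer sees every LoRA in order. The file names below are made up:

```python
# Sketch of a daisy-chainable "LoRA stack" node function.
def lora_stack_node(lora_name: str, strength: float, lora_stack=None):
    entries = list(lora_stack) if lora_stack else []
    entries.append((lora_name, strength))
    return (entries,)  # ComfyUI nodes return tuples of their outputs

# Chaining two stack nodes:
(stack,) = lora_stack_node("lightning_4step.safetensors", 1.0)
(stack,) = lora_stack_node("realism.safetensors", 0.8, stack)
# stack now holds both entries, applied in order by the loader.
```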
Built upon the work of:
nunchaku-tech/nunchaku#680
and
nunchaku-tech/nunchaku#754
In my tests this loads almost all the LoRAs I've tried with the correct weighting, so images come out very well.
Known issues:
Doesn't work with CPU offloading