Flux Support #1176
Replies: 68 comments 67 replies
-
I assume I need to download the Anyline node from TheMistoAI and x-flux-comfyui from Xlabs-AI.
-
Is it possible to use the --fast flag from Comfy in the plugin too? I have a 40-series card.
-
Thank you very much for this. I have two Flux questions:
Thank you
-
I run into a generation error that seems to be caused by a missing clip. All my clips work fine in ComfyUI, but is it mixing the T5 up?
It is followed by a lot of block and shape size mismatch messages in the log before the error.
I have clip_g.safetensors in the clip folder. Do I need to rename t5 base or t5xxl_fp8_e4m3fn.safetensors to t5.safetensors?
-
I swapped the name in resources.py as a test and now it is generating:
-
There are models with custom-trained CLIP-L text encoders. One example: Accompanying custom TE found here:
-
Flux ControlNet inpainting (just found this, don't know if it helps make the Flux model more usable):
-
I tried to add a ControlNet to Flux, but it seems it has not yet been installed on Interstice cloud. I hope it can be added soon.
-
Hello, I have tested flux-Canny-ControlNet-v3.safetensors and MistoLine (mistoline-flux.dev-v1), and both of them produce poor image quality that cannot compare to XL's ControlNet. Do you have any recommended Flux ControlNet for testing?
-
Hello @Acly, I have added translations for the text added in version 1.24.0.
-
I have installed MistoLine, but I could not find it in Krita. I can find Canny or Depth, though.
-
Got a strange error: the Flux model is recognized as SD 1.5.
-
Hi, are BNB-NF4 models working?
-
Hello @Acly, I have added translations for the text added in version 1.25.0.
-
Is support for bnb nf4 possible?
-
I tried using the model flux-dev-bnb-nf4-v2.safetensors and got: Server execution error: Error(s) in loading state_dict for Flux:
-
🇹🇼 Thanks from Taiwan: thank you, Acly! Our creator community in Taiwan sincerely thanks this open-source developer for letting us seamlessly integrate AI image generation into our drawing workflow within the familiar Krita interface. It is not just a tool, but a leap in creative efficiency and quality. Kontext helps us precisely control composition and style consistency, and Flux lets us flexibly adjust professional parameters such as color, masks, and reference images, truly making AI a reliable creative partner. On behalf of all users in Taiwan, we thank Acly for the long-term development and updates.
-
Can we now use fp4/int4 models too? That would be rad!
-
Hi there everyone,
I installed t5 and clip_l in the clip folder, and ae in the vae folder as well.
-
I am having difficulty using the generation mode features like fill, expand, remove content, etc. Every time I hit a generation mode button, say Remove Content, more images just keep generating, and I am unsure how to apply the effect. Other than that it works, but slowly, even with a GGUF (I am using dev Q8 GGUF). What must I do to use the full functionality and maybe make generation faster?
-
So, to use Kontext, you copy the built-in Flux preset and change Model Checkpoint to flux1-dev-kontext_fp8_scaled?
-
And what's the difference between
-
Anything special we need to take care of with Flux Krea, or can this be a drop-in replacement for e.g. a flux schnell workflow?
-
I'm getting the attached error on lots of Flux models. Are some just not supported, or am I missing something? Also, many GGUF extensions are not visible to add in the program. Any information would be great. A few others say "sd3 sample model used".
-
This model and a number of others don't show up. Any ideas? I also get a clip error on some models.
-
That Nunchaku Kontext is pretty fast, and I was only trying it with a 1080x1080 image and a Lora on top. It took less than 45 seconds on my laptop for an image relighting, and I scheduled 5 more, all under 5 minutes. The GPU was not as exhausted as before, the fan was almost silent, and I'm running int4 since I have a non-Blackwell card. Can't wait to try Krea later this week. Very easy installation. Thank you very much.
EDIT: I tried the Krea model and it's nice and fast with Nunchaku, and pretty impressive. I also tried Kontext with animation, but I keep getting "cannot generate animation frames with an edit model". I really hope this will be something we can use for animation in a future release. Cheers!
-
Finally Flux Nunchaku. Very happy with the update.
-
I can only find 2 custom Nunchaku models on CivitAI; below are the two I found, and they're fantastic. Thank you Acly for getting them supported; on the RTX 3060 they're as fast as SDXL. https://civitai.com/models/1831757/flux-krea-dev-nunchaku-svdq-fp4-base-model https://civitai.com/models/1545303?modelVersionId=1861654
-
Hi, I'm trying to use NF4 Flux and I need to edit comfy_workflow.py
-
Is there any real difference between
-
Flux: Current State
Plugin version: 1.38.0
Full Support (1.38.0)
Since version 1.38.0 you can now install Flux via the plugin's managed server. This includes Inpainting, ControlNet, Reference (Redux), Turbo LoRA for Live paint, Kontext, and SVDQuant/Nunchaku support.
-- Instructions below are now mostly useful if you are using a custom ComfyUI setup. --
Model Download
There are various options to get started with Flux; they differ in complexity and hardware requirements.
Flux can be used with the managed ComfyUI installed by the plugin, or a custom install. Either way you need to download the models to the indicated folder where your models are stored. For managed installs, the models folder is located directly inside your server install folder.
Option 1: Full Checkpoint
The easiest way is to download a full checkpoint which includes everything in one package. Choose any of the following models:
The models must be placed into the /models/checkpoints folder. Then you will be able to choose them from the Checkpoint list in the Style settings. 16 GB VRAM recommended, but it will work with less.
Option 2: Separate Diffusion Model
Many distributions of Flux contain only the diffusion model. In order to use the model you also need the text encoders and image auto-encoder, as separate files. Because all variants of Flux share the same text/image encoders, this gives you more flexibility and saves disk space.
Install the GGUF custom node
This is only needed if you have a custom ComfyUI install and want to use GGUF files. Please install city96/ComfyUI-GGUF; you can also find it in ComfyUI Manager.
Download the Text Encoders
Clip-L (clip_l.safetensors) → /models/clip
T5-XXL (t5xxl_fp8_e4m3fn.safetensors) → /models/clip
You can alternatively use T5 quantizations from city96/t5-v1_1-xxl-encoder-gguf; Q5_K_M is a good trade-off.
Download the VAE
Flux AE (ae.safetensors) → /models/vae
Download a Diffusion Model
Pick one, depending on your available VRAM and desired quality:
These files need to go into the /models/diffusion_models folder! The plugin will show them in the same place as regular checkpoints, and you can use them in the same way, as long as you have all prerequisite models.
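As a convenience, a few lines of Python can sanity-check that a manual download ended up in the folders listed above. This is only a sketch: the file names are the defaults mentioned in this post, so adjust them if you picked other variants (e.g. a GGUF T5 quantization).

```python
from pathlib import Path

# Expected layout for the "separate diffusion model" setup described above.
# File names match the defaults mentioned in this post; adjust if you
# downloaded other variants (e.g. a GGUF T5 quantization).
EXPECTED = {
    "models/clip": ["clip_l.safetensors", "t5xxl_fp8_e4m3fn.safetensors"],
    "models/vae": ["ae.safetensors"],
}

def missing_files(root: str) -> list[str]:
    """Return the expected model files that are not present under root."""
    root_path = Path(root)
    missing = []
    for folder, names in EXPECTED.items():
        for name in names:
            if not (root_path / folder / name).is_file():
                missing.append(f"{folder}/{name}")
    # The diffusion model itself can be any file in models/diffusion_models.
    dm = root_path / "models" / "diffusion_models"
    if not dm.is_dir() or not any(p.is_file() for p in dm.iterdir()):
        missing.append("models/diffusion_models/<your flux model>")
    return missing

if __name__ == "__main__":
    import sys
    problems = missing_files(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("All files found!" if not problems else "Missing: " + ", ".join(problems))
```

Run it with your server install folder (for managed installs) or ComfyUI folder (for custom installs) as the argument.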
Inpaint (Fill, Expand, Add/Remove Object)
Use one or the other method for inpainting (not both):
Option 1: AlimamaCreative ControlNet
This can be used in combination with any Flux [dev] model and works similarly to existing inpainting models.
Download it to /models/controlnet and rename the file to FLUX.1-dev-Controlnet-Inpainting-Beta.safetensors.
If you don't want to use the inpaint model, use the "Generate (Custom)" option and uncheck "Seamless".
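If you prefer to script the rename step, a minimal sketch (the example source file name is hypothetical; use whatever name the file was actually downloaded with):

```python
from pathlib import Path

# The file name the plugin expects, as stated above.
TARGET_NAME = "FLUX.1-dev-Controlnet-Inpainting-Beta.safetensors"

def rename_inpaint_model(src: Path) -> Path:
    """Rename the downloaded ControlNet file to the name the plugin expects."""
    return src.rename(src.with_name(TARGET_NAME))

# Example (hypothetical downloaded file name):
# rename_inpaint_model(Path("models/controlnet/downloaded_inpainting_model.safetensors"))
```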
Option 2: Flux Fill
Download the Flux Fill model (safetensors or GGUF) and put it into /models/diffusion_models. You need to select it as your model checkpoint in the Style settings.
Now use this style for inpainting (Fill/Expand/Add). You must have a selection and 100% strength; it won't work for anything else.
Kontext (Edit model)
The model goes into /models/diffusion_models. You also need the VAE and text encoders from "Option 2: Separate Diffusion Model" above.
See the release page for more information on how to use it.
Redux (Reference Control Layer)
To make use of the Reference control layers with Flux, download the "Redux" model from Flux.1-Tools.
The download consists of two files: the clip vision model goes into /models/clip_vision, and the Redux model into /models/style_models.
ControlNet
The following models are supported:
They all go into the /models/controlnet folder, and two of them need to be renamed after download. They are meant to be used with the "dev" version of Flux and don't work well with "schnell". They're also not available on cloud for now.
Flux Tools Lora
You can use the Lora provided by Flux-Tools as a replacement for the Depth and Canny control models listed above. They will be preferred if present.
Both go into the /models/loras folder.
Common Issues
The Flux workload is not installed
Try restarting Krita if you copied in a new model while it was running
Text encoder or VAE not found
Make sure models are in the correct folder and have the correct name; otherwise they cannot be found.
Please check the client.log file for more info.
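When client.log gets long, a tiny filter can help surface the relevant lines. The keywords below are just common symptoms reported in this thread, not an official error list, and the log path is whatever your install uses:

```python
from pathlib import Path

# Keywords based on symptoms reported in this thread; purely illustrative.
KEYWORDS = ("not found", "error", "mismatch")

def relevant_lines(log_text: str) -> list[str]:
    """Return log lines that mention any of the keywords (case-insensitive)."""
    return [
        line for line in log_text.splitlines()
        if any(k in line.lower() for k in KEYWORDS)
    ]

if __name__ == "__main__":
    log = Path("client.log")  # adjust the path to wherever your install keeps its logs
    if log.is_file():
        print("\n".join(relevant_lines(log.read_text(errors="replace"))))
```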