Click to view the Chinese documentation (中文文档)
Diffusion-Pipe In ComfyUI Custom Node is a powerful extension that brings complete diffusion-model training and fine-tuning to ComfyUI. It lets you configure and launch training for a wide range of advanced AI models from ComfyUI's graphical interface, supports both LoRA and full fine-tuning, and covers the most popular image- and video-generation models available today. You can train a Qwen LoRA with 16 GB of VRAM.
Video Demo: https://www.bilibili.com/video/BV1CRk9BYErw/?vd_source=7fd137e57a445e84bd9ffea9b632c98d
- 20251130: Z-Image support; both Diffusers-format and ComfyUI-format models are supported
You need to install the latest diffusers development version to support training (a quick version check follows the changelog below), e.g.:
```
E:\comfyui\ComfyUI_windows_portable\python_embeded_DP\python.exe -m pip install git+https://github.com/huggingface/diffusers
```
When training Z-Image-Turbo, use:
```
merge_adapters = ['/data2/imagegen_models/comfyui-models/zimage_turbo_training_adapter_v1.safetensors']
```
Model files can be used in either ComfyUI or Diffusers format.
If training Z-Image-Turbo, make sure to merge the adapter.
Credit to Ostris and AI Toolkit for making this adapter.
Z-Image LoRAs are saved in ComfyUI format, which differs from the Diffusers format.
- 20251026: support eval
- 20251030: supports training Aura models
- 20251103: supports multi-image editing (Qwen-Image-Edit-2509)
- 20251105: support mask training; fix an off-by-one error in plots when using examples as the x-axis; allow using captions.json without tar files; add the reset_optimizer flag and the --reset_optimizer_params flag (reset optimizer parameters, which allows resetting the optimizer when resuming training); fix a datasets issue; cast to float16 during dataset caching to cut on-disk size in half
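To confirm that the diffusers development build required for Z-Image training actually got installed, you can print its version from the training interpreter. The path below is the example interpreter used in the note above; adjust it to your own install (this is only a sanity check):
```
E:\comfyui\ComfyUI_windows_portable\python_embeded_DP\python.exe -c "import diffusers; print(diffusers.__version__)"
```
A build installed from the GitHub main branch normally reports a version ending in .dev0; a plain release number means the development install did not take effect.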
Hugging Face: https://huggingface.co/TianDongL/DiffusionPipeInComfyUI_Win
You still need to download Microsoft MPI to prepare the DeepSpeed environment on Windows: https://www.microsoft.com/en-us/download/details.aspx?id=105289
Download it, install it, and restart your computer.
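After the reboot, you can confirm that MS-MPI is on your PATH (an optional sanity check; mpiexec is installed by the MS-MPI runtime):
```
where mpiexec
```
If this prints a path such as C:\Program Files\Microsoft MPI\Bin\mpiexec.exe, the runtime is available to DeepSpeed.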
```
git clone --recurse-submodules https://github.com/TianDongL/Diffusion_pipe_in_ComfyUI_Win.git
```
- If you haven't installed the submodules, follow these steps
- If you don't complete this step, training will not work
```
git submodule init
git submodule update
```
```
conda create -n comfyui_DP python=3.11
conda activate comfyui_DP
pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu128
```
- You need to install pre-compiled wheels for Windows. You can find the compiled wheels in my Releases. This project requires deepspeed==0.17.0: https://github.com/TianDongL/Diffusion_pipe_in_ComfyUI_Win/releases
```
pip install E:/ComfyUI/deepspeed-0.17.0+720787e7-cp311-cp311-win_amd64.whl
```
- And flash-attn==2.8.1:
```
pip install E:/ComfyUI/flash_attn-2.8.1-cp311-cp311-win_amd64.whl
```
- Also bitsandbytes compiled for Windows:
```
pip install bitsandbytes --prefer-binary --extra-index-url=https://jllllll.github.io/bitsandbytes-wheels/windows/index.html
cd /ComfyUI/custom_nodes/Diffusion_pipe_in_ComfyUI_Win
pip install -r requirements.txt
```
- You are responsible for backing up your portable environment
- My wheels are all compiled for Torch 2.7.1 + cu128 and Python 3.11 (cp311); a quick environment check follows below
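Before moving on, you can confirm that the environment actually matches these wheels. This is an optional sanity check run inside the activated comfyui_DP environment (swap in the portable python.exe path if you used that route instead):
```
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import deepspeed, flash_attn; print(deepspeed.__version__, flash_attn.__version__)"
```
The first line should report Torch 2.7.1 with CUDA 12.8 and True; the second should report DeepSpeed 0.17.0 and flash-attn 2.8.1. If any import fails, the corresponding wheel did not install into this interpreter.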
Skip this step if you already meet the requirements
```
E:/ComfyUI_windows_portable/python_embeded/python.exe -m pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu128
```
Install the necessary dependencies directly:
You need to install pre-compiled wheels for Windows. You can find the compiled wheels in my Releases. This project requires deepspeed==0.17.0 https://github.com/TianDongL/Diffusion_pipe_in_ComfyUI_Win/releases
```
E:/ComfyUI_windows_portable/python_embeded/python.exe -m pip install E:/ComfyUI_windows_portable/python_embeded_DP/deepspeed-0.17.0+720787e7-cp311-cp311-win_amd64.whl
```
And flash-attn==2.8.1:
```
E:/ComfyUI_windows_portable/python_embeded/python.exe -m pip install E:/ComfyUI_windows_portable/python_embeded_DP/flash_attn-2.8.1-cp311-cp311-win_amd64.whl
```
And bitsandbytes compiled for Windows:
```
E:/ComfyUI_windows_portable/python_embeded/python.exe -m pip install bitsandbytes --prefer-binary --extra-index-url=https://jllllll.github.io/bitsandbytes-wheels/windows/index.html
cd /ComfyUI/custom_nodes/Diffusion_pipe_in_ComfyUI_Win
E:/ComfyUI_windows_portable/python_embeded/python.exe -m pip install -r requirements.txt
```
To get you started quickly, I've provided a pre-configured ComfyUI workflow file:
📋 Click to Import Complete Workflow
Simply drag this file into the ComfyUI interface to import the complete training workflow with all necessary node configurations.
Models can be stored in the ComfyUI model directory
Disable the Train node when debugging
The kill port option stops all monitoring processes on the current port
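If a monitoring port is still occupied after an interrupted run, you can also locate and terminate the listener manually with standard Windows commands (port 6006 below is only an example; use whatever port your setup monitors):
```
:: find the process listening on the port (the last column is the PID)
netstat -ano | findstr :6006
:: replace <PID> with the PID reported above
taskkill /PID <PID> /F
```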
- 🎯 Visual Training Configuration: Graphically configure training parameters through ComfyUI nodes
- 🚀 Multi-Model Support: Support for 20+ latest Diffusion models
- 💾 Flexible Training Methods: Support for both LoRA training and full fine-tuning
- ⚡ High-Performance Training: Distributed training support based on DeepSpeed
- 📊 Real-Time Monitoring: Integrated TensorBoard for monitoring training progress
- 🎥 Video Training: Support for training video generation models
- 🖼️ Image Editing: Support for training image editing models
- On Windows, 16 GB of VRAM appears to be enough to train Qwen, which is somewhat surprising (see the VRAM check after the requirements list)
- Operating System: Windows 10/11
- ComfyUI: Latest version
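To see whether your GPU actually has that much headroom, you can query the driver directly; nvidia-smi ships with the NVIDIA driver, so no extra setup is needed:
```
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
```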
This plugin supports over 20 of the latest Diffusion models, including:
| Model | LoRA | Full Fine-Tune | FP8 / Quantization |
|---|---|---|---|
| SDXL | ✅ | ✅ | ❌ |
| Flux | ✅ | ✅ | ✅ |
| LTX-Video | ✅ | ❌ | ❌ |
| HunyuanVideo | ✅ | ❌ | ✅ |
| Cosmos | ✅ | ❌ | ❌ |
| Lumina Image 2.0 | ✅ | ✅ | ❌ |
| Wan2.1 | ✅ | ✅ | ✅ |
| Chroma | ✅ | ✅ | ✅ |
| HiDream | ✅ | ❌ | ✅ |
| SD3 | ✅ | ❌ | ✅ |
| Cosmos-Predict2 | ✅ | ✅ | ✅ |
| OmniGen2 | ✅ | ❌ | ❌ |
| Flux Kontext | ✅ | ✅ | ✅ |
| Wan2.2 | ✅ | ✅ | ✅ |
| Qwen-Image | ✅ | ✅ | ✅ |
| Qwen-Image-Edit-2509 | ✅ | ✅ | ✅ |
| HunyuanImage-2.1 | ✅ | ✅ | ✅ |
| AuraFlow | ✅ | ❌ | ✅ |
| Z-Image | ✅ | ✅ | ❌ |
This project is open-sourced under the GPL License
Issues and Pull Requests are welcome!
- Fork the project
- Create a feature branch
- Commit your changes
- Submit a Pull Request
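These steps map onto a standard git flow, for example (the branch name below is only a placeholder):
```
git checkout -b feature/my-change
git add .
git commit -m "Describe your change"
git push origin feature/my-change
```
Then open a Pull Request from your fork on GitHub.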
Thanks to the following projects and teams:
- ComfyUI team
- @tdrussell, the original author of Diffusion_Pipe
- Hugging Face Diffusers
- DeepSpeed team
- Original authors of all models



