Running FlashVSR on lower VRAM without any artifacts.
[📃中文版本]
- Added long video pipeline that significantly reduces VRAM usage when upscaling long videos.
- Initial release of this project, introducing features such as `tiled_dit` to significantly reduce VRAM usage.
- Replaced Block-Sparse-Attention with Sparse_Sage, removing the need to compile any custom kernels.
- Added support for running on RTX 50 series GPUs.
- `mode`: `tiny` → faster (default); `full` → higher quality
- `scale`: `4` is always better, unless you are low on VRAM, then use `2`
- `color_fix`: Use a wavelet transform to correct the color of the output video.
- `tiled_vae`: Set to `True` for lower VRAM consumption during decoding, at the cost of speed.
- `tiled_dit`: Significantly reduces VRAM usage, at the cost of speed.
- `tile_size`, `tile_overlap`: Control how the input video is split into tiles.
- `unload_dit`: Unload the DiT model before decoding to reduce peak VRAM, at the cost of speed.
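To make the `tile_size`/`tile_overlap` options concrete, here is a minimal sketch of how a frame could be split into overlapping tiles. The helper names and the exact overlap policy are assumptions for illustration; the node's actual tiling logic may differ.

```python
# Illustrative tiling sketch (not the node's actual implementation):
# split a width x height frame into tiles of side `tile_size` whose
# consecutive tiles overlap by `overlap` pixels.

def tile_starts(length: int, tile_size: int, overlap: int) -> list[int]:
    """Return start offsets along one axis; the last tile is flush with the edge."""
    if tile_size >= length:
        return [0]
    stride = tile_size - overlap
    starts = list(range(0, length - tile_size, stride))
    starts.append(length - tile_size)  # final tile ends exactly at the border
    return starts

def split_into_tiles(width: int, height: int, tile_size: int, overlap: int):
    """Yield (x, y, w, h) boxes that together cover the whole frame."""
    for y in tile_starts(height, tile_size, overlap):
        for x in tile_starts(width, tile_size, overlap):
            yield (x, y, min(tile_size, width), min(tile_size, height))
```

A larger `tile_overlap` hides seams between tiles better but increases the number of tiles processed, which is why both options trade VRAM and quality against speed.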
```
cd ComfyUI/custom_nodes
git clone https://github.com/lihaoyun6/ComfyUI-FlashVSR_Ultra_Fast.git
python -m pip install -r ComfyUI-FlashVSR_Ultra_Fast/requirements.txt
```
- Download the entire `FlashVSR` folder with all the files inside it from here and put it in `ComfyUI/models`:
```
ComfyUI/models/FlashVSR
├── LQ_proj_in.ckpt
├── TCDecoder.ckpt
├── diffusion_pytorch_model_streaming_dmd.safetensors
└── Wan2.1_VAE.pth
```
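A quick way to confirm the folder is complete before launching ComfyUI is to check for the four files listed above. The helper below is a hypothetical pre-flight check, not part of the node itself; the file names come from the tree above.

```python
# Hypothetical pre-flight check that the FlashVSR model folder is complete.
from pathlib import Path

REQUIRED_FILES = [
    "LQ_proj_in.ckpt",
    "TCDecoder.ckpt",
    "diffusion_pytorch_model_streaming_dmd.safetensors",
    "Wan2.1_VAE.pth",
]

def missing_model_files(models_dir: str) -> list[str]:
    """Return the names of required FlashVSR files not found under models_dir."""
    root = Path(models_dir) / "FlashVSR"
    return [name for name in REQUIRED_FILES if not (root / name).exists()]
```

An empty return value means all four files are in place; anything listed still needs to be downloaded.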
- FlashVSR @OpenImagingLab
- Sparse_SageAttention @jt-zhang
- ComfyUI @comfyanonymous
