ControlSR: Taming Diffusion Models for Consistent Real-World Image Super Resolution

This repository contains the official implementation of the following paper:

ControlSR: Taming Diffusion Models for Consistent Real-World Image Super Resolution
Yuhao Wan 1,2∗†, Peng-Tao Jiang 2∗, Qibin Hou 1‡, Hao Zhang 2, Jinwei Chen 2, Ming-Ming Cheng 1, Bo Li 2
1 Nankai University
2 vivo Mobile Communication Co., Ltd
∗ Equal contribution
† Intern at vivo Mobile Communication Co., Ltd
‡ Corresponding author

[Paper] [Code]

💡 Brief Introduction

ControlSR is a new Stable Diffusion (SD)-based real-world image super-resolution (Real-ISR) method with state-of-the-art performance. ControlSR tames diffusion models by effectively exploiting LR information: the proposed DPM and GSPM provide refined control signals, so ControlSR produces results that are more consistent with the LR image while maintaining strong generative capabilities.

Overview of our ControlSR:

🐍 Installation

# clone this repository
git clone https://github.com/HVision-NKU/ControlSR.git
cd ControlSR

pip install -r requirements.txt

🔄 Quick Inference

Download the pretrained ControlSR model from Google Drive and put it into experiment/.

The RealSR and DRealSR test sets can be downloaded from the StableSR repository.

python inference.py \
--input [path to the testsets] \
--config configs/model/cldm.yaml \
--ckpt experiment/ControlSR.ckpt \
--steps 50 \
--sr_scale 4 \
--color_fix_type adain \
--output experiment/result/ \
--device cuda \
--cfg_scale 7 \
--ELA_flag \
--LLA_flag \
--ELA_steps 11 \
--LLA_steps 45 \
--ELA_scale 0.01 \
--LLA_scale 0.01

We use BasicSR to compute PSNR and SSIM, and pyiqa to compute LPIPS, NIQE, MUSIQ, MANIQA, and CLIPIQA. The pyiqa settings are as follows:

import pyiqa

lpips_metric = pyiqa.create_metric('lpips', crop_border=4)
niqe_metric = pyiqa.create_metric('niqe')
musiq_metric = pyiqa.create_metric('musiq')
maniqa_metric = pyiqa.create_metric('maniqa-pipal')
clipiqa_metric = pyiqa.create_metric('clipiqa')
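Each pyiqa metric object returns one score per image. A small helper can average no-reference metric scores over a result folder; this is a sketch, and `average_scores` is hypothetical, not part of this repository (full-reference metrics such as LPIPS additionally take a reference image and would need a second path argument):

```python
from statistics import mean

def average_scores(image_paths, metrics):
    """Average each no-reference metric over a list of image paths.

    `metrics` maps a metric name to a callable that takes an image
    path and returns a float score (e.g. a pyiqa metric object such
    as niqe_metric or musiq_metric).
    """
    report = {}
    for name, metric in metrics.items():
        scores = [float(metric(str(p))) for p in image_paths]
        report[name] = mean(scores)
    return report

# Example with stub metrics; real usage would pass the pyiqa objects,
# e.g. {'niqe': niqe_metric, 'musiq': musiq_metric, ...}
paths = ["img_0.png", "img_1.png", "img_2.png"]
stubs = {'const': lambda p: 0.5, 'path_len': lambda p: float(len(p))}
print(average_scores(paths, stubs))
```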

🧠 Train

Prepare training data

  1. Download DIV2K, Flickr2K, DIV8K, OST, and the first 10K face images from FFHQ.
  2. Prepare the file list.
# find
find [path to the datasets] -type f >> dataset/files.list

# shuf
shuf dataset/files.list > dataset/files_shuf.list

# split
head -n 25000 dataset/files_shuf.list > dataset/train.list
tail -n +25001 dataset/files_shuf.list > dataset/val.list
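The find/shuf/head/tail pipeline above can equivalently be written as a short Python helper; this is a sketch, and `make_file_lists` is a hypothetical name, not part of the repository:

```python
import random
from pathlib import Path

def make_file_lists(dataset_root, out_dir, n_train=25000, seed=0):
    """Collect every file under dataset_root, shuffle, and split
    into train/val lists (mirrors the shell pipeline above)."""
    files = sorted(str(p) for p in Path(dataset_root).rglob('*') if p.is_file())
    random.Random(seed).shuffle(files)  # seeded for reproducibility
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / 'train.list').write_text('\n'.join(files[:n_train]) + '\n')
    (out / 'val.list').write_text('\n'.join(files[n_train:]) + '\n')
    return files
```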
  3. Fill in the training configuration file and validation configuration file with appropriate values.

Download the pretrained models

  1. Download pretrained Stable Diffusion v2.1: put the v2-1_512-ema-pruned.ckpt into weights/.
  2. Download pretrained CLIP: put the open_clip_config.json and open_clip_pytorch_model.bin into weights/laion2b_s32b_b79k.
  3. Fill in the modules file with appropriate values.

Prepare the initial weights

python scripts/make_init_weight.py \
--root_path [path to the ControlSR] \
--cldm_config configs/model/cldm.yaml \
--sd_weight weights/v2-1_512-ema-pruned.ckpt \
--output weights/init_controlsr.ckpt
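Conceptually, this follows the usual ControlNet recipe: parameters whose names match the SD checkpoint are copied from it, and the remaining (new) control parameters are zero-initialized. The sketch below illustrates that idea only; `make_init_state` is hypothetical and is not the actual logic of `scripts/make_init_weight.py`:

```python
def make_init_state(target_params, sd_state, zeros_like):
    """Build an initial state dict for the control model.

    target_params: name -> reference tensor for every parameter of
                   the new (control) model
    sd_state:      the pretrained SD state dict
    zeros_like:    factory for zero tensors (e.g. torch.zeros_like),
                   used for parameters absent from the SD checkpoint
    """
    init, copied, new = {}, [], []
    for name, ref in target_params.items():
        if name in sd_state:
            init[name] = sd_state[name]   # reuse pretrained SD weight
            copied.append(name)
        else:
            init[name] = zeros_like(ref)  # zero-init new control layers
            new.append(name)
    return init, copied, new
```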

Train!

  1. Fill in the configuration file with appropriate values.
  2. Run the following command.
python train.py --config configs/train_cldm.yaml

📚 Citation

Please cite us if our work helps your research.

@article{wan2025controlsr,
  title={ControlSR: Taming Diffusion Models for Consistent Real-World Image Super Resolution},
  author={Wan, Yuhao and Jiang, Peng-Tao and Hou, Qibin and Zhang, Hao and Chen, Jinwei and Cheng, Ming-Ming and Li, Bo},
  journal={arXiv preprint arXiv:2410.14279},
  year={2025}
}

📄 License

This project is released under the Apache 2.0 license.

🤝 Acknowledgement

This project is based on ControlNet, DiffBIR, pyiqa, and BasicSR. Thanks for their awesome work.

📩 Contact

If you have any questions, please feel free to contact me at [email protected].
