MR-FIQA: Face Image Quality Assessment with Multi-Reference Representations from Synthetic Data Generation

           

🔎 Description

This is the official repository for the following paper:

MR-FIQA: Face Image Quality Assessment with Multi-Reference Representations from Synthetic Data Generation
Fu-Zhao Ou, Chongyi Li, Shiqi Wang, Sam Kwong
IEEE/CVF International Conference on Computer Vision (ICCV), 2025.

📘 Privacy concerns are limiting the use of real face datasets for training Face Image Quality Assessment (FIQA) models. To address this, we pioneer SynFIQA, a synthetic dataset for FIQA generated via quality-controllable methods such as latent space alignment in Stable Diffusion and 3D facial editing. We also propose the MR-FIQA method, which leverages multi-reference representations across the recognition embedding, spatial, and visual-language domains for accurate quality annotation. Experiments validate our approach, showing that SynFIQA and MR-FIQA effectively advance FIQA research.

⚙️ Environment

  • Stage-1 Generation : The code in this stage is mainly built on the PyTorch framework together with the torchvision library (the commands below install PyTorch 2.2.0 with CUDA 11.8). A CUDA-enabled GPU machine is required to run the code effectively. Make sure to install the following dependencies:
$ conda create -n synfiqa python=3.8.16
$ conda activate synfiqa
$ conda install pytorch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 pytorch-cuda=11.8 numpy==1.22.0 -c pytorch -c nvidia 
$ pip install pyrootutils opencv-python tqdm facenet-pytorch==2.5.3 scikit-image tf-keras
$ conda install -c conda-forge dlib tensorflow-gpu=2.13 deepface
  • Stage-2 Generation : This stage mainly relies on the PyTorch and PyTorch3D frameworks. Please follow PyTorch3D-Install to install a PyTorch3D build that matches your Python, PyTorch, and CUDA versions.
  • You can also find and download the appropriate prebuilt package at Anaconda-PyTorch. The py??, cu???, and pyt??? fields indicate the Python, CUDA, and PyTorch versions; for instance, pytorch3d-0.7.8-py38_cu118_pyt220.tar.bz2 should be installed under Python 3.8, CUDA 11.8, and PyTorch 2.2.0. Make sure to install the following dependencies:
$ conda install pytorch3d-0.7.8-py38_cu118_pyt220.tar.bz2
$ pip install gradio==3.34.0 yacs chumpy kornia==0.6.12 iopath fvcore face-alignment==1.3.5 triton==2.2.0 omegaconf open-clip-torch==2.24.0 transformers==4.24.0 einops xformers==0.0.24 pytorch-lightning==1.9.0
  • Quality Annotation : The quality annotation code is mainly built on BLIP and facenet-pytorch. Please download BLIP-Model and place it in Pretrained-Models/.
$ pip install fairscale==0.4.4 pycocoevalcap scipy==1.10.1
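With this many pinned versions across conda and pip, it is easy for one package to end up missing. The helper below is a hypothetical sanity check (not part of this repo) that reports whether each required package is importable in the active environment:

```python
# Hypothetical environment sanity check (not part of this repo): reports
# whether each required package can be imported in the active conda env.
import importlib.util

def check_environment(required=("torch", "torchvision", "cv2", "skimage")):
    """Return {module_name: bool} indicating which packages are importable."""
    return {name: importlib.util.find_spec(name) is not None for name in required}

print(check_environment())
```

Run inside the synfiqa environment, every entry should report True before you start the pipeline.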

📂 Data Generation and Annotation

🧬 1. Unconditional Generation of Pseudo-ID Images (Stage-1)

  • A pretrained model of an unconditional generator can be downloaded at this Google-Drive-Link and placed in Pretrained-Models/Unc-Diff-Stage1.pth.
  • You can also modify the <save_dir> in Stage-1/generate_pseudo_ID.py to change the saving path of the generated images.
$ python Stage-1/generate_pseudo_ID.py
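Conceptually, the generation step amounts to a driver loop like the sketch below, where `sample_fn` stands in for the unconditional diffusion sampler loaded from Unc-Diff-Stage1.pth (all names here are illustrative, not the repo's actual API):

```python
# Hypothetical driver loop for Stage-1 pseudo-ID generation. `sample_fn`
# abstracts the diffusion sampler; `save_dir` plays the role of <save_dir>
# in Stage-1/generate_pseudo_ID.py.
import os

def generate_pseudo_ids(sample_fn, save_dir, num_ids=4):
    """Draw `num_ids` unconditional samples, saving one image per pseudo ID.

    sample_fn: callable returning the encoded image bytes for one sample.
    Returns the list of saved file paths.
    """
    os.makedirs(save_dir, exist_ok=True)
    paths = []
    for idx in range(num_ids):
        path = os.path.join(save_dir, f"{idx}.jpg")
        with open(path, "wb") as f:
            f.write(sample_fn())  # e.g. one reverse-diffusion sample, JPEG-encoded
        paths.append(path)
    return paths
```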

🗃️ 2. Data Filtering for High-Quality Pseudo-ID Images

  • To ensure high-quality pseudo identities, the generated images are filtered to eliminate samples that do not meet the quality requirements.
  • Download the ZIP-File containing the models required for data filtering and unzip it to Pretrained-Models/. Note that Retinaface_Resnet50.pth and Pose-LP.pth will also be used in the subsequent Quality-Controlled Generation.
  • If you modify <save_dir> in the previous unconditional generation, please modify the <data_path> in Stage-1/data_filtering.py simultaneously.
$ python Stage-1/data_filtering.py
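As a rough illustration of the filtering rule described above (the thresholds below are assumptions for the sketch, not the repo's actual values), a sample might be kept only when the face detector is confident and the estimated pose is near-frontal:

```python
# Illustrative filtering rule (thresholds are assumptions, not the repo's
# values): keep a pseudo-ID image only if a face is detected with high
# confidence and the head pose is close to frontal.
def passes_filter(det_score, yaw_deg, min_score=0.9, max_abs_yaw=30.0):
    """det_score: face-detector confidence in [0, 1] (e.g. from RetinaFace);
    yaw_deg: estimated yaw angle in degrees (e.g. from the Pose-LP model)."""
    return det_score >= min_score and abs(yaw_deg) <= max_abs_yaw

samples = [(0.98, 5.0), (0.95, 45.0), (0.50, 2.0)]
kept = [s for s in samples if passes_filter(*s)]  # only the frontal, confident one
```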

🕵🏻 3. Quality-Controlled Generation (Stage-2)

  • Download the DECA-Stage2-Models-File.zip (10.48GB) and unzip the data and models to Pretrained-Models/.
  • The default <args.id_data_path> refers to <data_path> in Data Filtering of Stage-1. If you modify the path during Stage-1, please make the corresponding changes in Stage-2/conditional_sampling.py. You can also modify the saving path of generated samples <args.save_path>.
  • The generated dataset is divided into sub-folders by ID. Each sub-folder contains reference and degraded samples: reference samples are named Ref-??.jpg, while degraded samples are named ?.jpg.
  • Meanwhile, the absolute yaw angle, downsampling intensity, and blur intensity of each degraded sample are recorded and used in the subsequent Quality Annotation process.
$ python Stage-2/sampling_annotation.py
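The per-sample degradation record mentioned above could be tracked with a structure like the following (field names and layout are illustrative; the repo's actual format may differ):

```python
# Sketch of a per-sample degradation record keyed by ID folder and file name.
# Field names ("abs_yaw", "downsample", "blur") are assumptions for this
# illustration, not the repo's serialization format.
import json

def record_degradation(records, id_folder, sample_name, abs_yaw, downsample, blur):
    """Append one degraded sample's generation parameters to `records`."""
    records.setdefault(id_folder, {})[sample_name] = {
        "abs_yaw": abs_yaw,        # absolute yaw angle (degrees)
        "downsample": downsample,  # downsampling intensity
        "blur": blur,              # blur intensity
    }
    return records

records = record_degradation({}, "0", "0.jpg", 15.0, 2, 1.5)
serialized = json.dumps(records)  # e.g. persisted for the annotation stage
```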

🏷 4. Quality Annotation

  • After the samples are generated, quality annotation is performed. The quality scores of the generated data are written to <args.save_path>/Quality-Scores.txt.
  • The generated dataset SynFIQA.zip (1.6GB) is available for download; the data has been cropped and aligned to 112×112.
  • In summary, the dataset is organized as:
──────────────────────────────────────────
├── data
│   ├── 0
│   │   ├── 0.jpg
│   │   ├── 1.jpg
│   │   ├── ...
│   │   ├── Ref-0.jpg
│   │   ├── Ref-1.jpg
│   │   └── ...
│   ├── 1
│   │   ├── 0.jpg
│   │   ├── 1.jpg
│   │   ├── ...
│   │   ├── Ref-0.jpg
│   │   ├── Ref-1.jpg
│   │   └── ...
│   └── ...
├── Quality-Scores.txt
──────────────────────────────────────────
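Given this layout, a small helper (hypothetical, not part of the repo) can pair each ID folder's Ref-*.jpg references with its degraded samples:

```python
# Walk the SynFIQA-style layout, separating each ID folder's reference
# images (Ref-*.jpg) from its degraded samples (the remaining *.jpg files).
import os

def index_dataset(data_root):
    """Return {id_name: {"refs": [...], "degraded": [...]}} for the tree above."""
    index = {}
    for id_name in sorted(os.listdir(data_root)):
        folder = os.path.join(data_root, id_name)
        if not os.path.isdir(folder):
            continue  # skip top-level files such as Quality-Scores.txt
        files = sorted(os.listdir(folder))
        index[id_name] = {
            "refs": [f for f in files if f.startswith("Ref-")],
            "degraded": [f for f in files if not f.startswith("Ref-")],
        }
    return index
```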

🎯 Leaderboard for FIQA Models

The leaderboard of FIQA models trained on real (CASIA-WebFace) and synthetic (Syn.) datasets, reported as the average pAUC (↓, lower is better) under various False Match Rates (FMRs), is given below. All models are trained via CR-FIQA. You can also download each checkpoint via the URL in the table.

| Models | Generator | Type | 1E-2 | 1E-3 | 1E-4 | URL |
| --- | --- | --- | --- | --- | --- | --- |
| CASIA-WebFace | - | Real | 0.719 | 0.701 | 0.718 | Google |
| DigiFace-1M | Digital Rendering | Syn. | 0.764 | 0.801 | 0.810 | Google |
| DCFace | Diffusion-Based | Syn. | 0.833 | 0.813 | 0.819 | Google |
| SFace2 | GAN-Based | Syn. | 0.838 | 0.837 | 0.838 | Google |
| HSFace-10K | GAN-Based | Syn. | 0.894 | 0.869 | 0.859 | Google |
| IDiff-Face | Diffusion-Based | Syn. | 0.873 | 0.873 | 0.846 | Google |
| GANDiffFace | GAN-Diffusion-Based | Syn. | 0.838 | 0.830 | 0.828 | Google |
| SynFIQA (Ours) | Diffusion-Based | Syn. | 0.797 | 0.800 | 0.787 | Google |
| SynFIQA++ (Ours) | Diffusion-Based | Syn. | 0.748 | 0.730 | 0.742 | Google |
| SynFIQA + CASIA (Ours) | - | Real + Syn. | 0.715 | 0.652 | 0.644 | Google |
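For intuition, a pAUC-style score over an Error-versus-Reject curve can be sketched as follows. This is a simplified approximation under assumed inputs, not this repo's evaluation code: genuine pairs are discarded in order of increasing quality, and the false non-match rate (FNMR) at a threshold pre-set for the target FMR is averaged over reject fractions.

```python
# Simplified Error-versus-Reject sketch of a pAUC-style metric (a hedged
# approximation, not the evaluation code of this repo).
def pauc_edc(pair_scores, pair_qualities, threshold, max_reject=0.2, steps=20):
    """pair_scores: similarity scores of genuine pairs;
    pair_qualities: the lower quality score of each pair;
    threshold: decision threshold assumed pre-computed to hit the target FMR
    on impostor scores. Returns the mean FNMR over reject fractions, which
    approximates the normalized area under the error-vs-reject curve."""
    order = sorted(range(len(pair_scores)), key=lambda i: pair_qualities[i])
    fnmrs = []
    for k in range(steps + 1):
        reject = int(len(order) * max_reject * k / steps)  # drop lowest-quality pairs
        kept = order[reject:]
        if not kept:
            break
        fnmr = sum(pair_scores[i] < threshold for i in kept) / len(kept)
        fnmrs.append(fnmr)
    return sum(fnmrs) / len(fnmrs)
```

A good quality measure ranks the failing pairs lowest, so rejecting them first drives the curve (and hence the pAUC) down, which is why lower values in the table are better.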

🚀 FIQA Inference

You can use the above FIQA models to predict quality scores. Modify <img_path> and <model_path> in fiqa_inference.py.

$ python fiqa_inference.py
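Conceptually, inference preprocesses an aligned 112×112 face crop and maps it through the model to a scalar quality score. The sketch below is hedged: the [-1, 1] normalization is an assumption (common for face-recognition backbones), and the function names are illustrative rather than the repo's actual API.

```python
# Hedged sketch of the inference step; `model` stands in for the loaded
# FIQA checkpoint, and the [-1, 1] normalization is an assumption.
def preprocess(pixels):
    """Map uint8 pixel values in [0, 255] to floats in [-1, 1]."""
    return [p / 127.5 - 1.0 for p in pixels]

def predict_quality(model, pixels):
    """model: any callable mapping preprocessed pixels to a scalar score."""
    return model(preprocess(pixels))

# Toy stand-in model: mean of the normalized pixels.
score = predict_quality(lambda xs: sum(xs) / len(xs), [0, 255])
```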

❣️ Citing this Repository

Please kindly cite the following paper if you find our work helpful. Thank you so much~ 🙏🙏🙏

@InProceedings{Ou_2025_ICCV,
    author    = {Ou, Fu-Zhao and Li, Chongyi and Wang, Shiqi and Kwong, Sam},
    title     = {MR-FIQA: Face Image Quality Assessment with Multi-Reference Representations from Synthetic Data Generation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    pages     = {12915-12925},
    year      = {2025}
}

💡 Acknowledgements

Our code is primarily adapted from the following projects, and we would like to express our great gratitude to the authors of these projects for their exceptional contributions and valuable work!

🔑 License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Please check LICENSE for details.