BLANKET is a two-stage pipeline for seamless, expression-preserving face anonymization in infant videos. It replaces identities with synthetic baby faces while maintaining gaze, head pose, and emotional expression, enabling ethical data sharing and robust downstream analytics.
Key contributions:
- Two-stage design: diffusion-based inpainting + temporally-consistent swap
- Attribute preservation: expression, gaze, head orientation, eye/mouth openness
- High downstream performance: ~90% detection AP, ~97% pose estimation retention
- Outperforms SOTA: beats DeepPrivacy2 on de-identification, perceptual metrics, and downstream task performance
Warning
This code is experimental and not yet production-ready. Anonymization results may be imperfect, and some faces may go undetected. Full reliability will be achieved once the “missing detections” problem is solved (see Roadmap below).
- Dec 2025: Cleaner version of the code available
- Sep 2025: Code published
- May 2025: Paper accepted to ICDL 2025! 🎉
To install BLANKET and its dependencies:

```bash
git clone https://github.com/ctu-vras/blanket-infant-face-anonym.git
cd blanket-infant-face-anonym

# Install dependencies using pyproject.toml
pip install -e .

# or, if using uv
uv sync
```

GPU acceleration
- macOS (Apple Silicon): uses CoreML by default for GPU acceleration
- NVIDIA GPU: requires [cuDNN](https://developer.nvidia.com/cudnn) for onnxruntime-gpu support; on Ubuntu it can be installed via:

```bash
apt install cudnn
```
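Whether onnxruntime actually runs on the GPU depends on which execution providers are available at runtime. The sketch below shows one way to check; `get_available_providers` is a real onnxruntime call, but the preference order is an assumption for illustration, not BLANKET's own logic:

```python
# Sketch: choose an onnxruntime execution provider, preferring GPU backends.
# The order below is an assumed preference, not BLANKET's actual selection.
PREFERRED = ["CUDAExecutionProvider", "CoreMLExecutionProvider", "CPUExecutionProvider"]

def pick_provider(available):
    """Return the first preferred provider present in `available`."""
    for name in PREFERRED:
        if name in available:
            return name
    return "CPUExecutionProvider"  # onnxruntime always ships the CPU provider

# Query the real list if onnxruntime is installed; otherwise use a stand-in.
try:
    import onnxruntime as ort
    available = ort.get_available_providers()
except ImportError:
    available = ["CPUExecutionProvider"]

print(pick_provider(available))
```

If `CUDAExecutionProvider` is missing on an NVIDIA machine, cuDNN is usually the culprit.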
Pre-trained models
- YOLOv11L-face - download from Ultralytics
- Stable Diffusion XL Inpainting
- ControlNet OpenPose
- ControlNet Canny
- SDXL Refiner
- SPIGA Landmarks
- inswapper_128_fp16
- GFPGAN
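Before running the pipeline, it can help to verify that the downloaded weights are in place. A minimal sketch; the filenames and directory layout below are assumptions for illustration, not the exact paths BLANKET expects:

```python
from pathlib import Path

# Hypothetical weight filenames -- adjust to match your actual downloads.
EXPECTED_MODELS = [
    "yolov11l-face.pt",
    "inswapper_128_fp16.onnx",
    "GFPGANv1.4.pth",
]

def missing_models(model_dir):
    """Return the expected weight files that are absent from `model_dir`."""
    root = Path(model_dir)
    return [name for name in EXPECTED_MODELS if not (root / name).exists()]

# Example: missing_models("models/") -> list of files still to download
```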
To run the anonymization demo:
- Set your input and output folders in `blanket/configs/config.yaml`:
  - `input_folder`: path to your images (default: `../data/`)
  - `output_folder`: path for results (default: `outputs/`)
- Run the demo script:

```bash
python run_video.py data/000056_segment.mp4

# if using uv
uv run python run_video.py data/000056_segment.mp4
# or
source .venv/bin/activate
python run_video.py data/000056_segment.mp4
```
To reuse an existing identity image, run:

```bash
uv run python run_video.py data/000071_segment.mp4 --identity ./data/baby4.png
```
This will process all images in your input folder, apply face detection and three anonymization methods (black box, pixelation, Gaussian blur), and save composite results to your output folder. You can adjust detection and anonymization settings in `blanket/configs/config.yaml`.
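Since the config is plain YAML, it can also be inspected or patched programmatically. A minimal sketch using PyYAML; only the two keys documented above are assumed, the real config may contain more:

```python
import yaml

# The two documented keys; other settings in the real config are not assumed here.
example = """
input_folder: ../data/
output_folder: outputs/
"""

config = yaml.safe_load(example)
print(config["input_folder"], config["output_folder"])
```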
BLANKET outperforms DeepPrivacy2 on all measured metrics.
| Metric | BLANKET | DeepPrivacy2 |
|---|---|---|
| Identity cosine distance (↓) | 0.11 ± 0.18 | 0.19 ± 0.26 |
| Emotion preservation (↑) | 0.51 ± 0.13 | 0.27 ± 0.11 |
| Temporal landmark corr. (↑) | 0.956 ± 0.064 | 0.860 ± 0.140 |
| Detection AP vs. orig. (↑) | 90.7 mAP | 81.5 mAP |
| Pose AP vs. orig. (↑) | 97.2 mAP | 79.1 mAP |
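For reference, the identity metric in the table is a cosine distance between face-embedding vectors. Assuming the standard definition, it can be computed as below (the embedding model itself is not shown and is a separate component):

```python
import numpy as np

def identity_cosine_distance(emb_a, emb_b):
    """Cosine distance between two face embeddings.
    0 = identical direction, 1 = orthogonal, 2 = opposite."""
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```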
We will continue refining BLANKET with a focus on quality and reliability:
- Implement Stable Diffusion–based inpainting
- Implement FaceFusion in video and a video demo
- Ensure robust anonymization in frames where faces are not detected
  - Partially solved by reusing previous frames: identity won't leak into the anonymized video, but artifacts may occur.
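The frame-reuse fallback can be sketched as follows; `detect` and `anonymize` are hypothetical stand-ins for the real pipeline stages, not BLANKET's actual API:

```python
def anonymize_stream(frames, detect, anonymize):
    """Anonymize every frame; when detection fails, reuse the last known
    face box so the identity never reaches the output unmodified."""
    last_box = None
    out = []
    for frame in frames:
        box = detect(frame)
        if box is not None:
            last_box = box        # fresh detection: remember it
        elif last_box is not None:
            box = last_box        # fallback: may cause artifacts if the face moved
        if box is not None:
            frame = anonymize(frame, box)
        out.append(frame)
    return out
```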
Supported by GA CR 25-18113S, EC Digital Europe CEDMO 2.0 101158609, CTU SGS23/173/OHK3/3T/13. Thanks to Adéla Šubrtová for early feedback. Thanks to Max Familly Fun for the banner picture.
If you use BLANKET, please cite:
```bibtex
@inproceedings{hadera2025BLANKET,
  author    = {Hadera, Ditmar and Cech, Jan and Purkrabek, Miroslav and Hoffmann, Matej},
  booktitle = {2025 IEEE International Conference on Development and Learning (ICDL)},
  month     = sep,
  pages     = {1--8},
  title     = {{BLANKET: Anonymizing Faces in Infant Video Recordings}},
  year      = {2025}
}
```

Third-party component: FaceFusion
- Repository: https://github.com/facefusion/facefusion
- License: OpenRAIL-AS (Open Responsible AI License)
- Copyright: (c) 2025 Henry Ruhs
- Location: external/facefusion/
- Usage: Face swapping and enhancement functionality
