MatAnyone is a practical human video matting framework that supports target assignment, delivering stable performance in both the semantics of core regions and fine-grained boundary details.
🎥 For more visual results, check out our project page
- [2025.02.19] Modified the Python code to run better on Windows
- [2025.02] Release inference codes and gradio demo 🤗
- [2025.02] This repo is created.
- Clone Repo

  ```
  git clone https://github.com/pq-yang/MatAnyone
  cd MatAnyone
  ```

- Create Environment and Install Dependencies
Run "run app.bat" script in the "hugging_face" folder. Note: This needs Python 3.12 installed along with CUDA 12
Download our pretrained model from MatAnyone v1.0.0 and place it in the `pretrained_models` folder (the pretrained model can also be downloaded automatically during the first inference; a scripted alternative is sketched below the directory layout).
The directory structure will be arranged as:
```
pretrained_models
   |- matanyone.pth
```
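If you prefer to fetch the checkpoint from a script rather than the browser, here is a minimal sketch. The release asset URL is an assumption based on the v1.0.0 tag; adjust it if the tag or asset name differs.

```python
# Hypothetical download helper (not part of the repo): fetches matanyone.pth
# into pretrained_models/. The URL below assumes the checkpoint is attached
# to the v1.0.0 GitHub release; the demo can also auto-download on first run.
from pathlib import Path
from urllib.request import urlretrieve

CKPT_URL = "https://github.com/pq-yang/MatAnyone/releases/download/v1.0.0/matanyone.pth"

ckpt_dir = Path("pretrained_models")
ckpt_dir.mkdir(exist_ok=True)
ckpt_path = ckpt_dir / "matanyone.pth"

if not ckpt_path.exists():
    urlretrieve(CKPT_URL, ckpt_path)  # download once; later runs reuse the file

print(f"Checkpoint ready at {ckpt_path}")
```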
After launching, an interactive interface will appear as follows:
```bibtex
@InProceedings{yang2025matanyone,
    title     = {{MatAnyone}: Stable Video Matting with Consistent Memory Propagation},
    author    = {Yang, Peiqing and Zhou, Shangchen and Zhao, Jixin and Tao, Qingyi and Loy, Chen Change},
    booktitle = {arXiv preprint arXiv:2501.14677},
    year      = {2025}
}
```
