Demo video: `edm_video_demo.mp4`
```bash
conda env create -f environment.yaml
conda activate edm
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia -y
pip install -r requirements.txt
```

We provide our pretrained model and a fixed ONNX model on Google Drive and Baidu Netdisk. Please place the checkpoint in the `weights/` folder and the ONNX model in the `deploy/` folder.
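Before running the demo, it can be handy to verify that the downloaded files ended up in the expected folders. A minimal sketch (the helper name and the example file names are our own illustration; use the actual names of the files you downloaded):

```python
from pathlib import Path

def missing_assets(repo_root, assets):
    """Return the expected asset paths (relative to repo_root) that do not exist."""
    root = Path(repo_root)
    return [str(root / a) for a in assets if not (root / a).exists()]
```

For example, `missing_assets(".", ["weights/<your_ckpt>", "deploy/<your_model>.onnx"])` returns an empty list once both files are in place.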
See `demo_single_pair.ipynb`.

See the `deploy` subdirectory.
Export the ONNX model first:

```bash
cd deploy
pip install -r requirements_deploy.txt
python export_onnx.py
```

Run the demo on ONNX Runtime with the TensorRT backend:

```bash
python run_onnx.py
```

For the C++ deployment, refer to edm_onnx_cpp.
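ONNX Runtime picks the first available execution provider from the list you pass to it, so a TensorRT-first list degrades gracefully to CUDA or CPU on machines without TensorRT. A minimal sketch of that ordered selection (the helper is our own illustration, not code from this repo; the provider names are the standard ONNX Runtime ones):

```python
def select_providers(available,
                     preferred=("TensorrtExecutionProvider",
                                "CUDAExecutionProvider",
                                "CPUExecutionProvider")):
    """Keep the preferred execution providers that are actually available, in order."""
    chosen = [p for p in preferred if p in available]
    # Always keep the CPU provider as a last-resort fallback.
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen
```

In practice you would feed the result of `onnxruntime.get_available_providers()` into this helper and pass the returned list as the `providers` argument of `onnxruntime.InferenceSession`.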
Set up the testing subsets of ScanNet and MegaDepth first. The test and training data can be downloaded via the links provided by LoFTR.

Create symlinks from the previously downloaded datasets to `data/{{dataset}}/test`:

```bash
# set up symlinks
ln -s /path/to/scannet-1500-testset/* data/scannet/test
ln -s /path/to/megadepth-1500-testset/* data/megadepth/test
```

Reproduce the outdoor (MegaDepth) and indoor (ScanNet) evaluations:

```bash
bash scripts/reproduce_test/outdoor.sh
bash scripts/reproduce_test/indoor.sh
```

Prepare the training data according to the settings of LoFTR, then launch training:

```bash
bash scripts/reproduce_train/outdoor.sh
```

Part of the code is based on EfficientLoFTR and RLE. We thank the authors for their useful source code.
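The MegaDepth-1500 and ScanNet-1500 benchmarks are conventionally reported as pose-error AUC at 5/10/20-degree thresholds. A minimal sketch of that metric, following the common SuperGlue/LoFTR-style computation (our own illustration, not code from this repo):

```python
def pose_auc(errors, thresholds=(5.0, 10.0, 20.0)):
    """AUC of the recall-vs-error curve, one value in [0, 1] per error threshold."""
    errs = sorted(float(e) for e in errors)
    n = len(errs)
    xs = [0.0] + errs                              # pose errors (x-axis)
    ys = [0.0] + [(i + 1) / n for i in range(n)]   # cumulative recall (y-axis)
    aucs = {}
    for t in thresholds:
        area = 0.0
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            if x1 >= t:
                area += y0 * (t - x0)              # hold recall flat up to t
                break
            area += 0.5 * (y0 + y1) * (x1 - x0)    # trapezoid segment
        else:
            area += ys[-1] * (t - xs[-1])          # all errors below t
        aucs[t] = area / t
    return aucs
```

For example, a run whose pose errors all lie below every threshold yields an AUC of 1.0, while errors far above a threshold drive its AUC toward 0.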
If you find this project useful, please cite:

```bibtex
@InProceedings{Li_2025_ICCV,
    author    = {Li, Xi and Rao, Tong and Pan, Cihui},
    title     = {EDM: Efficient Deep Feature Matching},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {26198-26208}
}
```