# Deep Graph-Based Spatial Consistency for Robust Non-Rigid Point Cloud Registration

PyTorch implementation of the paper:

[Deep Graph-Based Spatial Consistency for Robust Non-Rigid Point Cloud Registration](http://arxiv.org/abs/2303.09950)

[Zheng Qin](https://scholar.google.com/citations?user=DnHBAN0AAAAJ), [Hao Yu](https://scholar.google.com/citations?user=g7JfRn4AAAAJ), Changjian Wang, Yuxing Peng, and [Kai Xu](https://scholar.google.com/citations?user=GuVkg-8AAAAJ).
## Introduction

We study the problem of outlier correspondence pruning for non-rigid point cloud registration. In rigid registration, spatial consistency has been a commonly used criterion to discriminate outliers from inliers: it measures the compatibility of two correspondences by the discrepancy between the corresponding point-pair distances in the two point clouds. However, spatial consistency no longer holds in non-rigid cases, and outlier rejection for non-rigid registration has not been well studied. In this work, we propose the Graph-based Spatial Consistency Network (GraphSCNet) to filter outliers for non-rigid registration. Our method builds on the fact that non-rigid deformations are usually locally rigid, i.e., local shape preserving. We first design a local spatial consistency measure over the deformation graph of the point cloud, which evaluates spatial compatibility only between the correspondences in the vicinity of a graph node. An attention-based non-rigid correspondence embedding module is then devised to learn a robust representation of non-rigid correspondences from local spatial consistency. Despite its simplicity, GraphSCNet effectively improves the quality of the putative correspondences and attains state-of-the-art performance on three challenging benchmarks.

## News

2023.06.15: Code and models on 4DMatch released.

2023.02.28: This work is accepted by CVPR 2023.

## Installation

Please use the following commands for installation:

```bash
# 1. It is recommended to create a new environment
conda create -n graphscnet python==3.8
conda activate graphscnet

# 2. Install vision3d following https://github.com/qinzheng93/vision3d
```

The code has been tested on Python 3.8, PyTorch 1.13.1, Ubuntu 22.04, GCC 11.3 and CUDA 11.7, but it should work with other configurations.
## 4DMatch

### Data preparation

The 4DMatch dataset can be downloaded from [DeformationPyramid](https://github.com/rabbityl/DeformationPyramid). We provide the correspondences extracted by GeoTransformer on the release page. The data should be organized as follows:

```text
--data--4DMatch--train--abe_CoverToStand
               |      |--...
               |--val--amy_Situps
               |     |--...
               |--4DMatch-F--AJ_SoccerPass
               |           |--...
               |--4DLoMatch-F--AJ_SoccerPass
               |             |--...
               |--correspondences--val--amy_Situps
                                 |    |--...
                                 |--4DMatch-F--AJ_SoccerPass
                                 |           |--...
                                 |--4DLoMatch-F--AJ_SoccerPass
                                              |--...
```
### Training

The code for 4DMatch is in `experiments/graphscnet.4dmatch.geotransformer`. Use the following command for training:

```bash
CUDA_VISIBLE_DEVICES=0 python trainval.py
```
### Testing

Use the following command for testing:

```bash
# 4DMatch
CUDA_VISIBLE_DEVICES=0 python test.py --test_epoch=EPOCH --benchmark=4DMatch-F
# 4DLoMatch
CUDA_VISIBLE_DEVICES=0 python test.py --test_epoch=EPOCH --benchmark=4DLoMatch-F
```

`EPOCH` is the epoch id.

We also provide pretrained weights in `weights`. Use the following command to test the pretrained weights:

```bash
CUDA_VISIBLE_DEVICES=0 python test.py --checkpoint=/path/to/GraphSCNet/weights/graphscnet.pth --benchmark=4DMatch-F
```

Replace `4DMatch-F` with `4DLoMatch-F` in `--benchmark` to evaluate on 4DLoMatch.
| 100 | +### Results |
| 101 | + |
| 102 | +| Benchmark | Prec | Recall | EPE | AccS | AccR | OR | |
| 103 | +|:----------|:----:|:------:|:-----:|:----:|:----:|:----:| |
| 104 | +| 4DMatch | 92.2 | 96.9 | 0.043 | 72.3 | 84.4 | 9.4 | |
| 105 | +| 4DLoMatch | 82.6 | 86.8 | 0.121 | 41.0 | 58.3 | 21.0 | |
| 106 | + |
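For reference, the flow metrics in the table can be sketched as below, following the scene-flow evaluation protocol commonly used on 4DMatch (as in lepard): EPE is the mean end-point error in meters, AccS/AccR are the fractions of points with error under 2.5 cm / 5 cm (or 2.5% / 5% relative error), and OR is the fraction with relative error above 30%. The exact thresholds in the paper's evaluation code may differ; treat these as assumptions.

```python
import numpy as np

def flow_metrics(pred_flow, gt_flow):
    """Scene-flow style metrics; thresholds follow the common 4DMatch protocol."""
    err = np.linalg.norm(pred_flow - gt_flow, axis=1)   # per-point end-point error (m)
    mag = np.linalg.norm(gt_flow, axis=1) + 1e-12       # ground-truth flow magnitude
    rel = err / mag                                     # relative error
    epe = err.mean()                                    # EPE
    acc_s = np.mean((err < 0.025) | (rel < 0.025))      # AccS: strict accuracy
    acc_r = np.mean((err < 0.05) | (rel < 0.05))        # AccR: relaxed accuracy
    outlier = np.mean(rel > 0.3)                        # OR: outlier ratio
    return epe, acc_s, acc_r, outlier
```

Prec and Recall in the table are the precision and recall of the inlier/outlier classification of correspondences, computed separately from the flow metrics above.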
## Citation

```bibtex
@inproceedings{qin2023deep,
  title={Deep Graph-Based Spatial Consistency for Robust Non-Rigid Point Cloud Registration},
  author={Zheng Qin and Hao Yu and Changjian Wang and Yuxing Peng and Kai Xu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month={June},
  year={2023},
  pages={5394--5403}
}
```

## Acknowledgements

- [vision3d](https://github.com/qinzheng93/vision3d)
- [GeoTransformer](https://github.com/qinzheng93/GeoTransformer)
- [PointDSC](https://github.com/XuyangBai/PointDSC)
- [lepard](https://github.com/rabbityl/lepard)
- [DeformationPyramid](https://github.com/rabbityl/DeformationPyramid)