Source code for our 3DV 2021 paper Fusing Posture and Position Representations for Point Cloud-based Hand Gesture Recognition [pdf].
Please first install the following dependencies (a possible pip install line is sketched after the list):
- Python3 (we use 3.8.3)
- numpy
- pytorch (we use 1.6.0)
- bps
- yacs
- opencv-python (imported as cv2)
- scikit-learn (imported as sklearn)
- scikit-image
- imageio
- pytorch-geometric (only for ablation experiment with PointNet++)
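
A possible way to install most of the pip-installable dependencies is sketched below; the exact package names and versions are assumptions and may need adjusting to your system:

```
# hypothetical install line -- adapt versions and CUDA builds to your setup
pip install numpy torch==1.6.0 yacs opencv-python scikit-learn scikit-image imageio
# bps and pytorch-geometric typically follow their own installation instructions
# (pytorch-geometric is only required for the PointNet++ ablation)
```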
- Download the Shrec'17 dataset from http://www-rech.telecom-lille.fr/shrec2017-hand/. Create a directory `/dataset/shrec17` and move `HandGestureDataset_SHREC2017` to this directory. We recommend creating a symlink.
- Create a directory `/dataset/shrec17/Processed_HandGestureDataset_SHREC2017`. Subsequently, execute `cd data` and `python shrec17_process.py` to generate point cloud sequences from the original depth images.
- Download the DHG dataset from http://www-rech.telecom-lille.fr/DHGdataset/. Create a directory `/dataset/DHG/raw` and move the extracted dataset to this directory. We recommend creating a symlink.
- Create a directory `/dataset/DHG/processed`. Subsequently, execute `cd data` and `python dhg_process.py` to generate point cloud sequences from the original depth images. A shell sketch of these steps for both datasets follows the list.
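
A minimal shell sketch, assuming the repository root is the current directory and the raw datasets were extracted to `~/Downloads/HandGestureDataset_SHREC2017` and `~/Downloads/DHG2016` (both extraction paths are assumptions):

```
# Shrec'17: link the raw data into place (symlink, as recommended above)
# and run the pre-processing script
mkdir -p dataset/shrec17/Processed_HandGestureDataset_SHREC2017
ln -s ~/Downloads/HandGestureDataset_SHREC2017 dataset/shrec17/HandGestureDataset_SHREC2017
(cd data && python shrec17_process.py)

# DHG: same pattern with the DHG directories and script
mkdir -p dataset/DHG/processed
ln -s ~/Downloads/DHG2016 dataset/DHG/raw
(cd data && python dhg_process.py)
```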
- In `/configs/defaults.py`, modify `_C.BASE_DIRECTORY` in line 5 to the root directory where you intend to save the results.
- In the config files `/configs/config_full-model_shrec*.yaml`, you can optionally modify `EXPERIMENT_NAME` in line 1. Models and log files will be written to `os.path.join(cfg.BASE_DIRECTORY, cfg.EXPERIMENT_NAME)`.
- Navigate to the `main` directory and execute `python train.py --config-file "../configs/config_full-model_shrec*.yaml" --gpu GPU` to train our full model on the Shrec'17 dataset. `*` should be either 14 or 28, depending on the protocol you want to train on (see the example below the list).
- After each epoch, we save the model weights and a log file to the specified directory.
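
For example, training on the 14-gesture protocol might look as follows (the GPU id 0 is an arbitrary choice):

```
cd main
python train.py --config-file "../configs/config_full-model_shrec14.yaml" --gpu 0
```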
The procedure is analogous to the Shrec dataset.
Just navigate to the `main` directory and execute `python train_dhg.py --config-file "../configs/config_full-model_dhg28.yaml" --gpu GPU` to train our full model on the DHG28 dataset.
Note that we perform leave-one-fold-out cross-validation, i.e. this will run 20 successive trainings.
- If you trained a model yourself following the instructions above, you can test the model by executing `python test.py --config-file "../configs/config_full-model_shrec*.yaml" --gpu GPU`. The output comprises recognition accuracy, number of model parameters and inference time averaged over all batches.
- Otherwise, we provide pre-trained models for Shrec 14G protocol and for Shrec 28G protocol. Download the models and use them for inference by executing `python test.py --config-file "../configs/config_full-model_shrec*.yaml" --gpu GPU --model-path PATH/TO/MODEL`. The provided models achieve 95.24% under the 28G protocol and 96.43% under the 14G protocol.
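
For instance, testing a downloaded 14G model could look like this, assuming the command is again run from the `main` directory (the model path is a placeholder for wherever you saved the weights):

```
python test.py --config-file "../configs/config_full-model_shrec14.yaml" --gpu 0 --model-path /path/to/shrec14_model.pth
```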
Again, the procedure is analogous to the Shrec dataset.
Once you have finished training on the DHG dataset, you can test all models on the associated data splits by executing `python test.py --config-file "../configs/config_full-model_dhg28.yaml" --gpu GPU`.
If you find our code useful for your work, please cite the following paper:
@inproceedings{bigalke2021fusing,
title={Fusing Posture and Position Representations for Point Cloud-Based Hand Gesture Recognition},
author={Bigalke, Alexander and Heinrich, Mattias P},
booktitle={2021 International Conference on 3D Vision (3DV)},
pages={617--626},
year={2021},
organization={IEEE}
}

- Code for data pre-processing has been adapted from https://github.com/ycmin95/pointlstm-gesture-recognition-pytorch/tree/master/dataset
- DGCNN implementation has been adapted from https://github.com/AnTao97/dgcnn.pytorch
- PointNet++ implementation has been adapted from https://github.com/rusty1s/pytorch_geometric/blob/master/examples/pointnet2_classification.py
We thank all authors for sharing their code!