# hand-gesture-posture-position

Source code for our 3DV 2021 paper *Fusing Posture and Position Representations for Point Cloud-Based Hand Gesture Recognition* [pdf].

## Dependencies

Please first install the following dependencies:

- Python 3 (we use 3.8.3)
- numpy
- pytorch (we use 1.6.0)
- bps
- yacs
- cv2
- sklearn
- scikit-image
- imageio
- pytorch-geometric (only for the ablation experiment with PointNet++)

## Data Preparation

### Shrec

1. Download the Shrec'17 dataset from http://www-rech.telecom-lille.fr/shrec2017-hand/. Create a directory `/dataset/shrec17` and move `HandGestureDataset_SHREC2017` to this directory. We recommend creating a symlink.
2. Create a directory `/dataset/shrec17/Processed_HandGestureDataset_SHREC2017`. Subsequently, execute `cd data` and `python shrec17_process.py` to generate point cloud sequences from the original depth images.

### DHG

1. Download the DHG dataset from http://www-rech.telecom-lille.fr/DHGdataset/. Create a directory `/dataset/DHG/raw` and move the extracted dataset to this directory. We recommend creating a symlink.
2. Create a directory `/dataset/DHG/processed`. Subsequently, execute `cd data` and `python dhg_process.py` to generate point cloud sequences from the original depth images.
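Both preprocessing scripts back-project depth images into 3D point clouds. As a rough sketch of that step (the function name and the camera intrinsics `fx, fy, cx, cy` are placeholders for illustration, not taken from the actual scripts):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, zeros = invalid) to an N x 3 point cloud."""
    v, u = np.nonzero(depth)            # pixel coordinates of valid depth values
    z = depth[v, u].astype(np.float32)  # depth along the camera z-axis
    x = (u - cx) * z / fx               # pinhole model: u = fx * x / z + cx
    y = (v - cy) * z / fy               # pinhole model: v = fy * y / z + cy
    return np.stack([x, y, z], axis=1)
```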

## Training

### Shrec

1. In `/configs/defaults.py`, modify `_C.BASE_DIRECTORY` in line 5 to the root directory where you intend to save the results.
2. In the config files `/configs/config_full-model_shrec*.yaml`, you can optionally modify `EXPERIMENT_NAME` in line 1. Models and log files will finally be written to `os.path.join(cfg.BASE_DIRECTORY, cfg.EXPERIMENT_NAME)`.
3. Navigate to the main directory and execute `python train.py --config-file "../configs/config_full-model_shrec*.yaml" --gpu GPU` to train our full model on the Shrec'17 dataset. `*` should be either 14 or 28, depending on the protocol you want to train on.
4. After each epoch, we save the model weights and a log file to the specified directory.
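The resulting output layout could look roughly like this (a hypothetical sketch; the file names are assumptions for illustration, not the repository's actual naming scheme):

```python
import os

def checkpoint_paths(base_directory, experiment_name, epoch):
    """Hypothetical layout: per-epoch weight files plus one log file per experiment."""
    out_dir = os.path.join(base_directory, experiment_name)
    weights = os.path.join(out_dir, f"model_epoch_{epoch:03d}.pth")  # assumed name
    log = os.path.join(out_dir, "train.log")                         # assumed name
    return weights, log
```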

### DHG

The procedure is analogous to the Shrec dataset. Just navigate to the main directory and execute `python train_dhg.py --config-file "../configs/config_full-model_dhg28.yaml" --gpu GPU` to train our full model on the DHG28 dataset. Note that we perform leave-one-fold-out cross-validation, i.e. this will run 20 successive trainings.
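The cross-validation schedule amounts to 20 independent runs, each holding out one fold for testing. Schematically (a generic sketch of leave-one-fold-out splitting, not the repository's actual code):

```python
def leave_one_fold_out(num_folds):
    """Yield (train_folds, test_fold) pairs for leave-one-fold-out cross-validation."""
    for held_out in range(num_folds):
        train_folds = [f for f in range(num_folds) if f != held_out]
        yield train_folds, held_out

# 20 folds -> 20 successive trainings, each tested on a different held-out fold
splits = list(leave_one_fold_out(20))
```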

## Testing

### Shrec

- If you trained a model yourself following the instructions above, you can test it by executing `python test.py --config-file "../configs/config_full-model_shrec*.yaml" --gpu GPU`. The output comprises the recognition accuracy, the number of model parameters, and the inference time averaged over all batches.
- Otherwise, we provide pre-trained models for the Shrec 14G protocol and for the Shrec 28G protocol. Download the models and use them for inference by executing `python test.py --config-file "../configs/config_full-model_shrec*.yaml" --gpu GPU --model-path PATH/TO/MODEL`. The provided models achieve 96.43% under the 14G protocol and 95.24% under the 28G protocol.
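"Inference time averaged over all batches" follows a simple timing pattern like the one below (a generic sketch assuming a callable `model` and an iterable of batches; the repository's `test.py` may differ in details):

```python
import time

def mean_batch_time(model, batches):
    """Run the model on each batch and return the mean wall-clock time per batch."""
    times = []
    for batch in batches:
        start = time.perf_counter()
        model(batch)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)
```

For GPU inference one would additionally synchronize the device before reading the timer, since CUDA kernels launch asynchronously.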

### DHG

Again, the procedure is analogous to the Shrec dataset. Once you have finished training on the DHG dataset, you can test all models on the associated data splits by executing `python test.py --config-file "../configs/config_full-model_dhg28.yaml" --gpu GPU`.

## Citation

If you find our code useful for your work, please cite the following paper:

```bibtex
@inproceedings{bigalke2021fusing,
  title={Fusing Posture and Position Representations for Point Cloud-Based Hand Gesture Recognition},
  author={Bigalke, Alexander and Heinrich, Mattias P},
  booktitle={2021 International Conference on 3D Vision (3DV)},
  pages={617--626},
  year={2021},
  organization={IEEE}
}
```

## Acknowledgements

We thank all authors for sharing their code!
