Install the requirements in requirements.txt by running:

```bash
pip install -r requirements.txt
```

Then, clone the repo https://github.com/souljaboy764/phd_utils and follow the installation instructions in its README. That repository already contains the preprocessed datasets used here.
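Putting the two steps together, a typical setup could look like the sketch below; the `pip install -e .` step for phd_utils is an assumption, so defer to its README for the actual instructions.

```bash
# Install this repo's Python dependencies
pip install -r requirements.txt

# Get the preprocessed datasets and helper code
git clone https://github.com/souljaboy764/phd_utils
cd phd_utils
# Assumed install step; follow the phd_utils README if it differs
pip install -e .
```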
We use the same configuration for all of the Buetepage datasets; the models can be trained using the following command:
```bash
python train_moveint.py --results path/to/buetepage_results --epochs 400 --num-components 3 --dataset DATASET --hidden-sizes 40 20 --latent-dim 5
```

Here, `DATASET` is one of `BuetepageHH`, `BuetepagePepper`, or `BuetepageYumi` (make sure the name matches the class names in `dataset.py`).
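For example, the three Buetepage variants can be trained back to back with a small loop (the results directories below are only illustrative):

```bash
# Train one model per Buetepage dataset with the shared configuration
for DATASET in BuetepageHH BuetepagePepper BuetepageYumi; do
    python train_moveint.py --results results/${DATASET} --epochs 400 \
        --num-components 3 --dataset ${DATASET} --hidden-sizes 40 20 --latent-dim 5
done
```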
Once the models are trained on the Buetepage datasets, we initialize the models for the NuiSI datasets with these pretrained weights:
```bash
python train_moveint.py --results path/to/nuisi_results --ckpt path/to/buetepage_checkpoint.pth --epochs 400 --num-components 3 --dataset DATASET --hidden-sizes 40 20 --latent-dim 5
```

For the comparison in the paper, we use a mix of both bimanual and unimanual robot-to-human handovers. This corresponds to the `HandoverHH` class in `dataset.py`, where no scaling is applied to the data. This model can be trained with:
```bash
python train_moveint.py --results path/to/handover_hh_results --epochs 400 --num-components 3 --dataset HandoverHH --hidden-sizes 80 40 --latent-dim 10
```

For executing handover behaviours on the Kobo robot, the class `HandoverKobo` should be given as the dataset argument instead of `HandoverHH` in the above command.
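As an example, the Kobo variant would then be trained with the same hyperparameters (the results path below is illustrative):

```bash
# Same hyperparameters as above; only the dataset class (and results path) change
python train_moveint.py --results results/handover_kobo --epochs 400 \
    --num-components 3 --dataset HandoverKobo --hidden-sizes 80 40 --latent-dim 10
```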
For the model that performs unimanual handovers with the Pepper, the following should be run:
```bash
python train_moveint.py --results path/to/handover_pepper_results --epochs 400 --num-components 3 --dataset UnimanualPepper --hidden-sizes 40 20 --latent-dim 5
```

The testing code below reports the mean squared prediction error and its standard deviation for each interaction in the dataset of the model being evaluated. To run the testing, simply run:
```bash
python test_moveint.py --ckpt /path/to_ckpt
```

If you use this repo in your work, please cite our paper:
```bibtex
@article{prasad2024moveint,
  title={MoVEInt: Mixture of Variational Experts for Learning Human-Robot Interactions from Demonstrations},
  author={Prasad, Vignesh and Kshirsagar, Alap and Koert, Dorothea and Stock-Homburg, Ruth and Peters, Jan and Chalvatzaki, Georgia},
  journal={IEEE Robotics and Automation Letters},
  year={2024},
  publisher={IEEE}
}
```