Talk Through It: End User Directed Robot Learning
Carl Winge, Adam Imdieke, Bahaa Aldeeb, Dongyeop Kang, Karthik Desingh
Talk Through It is a framework for learning robot manipulation from natural language instructions. Given a factory model that can perform primitive actions, users can instruct the robot to perform more complex skills and tasks. We fine-tune the factory model on saved recordings of these skills and tasks to create home models that can perform the primitive actions as well as the higher-level skills and tasks.
Project website: https://talk-through-it.github.io
This repository started as a clone of PerAct. The requirements should be the same.
Please open an issue if you encounter problems.
```
mamba create -n talk
mamba activate talk
mamba install python=3.8
```

Follow the instructions from the official PyRep repo, reproduced here for convenience:
PyRep requires version 4.1 of CoppeliaSim. Download it from the Coppelia Robotics website.
Once you have downloaded CoppeliaSim, you can pull PyRep from git:

```
git clone https://github.com/stepjam/PyRep.git
cd PyRep
```

Add the following to your ~/.bashrc file (NOTE the 'EDIT ME' in the first line):
```
export COPPELIASIM_ROOT=EDIT/ME/PATH/TO/COPPELIASIM/INSTALL/DIR
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$COPPELIASIM_ROOT
export QT_QPA_PLATFORM_PLUGIN_PATH=$COPPELIASIM_ROOT
```

Remember to source your bashrc (`source ~/.bashrc`) or zshrc (`source ~/.zshrc`) after this.
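After sourcing, you can sanity-check that the variables are visible in your shell. This is a minimal sketch; it assumes the `EDIT/ME` placeholder above has already been replaced with your real CoppeliaSim install path, and that the install directory contains the standard `coppeliaSim.sh` launcher:

```shell
# Verify the CoppeliaSim environment variables are set in the current shell.
if [ -z "${COPPELIASIM_ROOT:-}" ]; then
    echo "COPPELIASIM_ROOT is not set - re-check ~/.bashrc and re-source it"
else
    echo "COPPELIASIM_ROOT=$COPPELIASIM_ROOT"
    # The install directory normally contains the coppeliaSim.sh launcher.
    [ -f "$COPPELIASIM_ROOT/coppeliaSim.sh" ] || echo "warning: coppeliaSim.sh not found there"
fi
```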
Finally, install the python library:

```
pip3 install -r requirements.txt
pip3 install .
```

PerAct uses our RLBench fork.
```
cd <install_dir>
git clone -b peract https://github.com/RPM-lab-UMN/RLBench.git  # note: 'peract' branch
cd RLBench
pip install -r requirements.txt
python setup.py develop
```

PerAct uses our YARR fork.
```
cd <install_dir>
git clone -b peract https://github.com/RPM-lab-UMN/YARR.git  # note: 'peract' branch
cd YARR
pip install -r requirements.txt
python setup.py develop
```

Clone:
```
cd <install_dir>
git clone https://github.com/RPM-lab-UMN/talk-through-it.git
```

Install:

```
cd talk-through-it
pip install git+https://github.com/openai/CLIP.git
mamba install einops pytorch3d transformers
```

1. Generate Level-1 motions data using `RLBench/tools/dataset_generator.py`.
2. Train the observation-dependent model by editing `conf/config.yaml` and running `train.py`.
3. Train the observation-independent model by running `train_l2a.py` and `train_classifier.py`.
4. Collect demonstrations using language by running `record_model_1.py`.
5. Evaluate observation-dependent models by editing `conf/eval.yaml` and running `eval.py`.
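Before running the steps above, it can help to sanity-check the installation. This is a minimal sketch; the Python module names (`pyrep`, `rlbench`, `yarr`, `clip`, etc.) are assumptions based on the packages installed above and could differ if a fork renames them:

```shell
# Report which of the expected Python packages import cleanly
# in the active environment.
for pkg in pyrep rlbench yarr clip einops pytorch3d transformers; do
    if python -c "import $pkg" 2>/dev/null; then
        echo "$pkg: OK"
    else
        echo "$pkg: MISSING"
    fi
done
```

Any `MISSING` line points back at the corresponding install step above.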
If you find this work useful, please cite:

```
@ARTICLE{10608414,
  author={Winge, Carl and Imdieke, Adam and Aldeeb, Bahaa and Kang, Dongyeop and Desingh, Karthik},
  journal={IEEE Robotics and Automation Letters},
  title={Talk Through It: End User Directed Manipulation Learning},
  year={2024},
  pages={1-8},
  keywords={Robots;Task analysis;Production facilities;Training;Natural languages;Grippers;Cognition;Learning from Demonstration;Incremental Learning;Human-Centered Robotics},
  doi={10.1109/LRA.2024.3433309}
}
```