The GG-CNN is a lightweight, fully-convolutional network which predicts the quality and pose of antipodal grasps at every pixel in an input depth image. The lightweight and single-pass generative nature of GG-CNN allows for fast execution and closed-loop control, enabling accurate grasping in dynamic environments where objects are moved during the grasp attempt.
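Since the network predicts a grasp at every pixel, its output maps must be decoded into a single grasp pose. The sketch below is illustrative only (the map names, sizes, and angle encoding as sin(2θ)/cos(2θ) follow the paper's description, but this is not the repository's actual API):

```python
import numpy as np

# Hypothetical per-pixel outputs of a GG-CNN-style network for one depth image:
# a grasp-quality map, the grasp angle encoded as sin(2*theta) and cos(2*theta)
# maps, and a gripper-width map, all the same size as the input image.
H, W = 300, 300
rng = np.random.default_rng(0)
q_map = rng.random((H, W))
sin_map = rng.uniform(-1.0, 1.0, (H, W))
cos_map = rng.uniform(-1.0, 1.0, (H, W))
width_map = rng.uniform(0.0, 150.0, (H, W))

def best_grasp(q, s, c, w):
    """Pick the pixel with the highest grasp quality and decode its pose."""
    idx = np.unravel_index(np.argmax(q), q.shape)
    # The angle is encoded as sin(2*theta)/cos(2*theta) so that theta and
    # theta + pi (the same antipodal grasp) map to one value; recover theta
    # with atan2 and halve it.
    angle = 0.5 * np.arctan2(s[idx], c[idx])
    return idx, angle, w[idx]

(row, col), angle, width = best_grasp(q_map, sin_map, cos_map, width_map)
```

Decoding the angle this way keeps it in (-π/2, π/2], which is all that is needed for a symmetric two-fingered gripper.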
This repository contains the implementation of the Generative Grasping Convolutional Neural Network (GG-CNN) from the paper:
Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach
Douglas Morrison, Peter Corke, Jürgen Leitner
Robotics: Science and Systems (RSS) 2018
If you use this work, please cite:
```
@inproceedings{morrison2018closing,
	title={Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach},
	author={Morrison, Douglas and Corke, Peter and Leitner, J{\"u}rgen},
	booktitle={Robotics: Science and Systems (RSS)},
	year={2018}
}
```
Contact
For any questions or comments, please contact Doug Morrison.
This code was developed with Python 2.7 on Ubuntu 16.04. Python requirements can be found in `requirements.txt`.

The pre-trained Keras model used in the RSS paper can be downloaded by running `download_pretrained_ggcnn.sh` in the `data` folder.
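The network takes a single-channel depth image as input. A minimal NumPy sketch of typical preprocessing for a depth-based grasping network follows; the crop size and normalisation here are illustrative assumptions, and the repository's own scripts may differ:

```python
import numpy as np

def preprocess_depth(depth, out_size=300):
    """Centre-crop a depth image and zero-centre it.
    The 300x300 crop and mean-subtraction are illustrative, not the
    repository's exact pipeline."""
    h, w = depth.shape
    top = (h - out_size) // 2
    left = (w - out_size) // 2
    crop = depth[top:top + out_size, left:left + out_size].astype(np.float32)
    crop = np.nan_to_num(crop)   # replace invalid (NaN) depth readings with 0
    crop = crop - crop.mean()    # zero-centre the depth values
    # Add batch and channel dimensions for a Keras-style model input.
    return crop.reshape((1, out_size, out_size, 1))

batch = preprocess_depth(np.random.rand(480, 640).astype(np.float32))
```

A batch prepared this way could then be passed to the loaded Keras model's `predict` method.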
To train your own GG-CNN:

1. Download the Cornell Grasping Dataset by running `download_cornell.sh` in the `data` folder.
2. Run `generate_dataset.py` to generate the manipulated dataset. Dataset creation settings can be specified in `generate_dataset.py`.
3. Specify the path to the `INPUT_DATASET` in `train_ggcnn.py`.
4. Run `train_ggcnn.py`.
5. You can visualise the detected grasp outputs and evaluate against the ground-truth grasps of the Cornell Grasping Dataset by running `evaluate.py`.
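Evaluation against the Cornell dataset's ground-truth grasps is commonly done with the rectangle metric: a predicted grasp counts as correct if its angle is within 30 degrees of a ground-truth rectangle and the two rectangles have an intersection-over-union above 0.25. The sketch below is a simplified, hypothetical version of that check (it uses axis-aligned boxes `(x1, y1, x2, y2)` for brevity, whereas the real evaluation uses rotated rectangles):

```python
import numpy as np

def grasp_matches(pred_angle, gt_angle, pred_box, gt_box,
                  angle_thresh=np.deg2rad(30), iou_thresh=0.25):
    """Simplified rectangle metric: angle within 30 degrees and IoU > 0.25.
    Boxes are axis-aligned (x1, y1, x2, y2) for illustration only."""
    # Angles that differ by pi describe the same antipodal grasp.
    d = abs(pred_angle - gt_angle) % np.pi
    d = min(d, np.pi - d)
    if d > angle_thresh:
        return False
    # Axis-aligned intersection-over-union.
    ix1, iy1 = max(pred_box[0], gt_box[0]), max(pred_box[1], gt_box[1])
    ix2, iy2 = min(pred_box[2], gt_box[2]), min(pred_box[3], gt_box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred_box) + area(gt_box) - inter
    return inter / union > iou_thresh

# A nearby prediction matches; one rotated far away does not.
ok = grasp_matches(0.1, 0.0, (10, 10, 50, 30), (12, 12, 52, 32))
bad = grasp_matches(1.2, 0.0, (10, 10, 50, 30), (12, 12, 52, 32))
```

A prediction is usually counted as correct if it matches *any* of the several ground-truth rectangles annotated for an object.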
Our ROS implementation for running the grasping system on a Kinova Mico arm can be found in the repository https://github.com/dougsm/ggcnn_kinova_grasping.