Official code of the paper "Active-Perceptive Language-Oriented Grasp Policy for Heavily Cluttered Scenes".
This code has been tested on Ubuntu 20.04 with Python 3.9, PyTorch 2.3.1, and CUDA 11.8.
Create a new Conda environment:

```bash
conda create -n apeg python=3.9
conda activate apeg
```

Install PyTorch 2.3.1 with CUDA 11.8:

```bash
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu118
```
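Optionally, verify that torch sees the GPU before continuing. This quick check is not part of the original instructions, but it uses only standard PyTorch calls:

```python
import torch

print(torch.__version__)          # expect 2.3.1+cu118
print(torch.version.cuda)         # expect 11.8
print(torch.cuda.is_available())  # True if a CUDA-capable GPU and driver are present
```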
Install pytorch3d (please refer to this guide):

```bash
pip install pytorch3d
```
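A pytorch3d build that does not match the installed torch/CUDA combination typically fails at import time, so a quick import check (not part of the original instructions) is a cheap way to verify this step:

```python
import pytorch3d

# If the wheel was built against a mismatched torch/CUDA combination,
# this import (or the first use of a compiled op) is where it fails.
print(pytorch3d.__version__)
```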
Install ManiSkill (manually modified) and CustomTasks:

```bash
pip install -e .
cd CustomTasks
pip install -e .
```
Install LocalGrasp requirements:

```bash
cd ../localgrasp
SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True pip install -r requirements.txt
```
Install the remaining requirements:

```bash
cd ../
pip install -r requirements.txt
```

Checkpoints can be downloaded from Here.
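The checkpoint format is not documented here; if you want to inspect a downloaded file before running the code, a minimal sketch using only standard `torch.load` (the path is hypothetical, use wherever you saved the download):

```python
import torch

# "checkpoints/apeg.pth" is a hypothetical path; adjust to your download location.
ckpt = torch.load("checkpoints/apeg.pth", map_location="cpu")

# Most PyTorch checkpoints are dicts of weights and/or metadata;
# listing the keys shows what this one actually contains.
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
else:
    print(type(ckpt))
```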
Run the demo code. To train the model instead, set the parameter `evaluate` to `False` and change the `epi_ids` (an illustrative sketch follows the commands below).
```bash
cd ViLG/scripts
python main.py
```
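How `evaluate` and `epi_ids` are defined depends on `ViLG/scripts/main.py`; the snippet below is purely illustrative (only the two names come from this README, the structure is hypothetical):

```python
# Hypothetical illustration -- adapt to where main.py actually defines these.
evaluate = False       # True: run the demo/evaluation; False: train the model
epi_ids = [0, 1, 2]    # hypothetical episode IDs; replace with your own selection
```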
Please cite our paper in your publications if it helps your research.
```bibtex
@article{dai2025active,
  title={Active-Perceptive Language-Oriented Grasp Policy for Heavily Cluttered Scenes},
  author={Dai, Yixiang and Chen, Siang and Yang, Kaiqin and Hu, Dingchang and Xie, Pengwei and Li, Guosheng and Shen, Yuan and Wang, Guijin},
  journal={IEEE Robotics and Automation Letters},
  year={2025},
  publisher={IEEE}
}
```
