Question
Description
I would like to implement a navigation + grasping task, similar to the example shown in the latest branch.
However, I am not sure what the recommended workflow is for building such a task in Isaac Lab.
Specifically:
Should this type of task be trained with reinforcement learning, imitation learning (e.g., via Isaac Lab Mimic), or a combination of both?
What is the recommended end-to-end pipeline for navigation + manipulation tasks in Isaac Lab?
Since the repository recently added the environment `Isaac-PickPlace-Locomanipulation-G1-Abs-v0`, I would like to use it as a concrete example to understand the intended workflow.
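For reference, this is how I am currently trying to create the environment. It is a minimal sketch following the pattern used by the other Isaac-* tasks; I am assuming the task is registered through `isaaclab_tasks` and that its default config can be pulled with `parse_env_cfg`:

```python
from isaaclab.app import AppLauncher

# Isaac Sim must be launched before importing anything else from Isaac Lab.
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app

import gymnasium as gym

import isaaclab_tasks  # noqa: F401  (registers the Isaac-* tasks with gymnasium)
from isaaclab_tasks.utils import parse_env_cfg

# Pull the default task config and spawn a single environment instance.
env_cfg = parse_env_cfg("Isaac-PickPlace-Locomanipulation-G1-Abs-v0", num_envs=1)
env = gym.make("Isaac-PickPlace-Locomanipulation-G1-Abs-v0", cfg=env_cfg)

obs, info = env.reset()
print("observation groups:", list(obs.keys()))
```

This lets me inspect the observation and action spaces, but it does not tell me which training route the task is designed for.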
My Questions
What is the recommended approach for navigation + grasping tasks?
- Pure RL?
- Pure imitation learning (BC / Diffusion Policy)?
- Teleoperation → Isaac Lab Mimic → RL fine-tuning? (see the warm-start sketch below)
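To clarify what I mean by the third option, the hand-off I imagine between the imitation and RL stages is a simple warm start of the actor network. Everything here is illustrative: `bc_checkpoint.pt`, the layer sizes, and the observation/action dimensions are made up, and I do not know whether Isaac Lab has a supported path for this:

```python
import torch
import torch.nn as nn

# Hypothetical actor MLP shared between the BC stage and the RL stage.
# The 123/29 observation/action sizes are placeholders, not the real
# dimensions of the G1 locomanipulation task.
actor = nn.Sequential(
    nn.Linear(123, 256), nn.ELU(),
    nn.Linear(256, 256), nn.ELU(),
    nn.Linear(256, 29),
)

# Warm-start RL from the behavior-cloned weights instead of a random init,
# then hand the actor to the RL runner (e.g. rsl_rl) for fine-tuning.
bc_state = torch.load("bc_checkpoint.pt", map_location="cpu")
actor.load_state_dict(bc_state)
```

Is this warm-start pattern what the combined workflow is supposed to look like, or does Isaac Lab expect something else entirely?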
Could you provide an overview of the expected pipeline, using `Isaac-PickPlace-Locomanipulation-G1-Abs-v0` as the example? For example (a rough evaluation sketch follows this list):
- Environment configuration
- Teleoperation / demo collection
- Mimic training
- RL fine-tuning
- Evaluation
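To anchor the evaluation step, this is roughly the rollout I would expect to run once a policy exists. `load_trained_policy` is a hypothetical placeholder; I do not know whether the checkpoint is meant to be restored through robomimic, rsl_rl, or something else:

```python
from isaaclab.app import AppLauncher

# Launch the simulator before any other Isaac Lab imports.
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app

import gymnasium as gym
import torch

import isaaclab_tasks  # noqa: F401  (registers the Isaac-* tasks)
from isaaclab_tasks.utils import parse_env_cfg


def load_trained_policy(checkpoint_path: str):
    """Hypothetical placeholder: restore the trained mimic/RL policy here."""
    raise NotImplementedError(checkpoint_path)


env_cfg = parse_env_cfg("Isaac-PickPlace-Locomanipulation-G1-Abs-v0", num_envs=1)
env = gym.make("Isaac-PickPlace-Locomanipulation-G1-Abs-v0", cfg=env_cfg)
policy = load_trained_policy("/path/to/checkpoint.pt")

obs, info = env.reset()
for _ in range(500):
    with torch.inference_mode():
        actions = policy(obs)  # absolute pose targets, per the -Abs- variant
    obs, reward, terminated, truncated, info = env.step(actions)

env.close()
simulation_app.close()
```

If the intended evaluation entry point is different, a pointer to the correct script would already answer most of my question.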
If navigation + grasping is intended to follow a standard pattern in Isaac Lab, is there any documentation or reference script?
Build Info
- Isaac Lab Version: [e.g. 2.3.0]
- Isaac Sim Version: 5.1