RoboLab dynamically combines tasks with user-specified robot, observations, actions, and simulation parameters at environment registration time. The core concepts are:
- Objects – USD object assets with physics properties for manipulation
- Scenes – USD-based environments containing objects, fixtures, and spatial layout
- Tasks – Language instructions, termination criteria, and scene bindings
- Subtask Checking – Granular progress tracking within tasks
- Conditionals – Predicate logic for defining success/failure conditions
- Event Tracking – Monitoring task-relevant events during execution
- Task Libraries – Managing task collections, generating metadata, and viewing statistics
- Robots – Robot articulation configs, actuators, and action spaces
- Cameras – Scene cameras and robot-attached cameras
- Lighting – Scene lighting (sphere, directional, and custom lights)
- Backgrounds – HDR/EXR dome light backgrounds
- Environment Registration – How tasks are combined with robot/observation/action configs into runnable Gymnasium environments
- Environment Generation – Contact sensor creation, subtask trackers, and runtime environment internals
- Inference Clients – Built-in policy clients and server setup instructions (OpenPI, GR00T)
- Running Environments – Creating environments, evaluation scripts, CLI reference, and robustness testing
- Data Storage and Output – Output directory structure, HDF5 layout, and episode result fields
- Analysis and Results Parsing – Scripts for summarizing, comparing, and auditing experiment results
If you're building a completely new benchmark and workflow, follow the steps below in order. Otherwise, pick whichever steps apply to your use case.
- Creating New Objects – Author USD object assets with rigid body, collision, and friction properties. Includes pipeline for catalog generation, screenshots, and physics tuning.
- Creating New Scenes – Compose objects into USD scenes using IsaacSim. Includes settling, metadata generation, and screenshot utilities.
- Creating New Tasks – Define task dataclasses with language instructions, termination criteria, and scene bindings. Tasks can live in your own repository.
- Managing Task Libraries – Organize tasks into collections, generate metadata (JSON, CSV, README), and compute statistics.
- Robots – Define or customize robot articulation, actuators, and action spaces. Use built-in configs (DROID, Franka) or bring your own from IsaacLab.
- Cameras – Set up scene cameras and robot-attached cameras (e.g., wrist cameras).
- Lighting – Configure scene lighting for evaluation or robustness testing.
- Backgrounds – Set HDR/EXR dome light backgrounds for realistic scene rendering.
- Setting Up Environment Registration – Register tasks with your robot/observation/action/simulation settings. For DROID with joint-position actions, the built-in registration can be used directly.
- Evaluating a New Policy – Implement an inference client for your model and run multi-task evaluation. Everything can live in your own separate repository.
- Scene Generation – Generate USD scenes from natural language using the `/robolab-scenegen` Claude Code skill. See `skills/robolab-scenegen/`.
- Task Generation – Generate task files from natural language using the `/robolab-taskgen` Claude Code skill. See `skills/robolab-taskgen/`.
- Running Environments – Creating environments, evaluation scripts, CLI reference, and robustness testing
- Data Storage and Output – Output directory structure, HDF5 layout, and episode result fields
- Analysis and Results Parsing – Scripts for summarizing, comparing, and auditing experiment results
- Debugging – Verbose/debug flags, world state inspection, and diagnostic scripts
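As an illustration of what the "Evaluating a New Policy" step involves, here is a minimal, hypothetical client interface. `PolicyClient`, its method names, and the `ZeroPolicy` smoke-test client are invented for this sketch; the actual protocol for talking to OpenPI or GR00T servers is defined by the built-in inference clients.

```python
import abc

class PolicyClient(abc.ABC):
    """Hypothetical interface a policy exposes to the evaluation loop."""

    @abc.abstractmethod
    def reset(self, instruction: str) -> None:
        """Start a new episode with a language instruction."""

    @abc.abstractmethod
    def get_action(self, observation: dict) -> list:
        """Map an observation (camera images, proprioception) to an action."""

class ZeroPolicy(PolicyClient):
    """Trivial client that always commands zeros -- useful as a smoke test."""

    def __init__(self, action_dim: int = 7):
        self.action_dim = action_dim
        self.instruction = None

    def reset(self, instruction: str) -> None:
        self.instruction = instruction

    def get_action(self, observation: dict) -> list:
        return [0.0] * self.action_dim

# One step of a (simplified) evaluation loop.
client = ZeroPolicy()
client.reset("pick up the red cube")
action = client.get_action({"wrist_cam": None, "joint_pos": [0.0] * 7})
```

A real client would replace `ZeroPolicy` with network calls to a policy server, but the evaluator only needs the `reset`/`get_action` shape.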