
RoboLab Documentation

How RoboLab Works

RoboLab dynamically combines tasks with user-specified robot, observation, action, and simulation configurations at environment registration time. The core concepts are:

  • Objects — USD object assets with physics properties for manipulation
  • Scenes — USD-based environments containing objects, fixtures, and spatial layout
  • Tasks — Language instructions, termination criteria, and scene bindings
  • Subtask Checking — Granular progress tracking within tasks
  • Conditionals — Predicate logic for defining success/failure conditions
  • Event Tracking — Monitoring task-relevant events during execution
  • Task Libraries — Managing task collections, generating metadata, and viewing statistics
  • Robots — Robot articulation configs, actuators, and action spaces
  • Cameras — Scene cameras and robot-attached cameras
  • Lighting — Scene lighting (sphere, directional, and custom lights)
  • Backgrounds — HDR/EXR dome light backgrounds
  • Environment Registration — How tasks are combined with robot/observation/action configs into runnable Gymnasium environments
  • Environment Generation — Contact sensor creation, subtask trackers, and runtime environment internals
  • Inference Clients — Built-in policy clients and server setup instructions (OpenPI, GR00T)
  • Running Environments — Creating environments, evaluation scripts, CLI reference, and robustness testing
  • Data Storage and Output — Output directory structure, HDF5 layout, and episode result fields
  • Analysis and Results Parsing — Scripts for summarizing, comparing, and auditing experiment results
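To make the registration-time composition above concrete, here is a minimal, purely illustrative sketch in Python. Every name in it (`TaskSpec`, `EnvSpec`, `register_env`, `REGISTRY`) is hypothetical, not RoboLab's actual API; it only shows the idea of binding a task to robot/observation/action/simulation settings when an environment ID is registered.

```python
from dataclasses import dataclass

# Hypothetical sketch; TaskSpec/EnvSpec/register_env are NOT RoboLab's
# real classes, just an illustration of registration-time composition.

@dataclass
class TaskSpec:
    instruction: str    # natural-language goal
    scene: str          # USD scene binding
    termination: str    # name of the success predicate

@dataclass
class EnvSpec:
    task: TaskSpec
    robot: str
    observations: list
    actions: str
    sim_dt: float = 1.0 / 60.0  # simulation timestep

REGISTRY = {}

def register_env(env_id, task, robot, observations, actions):
    """Combine a task with robot/observation/action configs into one env spec."""
    spec = EnvSpec(task=task, robot=robot,
                   observations=observations, actions=actions)
    REGISTRY[env_id] = spec
    return spec
```

The key design point is that the task itself stays robot-agnostic; the robot, observation, and action choices are only bound together at registration, so one task can back many environment variants.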

Development Workflow

If you're building a completely new benchmark and workflow, follow the steps below in order. Otherwise, jump to whichever section applies to your use case.

Creating new assets, tasks, and benchmarks

  1. Creating New Objects — Author USD object assets with rigid body, collision, and friction properties. Includes a pipeline for catalog generation, screenshots, and physics tuning.
  2. Creating New Scenes — Compose objects into USD scenes using IsaacSim. Includes settling, metadata generation, and screenshot utilities.
  3. Creating New Tasks — Define task dataclasses with language instructions, termination criteria, and scene bindings. Tasks can live in your own repository.
  4. Managing Task Libraries — Organize tasks into collections, generate metadata (JSON, CSV, README), and compute statistics.
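Step 3 above mentions task dataclasses with language instructions and termination criteria, and the concept list mentions subtask checking. A hedged sketch of how those pieces could fit together is shown below; `Task`, `Subtask`, and `progress` are illustrative names only, not RoboLab's real interfaces.

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative only: RoboLab's real task dataclasses will differ.

@dataclass
class Subtask:
    name: str
    check: Callable[[dict], bool]  # predicate over the current sim state

@dataclass
class Task:
    instruction: str               # language instruction for the policy
    scene: str                     # USD scene binding
    subtasks: list = field(default_factory=list)

    def progress(self, state: dict) -> float:
        """Fraction of subtasks currently satisfied (granular progress)."""
        if not self.subtasks:
            return 0.0
        done = sum(1 for s in self.subtasks if s.check(state))
        return done / len(self.subtasks)
```

Expressing subtask checks as plain predicates over simulator state is one way to get both granular progress tracking and a termination criterion (e.g. "all subtasks satisfied") from the same definitions.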

Configuring robots, cameras, lighting, and backgrounds

  • Robots — Define or customize robot articulation, actuators, and action spaces. Use built-in configs (DROID, Franka) or bring your own from IsaacLab.
  • Cameras — Set up scene cameras and robot-attached cameras (e.g., wrist cameras).
  • Lighting — Configure scene lighting for evaluation or robustness testing.
  • Backgrounds — Set HDR/EXR dome light backgrounds for realistic scene rendering.
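The camera, lighting, and background items above are all per-scene render configuration. As a rough sketch of what such configs might look like (the `CameraCfg`/`LightCfg` names and fields here are hypothetical, not RoboLab's actual classes):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical config shapes; real RoboLab configs will differ.

@dataclass
class CameraCfg:
    name: str
    width: int = 640
    height: int = 480
    attach_to: Optional[str] = None   # e.g. a wrist link for robot-attached cameras

@dataclass
class LightCfg:
    kind: str                         # "sphere", "directional", or "dome"
    intensity: float = 1000.0
    hdr_path: Optional[str] = None    # HDR/EXR background for dome lights

# A scene's render setup is then a bundle of these configs.
scene_cameras = [CameraCfg("overview"),
                 CameraCfg("wrist", attach_to="wrist_link")]
scene_lights = [LightCfg("dome", hdr_path="backgrounds/studio.exr")]
```

Keeping cameras, lights, and backgrounds as data rather than code makes robustness testing straightforward: sweeping lighting or background variants is just swapping config values.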

Evaluating a new policy against the benchmark

  1. Setting Up Environment Registration — Register tasks with your robot/observation/action/simulation settings. For DROID with joint-position actions, the built-in registration can be used directly.
  2. Evaluating a New Policy — Implement an inference client for your model and run multi-task evaluation. Everything can live in your own separate repository.
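The inference-client idea in step 2 can be sketched as a small interface: the client maps an observation to an action, and the evaluation loop is agnostic to whether the client wraps an OpenPI server, a GR00T server, or a local model. Everything below is a hypothetical shape (`PolicyClient`, `RandomPolicyClient`, `run_episode` are invented for illustration), not RoboLab's actual client API.

```python
from typing import Protocol

# Hypothetical interface; OpenPI / GR00T servers expose their own APIs.

class PolicyClient(Protocol):
    def infer(self, observation: dict) -> list:
        """Return an action vector for the current observation."""
        ...

class RandomPolicyClient:
    """Stand-in client, useful for smoke-testing an evaluation loop."""
    def __init__(self, action_dim: int):
        self.action_dim = action_dim

    def infer(self, observation: dict) -> list:
        return [0.0] * self.action_dim

def run_episode(client, env_step, steps: int = 3):
    """Drive a (stubbed) environment with the client's actions."""
    trajectory = []
    obs = {"joint_pos": [0.0] * 7}
    for _ in range(steps):
        action = client.infer(obs)
        obs = env_step(action)     # env_step stands in for the real env
        trajectory.append(action)
    return trajectory
```

Because the loop only depends on `infer`, swapping in a real model server means implementing one method, which is why the evaluation code can live in a separate repository from the benchmark.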

AI Workflows

Running and debugging