A task library is a collection of task definitions organized into subfolders under a common root directory. RoboLab ships a built-in library in robolab/tasks/, but you can create and maintain your own task library in a separate repository.
To build a task library, write Task dataclasses with USD scenes, language instructions, and termination criteria. See Creating Tasks for the full authoring guide. Once you have tasks, see Environment Registration for how to register and run them.
The rest of this page covers how to organize your tasks into a library and use RoboLab's utility scripts to generate metadata and statistics.
A task library follows this layout:
```
my_task_library/
├── scenes/
│   ├── scene_a.usda
│   └── scene_b.usda
└── tasks/
    ├── apple_in_bowl.py
    ├── banana_on_plate.py
    ├── block_stacking.py
    └── _metadata/            # Auto-generated by the metadata scripts
        ├── task_metadata.json
        └── task_table.csv
```
- `scenes/` — USD scene files used by your tasks (see Scenes)
- `tasks/` — Task definition Python files, one per task
- `_metadata/` — Auto-generated metadata (JSON, CSV, markdown table)
Each task file defines a Task dataclass that binds a scene to language instructions and termination criteria. See Creating Tasks for the full authoring guide.
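As a rough sketch of what such a file contains (the field names `scene`, `instruction`, and `terminations` below are illustrative assumptions, not RoboLab's actual `Task` API; see Creating Tasks for the real field names):

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these field names are assumptions,
# not RoboLab's actual Task API.
@dataclass
class Task:
    scene: str                # path to the USD scene file
    instruction: str          # natural-language goal for the policy
    terminations: list = field(default_factory=list)  # success criteria

# One task per file, binding a scene to an instruction and termination criteria.
APPLE_IN_BOWL = Task(
    scene="scenes/scene_a.usda",
    instruction="Place the apple in the bowl.",
    terminations=["apple_inside_bowl"],
)
```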
Once you have task files, register them as runnable Gymnasium environments so they can be instantiated by your evaluation script. Your library does not need to live inside the RoboLab repo. See Environment Registration for the full workflow.
RoboLab provides utility scripts for generating metadata, README tables, and statistics for your task library. These live in robolab/tasks/_utils/ and work with any task directory — you point them at your own library the same way RoboLab uses them internally.
| Script | Purpose |
|---|---|
| `generate_task_metadata.py` | Scan task files and produce JSON, CSV, and a README table |
| `compute_task_statistics.py` | Analyze metadata: attributes, difficulty, objects, subtasks, episodes, scenes |
| `load_task_info.py` | Importable helpers for extracting metadata from task classes programmatically |
Running `generate_task_metadata.py` scans all task files, extracts metadata (instructions, attributes, scenes, subtasks, difficulty scores), and writes structured output files:
```bash
python robolab/tasks/_utils/generate_task_metadata.py \
    --tasks-folder /path/to/my_task_library/tasks \
    --output-folder /path/to/my_task_library/tasks/_metadata
```

Output files:
- `_metadata/task_metadata.json` — Complete metadata for all tasks
- `_metadata/task_table.csv` — Tabular summary
- `tasks/README.md` — Markdown table (auto-generated at the root of the tasks folder)
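Because `task_metadata.json` is plain JSON, downstream tooling can consume it directly. A minimal sketch, using a made-up two-task snippet rather than the real schema (the `difficulty` and `scene` keys here are assumptions):

```python
import json
import tempfile
from pathlib import Path

# Made-up two-task snippet; the real schema is whatever
# generate_task_metadata.py emits for your library.
sample = {
    "apple_in_bowl": {"scene": "scenes/scene_a.usda", "difficulty": 0.3},
    "block_stacking": {"scene": "scenes/scene_b.usda", "difficulty": 0.8},
}
path = Path(tempfile.gettempdir()) / "task_metadata.json"
path.write_text(json.dumps(sample, indent=2))

# Downstream tooling can filter tasks without re-scanning the task files.
metadata = json.loads(path.read_text())
hard_tasks = [name for name, info in metadata.items() if info["difficulty"] > 0.5]
print(hard_tasks)  # ['block_stacking']
```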
Once metadata has been generated, you can view statistics without re-scanning:
```bash
# Summary of task attributes and difficulty distribution
python robolab/tasks/_utils/compute_task_statistics.py \
    --metadata-file /path/to/my_task_library/tasks/_metadata/task_metadata.json

# Full report (attributes, objects, subtasks, episodes, scenes)
python robolab/tasks/_utils/compute_task_statistics.py \
    --metadata-file /path/to/my_task_library/tasks/_metadata/task_metadata.json \
    --verbose

# Individual analysis sections
python robolab/tasks/_utils/compute_task_statistics.py --objects     # Object frequency
python robolab/tasks/_utils/compute_task_statistics.py --subtasks    # Subtask complexity
python robolab/tasks/_utils/compute_task_statistics.py --episodes    # Episode lengths
python robolab/tasks/_utils/compute_task_statistics.py --difficulty  # Difficulty scoring
python robolab/tasks/_utils/compute_task_statistics.py --by-scene    # Tasks grouped by scene
```

To save the full report to disk:

```bash
python robolab/tasks/_utils/compute_task_statistics.py --verbose --save
```

This writes `_metadata/task_report.txt` alongside the JSON and CSV files.
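Each of these flags aggregates one field of the metadata. In spirit, the `--objects` report is a frequency count over per-task object lists; a standalone sketch with made-up data (the `objects` key is an assumption, not the script's actual schema):

```python
from collections import Counter

# Hypothetical per-task metadata; real entries come from task_metadata.json.
tasks = {
    "apple_in_bowl": {"objects": ["apple", "bowl"]},
    "banana_on_plate": {"objects": ["banana", "plate"]},
    "block_stacking": {"objects": ["block", "block"]},
}

# Object frequency across the library, roughly what --objects reports.
object_counts = Counter(obj for t in tasks.values() for obj in t["objects"])
print(object_counts.most_common(1))  # [('block', 2)]
```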
To check that all tasks have valid contact lists, terminations, and scene references:
```bash
# Validate your own task library
python scripts/check_tasks_valid.py --tasks-folder /path/to/my_task_library/tasks

# Validate the built-in benchmark tasks (default)
python scripts/check_tasks_valid.py
```

Re-run `generate_task_metadata.py` whenever you add, remove, or modify tasks. This regenerates all output files (JSON, CSV, README) so they stay in sync with your task definitions.
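The exact checks in `check_tasks_valid.py` are internal to RoboLab, but the scene-reference check amounts to verifying that each task's scene path resolves under the library root. A standalone sketch of that idea, with a hypothetical `missing_scenes` helper:

```python
import tempfile
from pathlib import Path

def missing_scenes(tasks, library_root):
    """Return task names whose referenced scene file does not exist."""
    root = Path(library_root)
    return sorted(name for name, scene in tasks.items()
                  if not (root / scene).exists())

# Demo in a throwaway directory: one valid scene reference, one dangling.
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "scenes").mkdir()
    (Path(tmp) / "scenes" / "scene_a.usda").touch()
    tasks = {
        "apple_in_bowl": "scenes/scene_a.usda",
        "block_stacking": "scenes/scene_missing.usda",
    }
    print(missing_scenes(tasks, tmp))  # ['block_stacking']
```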
A typical workflow when updating your task library:
- Add or edit task files — see Creating Tasks for the authoring guide
- Regenerate metadata — run `python robolab/tasks/_utils/generate_task_metadata.py --tasks-folder <your_tasks_dir>`
- Review statistics — run `python robolab/tasks/_utils/compute_task_statistics.py --metadata-file <your_metadata_dir>/task_metadata.json --verbose`
- Register and test — follow Environment Registration to register your updated tasks and run them
- Creating Tasks — How to write task definitions (scenes, terminations, instructions, subtasks)
- Benchmark — The built-in benchmark task library and its difficulty scoring
- Environment Registration — Registering tasks as runnable Gymnasium environments
- Evaluating a New Policy — Running evaluation against your tasks
- `robolab/tasks/_utils/README.md` — Quick reference for all utility scripts