RoboLab uses a server-client architecture: your model runs as a standalone server process, and RoboLab connects to it through a lightweight inference client during evaluation.
| Policy | Client Class | Protocol | Default Port | Dependencies |
|---|---|---|---|---|
| Pi0 / Pi0-fast / Pi05 | `Pi0DroidJointposClient` | WebSocket (OpenPI) | 8000 | `openpi-client` |
| GR00T | `GR00TDroidJointposClient` | ZMQ | 5555 | `zmq`, `msgpack` |
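The table above can be read as a small dispatch registry. A hypothetical sketch of that mapping follows; the `CLIENT_REGISTRY` dict and `client_for` helper are illustrative names, not RoboLab's actual code:

```python
# Hypothetical registry mapping policy names to
# (client class name, protocol, default port), per the table above.
CLIENT_REGISTRY = {
    "pi0": ("Pi0DroidJointposClient", "websocket", 8000),
    "pi0-fast": ("Pi0DroidJointposClient", "websocket", 8000),
    "pi05": ("Pi0DroidJointposClient", "websocket", 8000),
    "gr00t": ("GR00TDroidJointposClient", "zmq", 5555),
}

def client_for(policy: str) -> tuple:
    """Look up the inference client entry for a policy name."""
    return CLIENT_REGISTRY[policy]
```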
All clients live in `robolab/inference/` and implement the `InferenceClient` base class:
```python
from robolab.inference import InferenceClient

class InferenceClient(ABC):
    def __init__(self, args) -> None: ...
    def infer(self, obs, instruction) -> dict: ...
    def reset(self): ...
```

For writing your own inference client, see Evaluating a New Policy.
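As a concrete illustration of the interface, here is a minimal self-contained sketch. The `EchoClient` and its canned zero action are hypothetical, and the base class below is a stand-in mirroring the signature above; a real client subclasses the `InferenceClient` shipped in `robolab.inference`:

```python
from abc import ABC, abstractmethod

# Stand-in mirroring robolab.inference.InferenceClient's interface.
class InferenceClient(ABC):
    def __init__(self, args) -> None:
        self.args = args

    @abstractmethod
    def infer(self, obs, instruction) -> dict: ...

    def reset(self):
        # Called between episodes; clear any cached state here.
        pass

# Hypothetical client that returns a fixed zero action, for illustration only.
class EchoClient(InferenceClient):
    def infer(self, obs, instruction) -> dict:
        # A real client would forward obs + instruction to a policy server
        # and return the server's predicted action.
        return {"action": [0.0] * 7, "instruction": instruction}

client = EchoClient(args=None)
out = client.infer(obs={"image": None}, instruction="pick up the banana")
```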
OpenPI uses a WebSocket-based policy server. The server runs separately (in its own environment), and RoboLab connects via the `openpi-client` package.
- Clone `git@github.com:xuningy/openpi.git` and follow the install instructions there. Do not install OpenPI in the same virtual environment as RoboLab; it runs separately.
- Install the OpenPI client in the RoboLab environment:

  ```shell
  cd robolab
  uv pip install -e ../openpi/packages/openpi-client
  ```
Open a separate terminal and launch the server. We set `XLA_PYTHON_CLIENT_MEM_FRACTION` to 50% to avoid JAX consuming all GPU memory.
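The same memory cap can also be set from Python, provided it happens before JAX initializes its GPU backend; a minimal sketch:

```python
import os

# Must be set before JAX allocates GPU memory; caps preallocation at 50%.
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.5"
```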
Pi05:

```shell
XLA_PYTHON_CLIENT_MEM_FRACTION=0.5 uv run scripts/serve_policy.py policy:checkpoint \
  --policy.config=pi05_droid_jointpos \
  --policy.dir=gs://openpi-assets-simeval/pi05_droid_jointpos
```

Pi0-fast:

```shell
XLA_PYTHON_CLIENT_MEM_FRACTION=0.5 uv run scripts/serve_policy.py policy:checkpoint \
  --policy.config=pi0_fast_droid_jointpos \
  --policy.dir=gs://openpi-assets-simeval/pi0_fast_droid_jointpos
```

Pi0:

```shell
XLA_PYTHON_CLIENT_MEM_FRACTION=0.5 uv run scripts/serve_policy.py policy:checkpoint \
  --policy.config=pi0_droid_jointpos \
  --policy.dir=gs://openpi-assets-simeval/pi0_droid_jointpos
```

PaliGemma Binning:

```shell
XLA_PYTHON_CLIENT_MEM_FRACTION=0.5 uv run scripts/serve_policy.py policy:checkpoint \
  --policy.config=paligemma_binning_droid_jointpos \
  --policy.dir=gs://openpi-assets-simeval/paligemma_binning_droid_jointpos
```

Then run the evaluation in the RoboLab environment:

```shell
cd robolab
uv run python examples/policy/run_eval.py --policy pi05 --headless
```

The default connection is `localhost:8000`. To change it:

```shell
uv run python examples/policy/run_eval.py --policy pi05 --remote-host <HOST> --remote-port <PORT>
```

RoboLab ships a built-in GR00T inference client (`robolab/inference/gr00t.py`) that communicates via ZMQ.
- Make sure `CUDA_HOME` and `PATH` are set correctly in your `.bashrc`. Otherwise, set them explicitly:

  ```shell
  export CUDA_HOME=/usr/local/cuda-12.4
  export PATH=/usr/local/cuda-12.4/bin:$PATH
  ```
- Clone and install:

  ```shell
  git clone --recurse-submodules https://github.com/nadunRanawaka1/Isaac-GR00T-n16-droid.git
  cd Isaac-GR00T-n16-droid
  git checkout fa1fd91f4798e333b7cd1e9d5a32fe55f105a16b
  uv sync --python 3.10
  uv pip install -e .
  ```
- Download the model checkpoint `oss-droid-v0.zip` and unzip it.
- Launch the server:

  ```shell
  uv run python gr00t/eval/run_gr00t_server.py \
    --model-path /path/to/oss-droid-v0/checkpoint-25000 \
    --embodiment-tag OXE_DROID_JOINT_POSITION_RELATIVE \
    --use-sim-policy-wrapper \
    --host 0.0.0.0 --port 5555
  ```

Then run the evaluation in the RoboLab environment:

```shell
cd robolab
uv run python examples/policy/run_eval.py --policy gr00t --remote-host 0.0.0.0 --remote-port 5555 --headless
```

For the full CLI reference, see Running Environments.
```shell
# Run on all benchmark tasks headlessly
uv run python examples/policy/run_eval.py --policy <policy> --headless

# Run on a specific task
uv run python examples/policy/run_eval.py --policy <policy> --task BananaInBowlTask

# Run on a tag of tasks
uv run python examples/policy/run_eval.py --policy <policy> --tag pick_place

# Run multiple runs per task (total episodes = num_runs * num_envs)
uv run python examples/policy/run_eval.py --policy <policy> --headless --num-runs 5 --num-envs 2

# Resume a previous run
uv run python examples/policy/run_eval.py --policy <policy> --headless --output-folder-name my_previous_run

# Enable subtask checking
uv run python examples/policy/run_eval.py --policy <policy> --headless --enable-subtask
```
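To see how the flags above compose, here is a hypothetical re-creation of the flag handling, including the episode-count arithmetic noted in the comment (5 runs × 2 envs = 10 episodes). This is illustrative only, not RoboLab's actual parser:

```python
import argparse

# Hypothetical parser mirroring the run_eval.py flags shown above.
parser = argparse.ArgumentParser()
parser.add_argument("--policy", required=True)
parser.add_argument("--headless", action="store_true")
parser.add_argument("--task")
parser.add_argument("--tag")
parser.add_argument("--num-runs", type=int, default=1)
parser.add_argument("--num-envs", type=int, default=1)

args = parser.parse_args(
    ["--policy", "pi05", "--headless", "--num-runs", "5", "--num-envs", "2"]
)

# Total episodes = num_runs * num_envs, as the comment above states.
total_episodes = args.num_runs * args.num_envs
```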