vla-evaluation-harness


Benchmarks: LIBERO, SimplerEnv, CALVIN, ManiSkill2, LIBERO-Pro, RoboCasa, VLABench, MIKASA-Robo, RoboTwin, RLBench, RoboCerebra, LIBERO-Mem, BEHAVIOR-1K, Kinetix, FurnitureBench
Models (official): OpenVLA, π₀, π₀-FAST, GR00T N1.6, OFT, X-VLA, CogACT, RTC, MemVLA
Models (dexbotic): DB-CogACT
Models (starVLA): QwenGR00T, QwenOFT, QwenPI, QwenFAST

One framework to evaluate any VLA model on any robot simulation benchmark.

Why vla-evaluation-harness?

  • Batch Parallel Evaluation: episode sharding + batched GPU inference → 47× throughput (2 000 LIBERO episodes in 18 min on 1× H100).
  • Zero Setup: benchmarks run in Docker, model servers are single-file uv scripts — no dependency conflicts.
  • AI-Assisted Integration: built-in Claude Code skills for adding benchmarks and model servers — scaffold new integrations in minutes, not hours.

Motivation

VLA models are evaluated on LIBERO, CALVIN, SimplerEnv, ManiSkill, and others — but each benchmark has its own dependencies, observation format, and evaluation protocol. In practice, every research team ends up maintaining private eval forks per benchmark. Results diverge. Bug fixes don't propagate. No one tests under real-time conditions where the environment keeps moving during inference.

With vla-evaluation-harness, each model is integrated once and each benchmark is integrated once; the full cross-evaluation matrix then fills in automatically.

How: our abstraction layer fully decouples models from benchmarks.

  • Benchmarks run inside Docker — no dependency hell, exact reproducibility.
  • Model servers are standalone uv scripts with inline dependency declarations — zero manual setup.
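Concretely, "inline dependency declarations" refers to PEP 723 script metadata, which uv reads when launching a single-file script. A hypothetical minimal server file might start like this (the dependency list and function names are illustrative, not the harness's actual server):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#   "torch",
#   "fastapi",
#   "uvicorn",
# ]
# ///
# `uv run serve_model.py` reads the block above and resolves these
# dependencies into an isolated environment -- no manual pip install.

def main() -> None:
    # Load the model and start serving predictions (sketch only).
    ...

if __name__ == "__main__":
    main()
```

Because the environment is created per-script, two model servers with conflicting dependency pins can coexist on the same host.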

See Architecture for how the pieces connect.


Installation

pip install vla-eval

Or from source:

git clone https://github.com/allenai/vla-evaluation-harness.git
cd vla-evaluation-harness
uv sync --python 3.11 --all-extras --dev

Quick Start

Two terminals: one for the model server (GPU), one for the benchmark client.

# Terminal 1 — model server (runs on host with GPU)
vla-eval serve --config configs/model_servers/dexbotic_cogact_libero.yaml

# Terminal 2 — run evaluation (benchmark runs in Docker by default)
vla-eval run --config configs/libero_smoke_test.yaml

Results are saved to results/ as JSON. The benchmark runs inside Docker by default — pass --no-docker for local development.

For full evaluation (10 tasks × 50 episodes):

vla-eval run --config configs/libero_spatial.yaml

See Reproduction Reports for verified scores and per-model details.

Need faster runs? See Batch Parallel Evaluation: 2 000 LIBERO episodes in ~18 min (47× vs sequential).


Batch Parallel Evaluation

A full evaluation takes hours sequentially. Two layers of parallelism bring this down to minutes:

Figure: wall-clock evaluation time, sequential vs batch parallel, across LIBERO (47×), CALVIN (16×), SimplerEnv (12×)

Episode sharding splits (task, episode) pairs across N independent processes (RFC-0006). Each shard connects to the same model server, where a BatchPredictModelServer batches their inference requests into a single forward pass. The two axes multiply together.

Episode Sharding (environment parallelism)

# Option A: use the helper script (launches all shards + auto-merges)
./scripts/run_sharded.sh -c configs/libero_spatial.yaml -n 50

# Option B: manual launch
vla-eval run -c configs/libero_spatial.yaml --shard-id 0 --num-shards 4 &
vla-eval run -c configs/libero_spatial.yaml --shard-id 1 --num-shards 4 &
# ... (each shard is a separate process)
wait
vla-eval merge -c configs/libero_spatial.yaml -o results/libero_spatial.json

Each shard gets a deterministic slice via round-robin. Results merge with episode-level deduplication — if a shard fails, re-run only that shard.
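The round-robin slice is easy to reason about: pair i goes to shard i mod N, so shards are disjoint and together cover every pair exactly once. A minimal sketch (hypothetical helper, not the harness's internal API):

```python
from itertools import product

def shard_episodes(tasks, episodes_per_task, shard_id, num_shards):
    """Deterministically assign (task, episode) pairs to one shard.

    Round-robin: pair i goes to shard i % num_shards, so each shard
    gets an equal slice and all shards together cover every pair
    exactly once -- which is what makes per-shard re-runs safe.
    """
    pairs = list(product(tasks, range(episodes_per_task)))
    return pairs[shard_id::num_shards]

# 10 tasks x 50 episodes split across 4 shards:
shards = [shard_episodes(range(10), 50, s, 4) for s in range(4)]
merged = {pair for shard in shards for pair in shard}
assert len(merged) == 500  # disjoint shards, complete coverage
```

Because the slice depends only on (shard_id, num_shards), a failed shard can be re-run in isolation and merged without touching the others.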

Batch Model Server (GPU parallelism)

Enable batching in the model server config by setting max_batch_size > 1:

args:
  max_batch_size: 16    # max observations per GPU forward pass (>1 enables batching)
  max_wait_time: 0.05   # seconds to wait before dispatching a partial batch
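The two knobs define a simple dispatch policy: flush a batch as soon as it is full, or once the first queued request has waited max_wait_time. A sketch of that loop (illustrative only, not the BatchPredictModelServer implementation):

```python
import queue
import time

def collect_batch(requests: "queue.Queue",
                  max_batch_size: int = 16,
                  max_wait_time: float = 0.05) -> list:
    """Gather up to max_batch_size requests for one GPU forward pass.

    Blocks until at least one request arrives, then keeps pulling
    until the batch is full or max_wait_time has elapsed since the
    first arrival -- at which point a partial batch is dispatched.
    """
    batch = [requests.get()]                  # wait for first request
    deadline = time.monotonic() + max_wait_time
    while len(batch) < max_batch_size:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break                             # dispatch partial batch
        try:
            batch.append(requests.get(timeout=remaining))
        except queue.Empty:
            break
    return batch
```

With this policy a saturated queue pays no latency penalty (full batches return immediately), while a lone request waits at most max_wait_time before being served alone.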

Tuning & Combined Effect

We tune parallelism with a demand/supply methodology: demand λ(N) measures environment throughput as a function of shard count N; supply μ(B) measures model throughput as a function of batch size B. The operating point is chosen so that λ(N) < 80% · μ(B*), leaving headroom at the model server to prevent queue buildup.
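Given measured λ and μ tables, picking the operating point reduces to a small search under the 80% headroom rule. A hypothetical sketch (the function name and the measurement values are illustrative, not from the harness):

```python
def pick_operating_point(demand: dict, supply: dict, headroom: float = 0.8):
    """Choose (num_shards, batch_size) so environment demand stays
    below `headroom` of model supply, maximizing throughput.

    demand: shard count N -> env throughput lambda(N) in obs/s
    supply: batch size B  -> model throughput mu(B) in obs/s
    Assumes at least one shard count fits under the budget.
    """
    b_star = max(supply, key=supply.get)   # batch size with highest mu
    budget = headroom * supply[b_star]     # 0.8 * mu(B*)
    feasible = [n for n, lam in demand.items() if lam < budget]
    n_star = max(feasible, key=lambda n: demand[n])
    return n_star, b_star

# Illustrative (made-up) measurements:
demand = {10: 110.0, 25: 260.0, 50: 486.0, 75: 640.0}
supply = {1: 11.0, 4: 180.0, 8: 410.0, 16: 800.0}
print(pick_operating_point(demand, supply))  # -> (50, 16)
```

Here μ(16) = 800 obs/s gives a budget of 640 obs/s, so N = 75 (exactly at the budget) is rejected and N = 50 is the largest shard count that leaves headroom.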

Figure: demand/supply throughput for LIBERO + CogACT on H100

Sharding and batching multiply together (DB-CogACT 7B, LIBERO Spatial, 1× H100-80GB):

            Sequential   Batch Parallel (50 shards, B=16)
Wall-clock  ~14 h        ~18 min
Throughput  ~11 obs/s    ~486 obs/s

2 000 episodes, 47× faster. The included benchmarking tools (experiments/bench_demand.py, experiments/bench_supply.py) measure λ and μ for any model + benchmark combination. See the Tuning Guide for worked examples and max_wait_time derivation.


Docker Images

All benchmark environments are packaged as standalone Docker images derived from a shared base image.

Image Size Benchmark Python Base
base 3.3 GB — 3.10 nvidia/cuda:12.1.1-runtime-ubuntu22.04
rlbench 4.7 GB RLBench 3.8 base
simpler 4.9 GB SimplerEnv 3.10 base
libero 6.0 GB LIBERO 3.8 base
libero-pro 6.2 GB LIBERO-Pro 3.8 base
robocerebra 6.3 GB RoboCerebra 3.8 base
calvin 9.5 GB CALVIN 3.8 base
kinetix 9.5 GB Kinetix 3.11 base
maniskill2 9.8 GB ManiSkill2 3.10 base
mikasa-robo 10.1 GB MIKASA-Robo 3.10 base
libero-mem 11.3 GB LIBERO-Mem 3.8 base
vlabench 17.7 GB VLABench 3.10 base
robotwin 28.6 GB RoboTwin 2.0 3.10 base
robocasa 35.6 GB RoboCasa 3.11 base

Pull (recommended):

docker pull ghcr.io/allenai/vla-evaluation-harness/libero:latest

Build locally (see docker/build.sh):

docker/build.sh          # build all (base first, then benchmarks)
docker/build.sh libero   # build one

Documentation

Document Description
Architecture Component descriptions, protocol, episode flow, configuration
Contributing Dev setup, adding benchmarks/models, PR workflow
Reproduction Reports Per-model evaluation results and reproducibility verdicts
RFCs Design proposals with rationale and status tracking
Design Philosophy Freshness, Convenience, Layered Abstraction, Quality, Reproducibility, Openness

Contributing

See CONTRIBUTING.md for dev setup and PR workflow.

PRs for any 🔜 item in the support matrix are welcome.


Citation

If you find this work useful, please cite:

@article{choi2026vlaeval,
  title={vla-eval: A Unified Evaluation Harness for Vision-Language-Action Models},
  author={Choi, Suhwan and Lee, Yunsung and Park, Yubeen and Kim, Chris Dongjoo and Krishna, Ranjay and Fox, Dieter and Yu, Youngjae},
  journal={arXiv preprint arXiv:2603.13966},
  year={2026}
}

License

Apache 2.0
