
# Physical AI Studio

Train and deploy Vision-Language-Action (VLA) models for robotic imitation learning



## What is Physical AI Studio?

Physical AI Studio is an end-to-end framework for teaching robots to perform tasks through imitation learning from human demonstrations.
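To make "imitation learning from human demonstrations" concrete: in its simplest form (behavioral cloning), a policy is fit to reproduce the actions a human demonstrated for each observation. The sketch below is purely illustrative — a toy linear least-squares fit, not the library's implementation:

```python
import numpy as np

# Toy demonstration data: observations paired with human-demonstrated actions.
rng = np.random.default_rng(0)
obs = rng.normal(size=(100, 4))          # 100 timesteps, 4-dim observations
true_w = np.array([[0.5], [-1.0], [0.3], [2.0]])
actions = obs @ true_w                   # 1-dim demonstrated actions

# Behavioral cloning: fit a policy mapping observations to actions by
# minimizing mean squared error (here, a closed-form least-squares fit).
w, *_ = np.linalg.lstsq(obs, actions, rcond=None)
pred = obs @ w
mse = float(np.mean((pred - actions) ** 2))
print(f"imitation MSE: {mse:.6f}")       # near zero: policy matches the demos
```

Real VLA policies replace the linear map with a large vision-language-action network, but the training objective is the same idea: match the demonstrations.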

## Key Features

- **End-to-End Pipeline** - From demonstration recording to robot deployment
- **State-of-the-Art Policies** - Native implementations of ACT, Pi0, Pi0.5, SmolVLA, and GR00T, plus the full LeRobot policy zoo
- **Flexible Interface** - Use the Python API, CLI, or GUI
- **Production Export** - Deploy to OpenVINO, ONNX, or Torch on any hardware
- **Standardized Benchmarks** - Evaluate on benchmarks such as LIBERO and PushT
- **Built on Lightning** - PyTorch Lightning for distributed training, mixed precision, and more

## Quick Start

### Application (GUI)

For users who prefer a visual interface for the end-to-end workflow:

Application demo

Application Documentation →

### Docker

Run the full application (backend + UI) in a single container:

```shell
# Clone the repository
git clone https://github.com/open-edge-platform/physical-ai-studio.git
cd physical-ai-studio

# Set up and run the Docker services
cd application/docker
cp .env.example .env
docker compose --profile xpu up  # or use --profile cuda, --profile cpu
```

The application runs at http://localhost:7860. See the Docker README for hardware configuration (Intel XPU, NVIDIA CUDA) and device setup.

### Native: installation & running

Run the application in development mode using the uv package manager and Node v24 (we recommend using nvm):

```shell
# Clone the repository
git clone https://github.com/open-edge-platform/physical-ai-studio.git
cd physical-ai-studio

# Install and run the backend
cd application/backend && uv sync --extra xpu  # or --extra cpu, --extra cuda
./run.sh

# In a new terminal: install and run the UI
cd application/ui
nvm use
npm install
# Fetch the API spec from the backend, generate the types, and start the frontend
npm run build:api:download && npm run build:api && npm run start
```

Open http://localhost:3000 in your browser.

### Library (Python/CLI)

For programmatic control over training, benchmarking, and deployment, via both the Python API and the CLI:

```shell
pip install physicalai-train
```
#### Training

```python
from physicalai.data import LeRobotDataModule
from physicalai.policies import ACT
from physicalai.train import Trainer

datamodule = LeRobotDataModule(repo_id="lerobot/aloha_sim_transfer_cube_human")
model = ACT()
trainer = Trainer(max_epochs=100)
trainer.fit(model=model, datamodule=datamodule)
```
#### Benchmark

```python
from physicalai.benchmark import LiberoBenchmark
from physicalai.policies import ACT

policy = ACT.load_from_checkpoint("experiments/lightning_logs/version_0/checkpoints/last.ckpt")
benchmark = LiberoBenchmark(task_suite="libero_10", num_episodes=20)
results = benchmark.evaluate(policy)
print(f"Success rate: {results.aggregate_success_rate:.1f}%")
```
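The aggregate success rate reported above is simply the percentage of the `num_episodes` rollouts that succeed. A stand-alone sketch of that aggregation (the helper function here is illustrative, not part of the library's API):

```python
# Illustrative only: how a success rate aggregates over benchmark rollouts.
def aggregate_success_rate(episode_successes: list[bool]) -> float:
    """Percentage of episodes that ended in success."""
    return 100.0 * sum(episode_successes) / len(episode_successes)

outcomes = [True] * 15 + [False] * 5     # e.g. 15 of 20 rollouts succeeded
rate = aggregate_success_rate(outcomes)
print(f"Success rate: {rate:.1f}%")      # -> Success rate: 75.0%
```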
#### Export

```python
from physicalai.export import get_available_backends
from physicalai.policies import ACT

# See available backends
print(get_available_backends())  # ['onnx', 'openvino', 'torch', 'torch_export_ir']

# Export to OpenVINO
policy = ACT.load_from_checkpoint("experiments/lightning_logs/version_0/checkpoints/last.ckpt")
policy.export("./policy", backend="openvino")
```
#### Inference

```python
from physicalai.inference import InferenceModel

policy = InferenceModel.load("./policy")

# `env` is assumed to be an existing gymnasium-style environment
obs, info = env.reset()
done = False

while not done:
    action = policy.select_action(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```
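The inference loop assumes a gymnasium-style environment with `reset()` and `step()` methods. A self-contained sketch with stand-in objects (dummy policy and environment, purely for illustrating the control flow):

```python
class DummyPolicy:
    """Stand-in for a loaded InferenceModel: returns a fixed action."""
    def select_action(self, obs):
        return 0.0

class DummyEnv:
    """Stand-in gymnasium-style environment that terminates after 3 steps."""
    def __init__(self):
        self.t = 0
    def reset(self):
        self.t = 0
        return [0.0], {}  # (observation, info)
    def step(self, action):
        self.t += 1
        terminated = self.t >= 3
        # (observation, reward, terminated, truncated, info)
        return [0.0], 0.0, terminated, False, {}

policy, env = DummyPolicy(), DummyEnv()
obs, info = env.reset()
done, steps = False, 0
while not done:
    action = policy.select_action(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    steps += 1
print(steps)  # 3
```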
#### CLI Usage

```shell
# Train
physicalai fit --config configs/physicalai/act.yaml

# Evaluate
physicalai benchmark --config configs/benchmark/libero.yaml --ckpt_path model.ckpt

# Export (Python API only - CLI coming soon)
# Use: policy.export("./policy", backend="openvino")
```

Library Documentation →

## Documentation

| Resource | Description |
| --- | --- |
| Library Docs | API reference, guides, and examples |
| Application Docs | GUI setup and usage |
| Contributing | Contributing and development setup |

## Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.