
Commit 8b7a6bc

blog(lerobot): NVIDIA collab healthcare blog (#3154)
1 parent a22fe81 commit 8b7a6bc

File tree

3 files changed: +198 −0 lines changed


_blog.yml

Lines changed: 9 additions & 0 deletions
```diff
@@ -6808,6 +6808,15 @@
   - lerobot
   - robotics
 
+- local: lerobotxnvidia-healthcare
+  title: "Building a Healthcare Robot from Simulation to Deployment with NVIDIA Isaac"
+  author: imstevenpmwork
+  thumbnail: /blog/assets/lerobotxnvidia-healthcare/thumbnail.png
+  date: Oct 29, 2025
+  tags:
+  - lerobot
+  - robotics
+
 - local: gpt-oss-on-intel-xeon
   title: "Google Cloud C4 Brings a 70% TCO improvement on GPT OSS with Intel and Hugging Face"
   author: Jiqing
```
assets/lerobotxnvidia-healthcare/thumbnail.png

190 KB (binary image file)

lerobotxnvidia-healthcare.md

Lines changed: 189 additions & 0 deletions
@@ -0,0 +1,189 @@
---
title: "Building a Healthcare Robot from Simulation to Deployment with NVIDIA Isaac"
thumbnail: /blog/assets/lerobotxnvidia-healthcare/thumbnail.png
authors:
- user: imstevenpmwork
- user: diazandr3s
---
# Building a Healthcare Robot from Simulation to Deployment with NVIDIA Isaac

## TL;DR

A hands-on guide to collecting data, training policies, and deploying autonomous medical robotics workflows on real hardware.
## Table of Contents

- [Building a Healthcare Robot from Simulation to Deployment with NVIDIA Isaac](#building-a-healthcare-robot-from-simulation-to-deployment-with-nvidia-isaac)
  - [TL;DR](#tldr)
  - [Table of Contents](#table-of-contents)
  - [Introduction](#introduction)
  - [SO-ARM Starter Workflow: Building an Embodied Surgical Assistant](#so-arm-starter-workflow-building-an-embodied-surgical-assistant)
    - [Technical Implementation](#technical-implementation)
    - [Sim2Real Mixed Training Approach](#sim2real-mixed-training-approach)
    - [Hardware Requirements](#hardware-requirements)
    - [Data Collection Implementation](#data-collection-implementation)
    - [Simulation Teleoperation Controls](#simulation-teleoperation-controls)
    - [Model Training Pipeline](#model-training-pipeline)
  - [End-to-End Sim Collect–Train–Eval Pipelines](#end-to-end-sim-collecttraineval-pipelines)
    - [Generate Synthetic Data in Simulation](#generate-synthetic-data-in-simulation)
    - [Train and Evaluate Policies](#train-and-evaluate-policies)
    - [Convert Models to TensorRT](#convert-models-to-tensorrt)
  - [Getting Started](#getting-started)
    - [Resources](#resources)
## Introduction

Simulation has long been a cornerstone of medical imaging, where it helps address the data gap. In healthcare robotics, however, it has until now often been too slow, too siloed, or too difficult to translate into real-world systems.

NVIDIA Isaac for Healthcare, a developer framework for AI healthcare robotics, helps developers solve these challenges by offering integrated data collection, training, and evaluation pipelines that work across both simulation and hardware. Specifically, the Isaac for Healthcare v0.4 release provides healthcare developers with an end-to-end [SO-ARM based starter workflow](https://github.com/isaac-for-healthcare/i4h-workflows/blob/main/workflows/so_arm_starter/README.md) and a [bring-your-own-operating-room tutorial](https://github.com/isaac-for-healthcare/i4h-workflows/blob/main/tutorials/assets/bring_your_own_or/README.md). The SO-ARM starter workflow lowers the barrier for MedTech developers to experience the full simulation-to-training-to-deployment workflow and to start building and validating autonomous systems on real hardware right away.

In this post, we'll walk through the starter workflow and its technical implementation details to help you build a surgical assistant robot in less time than previously imaginable.
## SO-ARM Starter Workflow: Building an Embodied Surgical Assistant

The SO-ARM starter workflow introduces a new way to explore surgical assistance tasks, providing developers with a complete end-to-end pipeline for autonomous surgical assistance:

* Collect real-world and synthetic data with SO-ARM using LeRobot
* Fine-tune GR00T N1.5, evaluate in Isaac Lab, then deploy to hardware

This workflow gives developers a safe, repeatable environment to train and refine assistive skills before moving into the operating room.
### Technical Implementation

The workflow implements a three-stage pipeline that integrates simulation and real hardware:

1. Data Collection: Mixed simulation and real-world teleoperation demonstrations using SO-ARM101 and LeRobot
2. Model Training: Fine-tuning GR00T N1.5 on combined datasets with dual-camera vision
3. Policy Deployment: Real-time inference on physical hardware with RTI DDS communication

Notably, over 93% of the data used for policy training was generated synthetically in simulation, underscoring the strength of simulation in bridging the robotics data gap.
### Sim2Real Mixed Training Approach

The workflow combines simulation and real-world data to address a fundamental tension: training robots purely in the real world is expensive and limited, while pure simulation often fails to capture real-world complexity. The approach uses approximately 70 simulation episodes to cover diverse scenarios and environmental variations, combined with 10–20 real-world episodes for authenticity and grounding. This mixed training produces policies that generalize better than those trained on either domain alone.
### Hardware Requirements

The workflow requires:

* GPU: RT Core-enabled architecture (Ampere or later) with ≥30 GB VRAM for GR00T N1.5 inference
* SO-ARM101 Follower: 6-DOF precision manipulator with dual-camera vision (wrist and room). The SO-ARM101 features WOWROBO vision components, including a wrist-mounted camera with a 3D-printed adapter
* SO-ARM101 Leader: 6-DOF teleoperation interface for expert demonstration collection

Notably, developers can run simulation, training, and deployment (which would otherwise call for three separate computers for physical AI) on a single [DGX Spark](https://www.nvidia.com/en-us/products/workstations/dgx-spark/).
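Before installing anything, it's worth confirming that your GPU meets the VRAM requirement. A quick check with NVIDIA's standard `nvidia-smi` tool:

```bash
# List GPU name and total VRAM; look for an Ampere-or-later card with >=30 GB.
nvidia-smi --query-gpu=name,memory.total --format=csv
```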
### Data Collection Implementation

![so100-healthcare-real-demo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot-blog/nvidia-healthcare/lerobotxnvidia-healthcare-real-demo.gif)

For real-world data collection with SO-ARM101 hardware, or any other version supported in LeRobot:

```bash
lerobot-record \
  --robot.type=so101_follower \
  --robot.port=<follower_port_id> \
  --robot.cameras="{wrist: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, room: {type: opencv, index_or_path: 2, width: 640, height: 480, fps: 30}}" \
  --robot.id=so101_follower_arm \
  --teleop.type=so101_leader \
  --teleop.port=<leader_port_id> \
  --teleop.id=so101_leader_arm \
  --dataset.repo_id=<user>/surgical_assistance \
  --dataset.num_episodes=15 \
  --dataset.single_task="Prepare and hand surgical instruments to surgeon"
```
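If you are unsure which serial ports and camera indices to pass above, recent LeRobot releases include small discovery utilities. Assuming a LeRobot 0.4-era install:

```bash
# Print candidate serial ports for the leader and follower arms.
lerobot-find-port

# Enumerate connected cameras and their OpenCV indices.
lerobot-find-cameras opencv
```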
For simulation-based data collection:

![so100-healthcare-sim-demo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot-blog/nvidia-healthcare/lerobotxnvidia-healthcare-sim-demo.gif)

```bash
# With keyboard teleoperation
python -m simulation.environments.teleoperation_record \
  --enable_cameras \
  --record \
  --dataset_path=/path/to/save/dataset.hdf5 \
  --teleop_device=keyboard

# With SO-ARM101 leader arm
python -m simulation.environments.teleoperation_record \
  --port=<your_leader_arm_port_id> \
  --enable_cameras \
  --record \
  --dataset_path=/path/to/save/dataset.hdf5
```
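Before converting anything, it can help to sanity-check the recorded file. Assuming the standard `hdf5-tools` package is installed (the exact group names depend on the recorder):

```bash
# Recursively list the groups and datasets (observations, actions, ...) in the file.
h5ls -r /path/to/save/dataset.hdf5
```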
### Simulation Teleoperation Controls

For users without physical SO-ARM101 hardware, the workflow provides keyboard-based teleoperation with the following joint controls:

* Joint 1 (shoulder_pan): Q (+) / U (-)
* Joint 2 (shoulder_lift): W (+) / I (-)
* Joint 3 (elbow_flex): E (+) / O (-)
* Joint 4 (wrist_flex): A (+) / J (-)
* Joint 5 (wrist_roll): S (+) / K (-)
* Joint 6 (gripper): D (+) / L (-)
* R Key: Reset recording environment
* N Key: Mark episode as successful
### Model Training Pipeline

After collecting both simulation and real-world data, convert and combine datasets for training:

```bash
# Convert simulation data to LeRobot format
python -m training.hdf5_to_lerobot \
  --repo_id=surgical_assistance_dataset \
  --hdf5_path=/path/to/your/sim_dataset.hdf5 \
  --task_description="Autonomous surgical instrument handling and preparation"

# Fine-tune GR00T N1.5 on mixed dataset
python -m training.gr00t_n1_5.train \
  --dataset_path /path/to/your/surgical_assistance_dataset \
  --output_dir /path/to/surgical_checkpoints \
  --data_config so100_dualcam
```
The trained model processes natural language instructions such as "Prepare the scalpel for the surgeon" or "Hand me the forceps" and executes the corresponding robotic actions. With LeRobot's latest release (0.4.0), you can fine-tune GR00T N1.5 natively in LeRobot!
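As a rough sketch of what that native fine-tuning could look like through LeRobot's `lerobot-train` entry point; the policy type below is a deliberate placeholder, so consult `lerobot-train --help` and the LeRobot docs for the exact GR00T identifier and flags:

```bash
# Hedged sketch of native fine-tuning with LeRobot 0.4's trainer.
# <gr00t_policy_type> is a placeholder, not a confirmed identifier.
lerobot-train \
  --dataset.repo_id=<user>/surgical_assistance \
  --policy.type=<gr00t_policy_type> \
  --output_dir=outputs/surgical_checkpoints
```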
## End-to-End Sim Collect–Train–Eval Pipelines

Simulation is most powerful when it's part of a loop: collect → train → evaluate → deploy.

With v0.3, Isaac Lab supports this full pipeline:
### Generate Synthetic Data in Simulation

* Teleoperate robots using keyboard or hardware controllers
* Capture multi-camera observations, robot states, and actions
* Create diverse datasets with edge cases impossible to collect safely in real environments
### Train and Evaluate Policies

* Deep integration with Isaac Lab's RL framework for PPO training (see the sketch after this list)
* Parallel environments (thousands of simulations simultaneously)
* Built-in trajectory analysis and success metrics
* Statistical validation across varied scenarios
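For orientation, a PPO run with massively parallel environments in Isaac Lab typically looks like the sketch below. The script path and task name are assumptions that differ across Isaac Lab releases, so adapt them to your checkout:

```bash
# Hedged sketch: PPO training across thousands of parallel simulated environments.
# The script location and task name vary between Isaac Lab versions.
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py \
  --task=<your_task_name> \
  --num_envs=4096 \
  --headless
```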
### Convert Models to TensorRT

* Automatic optimization for production deployment (see the example after this list)
* Support for dynamic shapes and multi-camera inference
* Benchmarking tools to verify real-time performance
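The workflow ships its own conversion tooling, but the underlying step is standard TensorRT engine building. As a hedged illustration, assuming you have already exported a policy to ONNX (file names below are placeholders):

```bash
# Build an FP16 TensorRT engine from an exported ONNX policy and report timings.
# policy.onnx / policy.plan are placeholder paths, not files the workflow guarantees.
trtexec --onnx=policy.onnx --saveEngine=policy.plan --fp16
```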
This reduces time from experiment to deployment and makes sim2real a practical part of daily development.
## Getting Started

The Isaac for Healthcare SO-ARM Starter Workflow is available now. To get started:

1. Clone the repository: `git clone https://github.com/isaac-for-healthcare/i4h-workflows.git`
2. Choose a workflow: Start with the SO-ARM Starter Workflow for surgical assistance, or explore the other workflows
3. Run the setup: Each workflow includes an automated setup script (e.g., `tools/env_setup_so_arm_starter.sh`); the combined commands are shown below
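Put together, a first session looks roughly like this (the setup script name comes from the SO-ARM starter workflow; other workflows ship their own scripts):

```bash
# Clone the workflows repository and run the SO-ARM starter environment setup.
git clone https://github.com/isaac-for-healthcare/i4h-workflows.git
cd i4h-workflows
bash tools/env_setup_so_arm_starter.sh
```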
### Resources

* [GitHub Repository](https://github.com/isaac-for-healthcare/i4h-workflows): Complete workflow implementations
* [Documentation](https://isaac-for-healthcare.github.io/i4h-docs/): Setup and usage guides
* [GR00T Models](https://huggingface.co/nvidia/GR00T-N1.5-3B): Pre-trained foundation models
* [Hardware Guides](https://huggingface.co/docs/lerobot/so101): SO-ARM101 setup instructions
* [LeRobot Repository](https://github.com/huggingface/lerobot): End-to-end robotics learning
