62 commits
282626d
Create initial GR00T-N1.6 directory structure
AravindhShan-nv Dec 22, 2025
c1ccffe
Port DiT, AlternateVLDiT, and embodiment MLP modules
AravindhShan-nv Dec 22, 2025
6cff398
Port Eagle3 backbone and copy model config assets
AravindhShan-nv Dec 22, 2025
0f3c4bc
Create GrootN16Config with N1.6 parameters
AravindhShan-nv Dec 22, 2025
35d15b8
Port Gr00tN1d6 and Gr00tN1d6ActionHead as groot_n1d6.py
AravindhShan-nv Dec 22, 2025
aec990e
Change GrootN16 to Gr00TN1d6 for consistent naming
AravindhShan-nv Dec 22, 2025
0aaa693
Implement modeling_gr00t_n1d6.py. Fix naming groot -> gr00t
AravindhShan-nv Dec 23, 2025
7188085
Implement processor_gr00t_n1d6. Copy over core util fuctions from ori…
AravindhShan-nv Dec 23, 2025
5e42db5
Register groot-n1d6 in factory.py and configs/policies.py
AravindhShan-nv Dec 23, 2025
8d33bd2
first sign of life with training
nv-sachdevkartik Dec 28, 2025
3e960aa
corrected batch dim
nv-sachdevkartik Dec 29, 2025
8f7f96e
tested closed loop evaluation
nv-sachdevkartik Dec 29, 2025
f5fc9c8
minor fix
nv-sachdevkartik Dec 29, 2025
5aadf2b
corrected formating
nv-sachdevkartik Dec 29, 2025
c6c16ab
updated docs
nv-sachdevkartik Dec 30, 2025
fe84de1
tested dummy output match
nv-sachdevkartik Dec 30, 2025
376ddc4
tested ground truth and prediction match
nv-sachdevkartik Dec 30, 2025
bac81c1
fixed open loop eval
nv-sachdevkartik Dec 30, 2025
e3019fb
Save normalization stats during training and load them during inferen…
AravindhShan-nv Jan 6, 2026
38eed2b
Move unnormalizer step from predict_action func to the processor pipe…
AravindhShan-nv Jan 6, 2026
f234ab6
Fix embodiment_id bug - remove hardcoding.
AravindhShan-nv Jan 7, 2026
62eedfa
Do unnormalization in the open_loop_eval correctly.
AravindhShan-nv Jan 8, 2026
d37f5cc
init fix processor
yizhouzhao Jan 14, 2026
7a0d8df
dimention state img
yizhouzhao Jan 14, 2026
7c805fd
fixed training
yizhouzhao Jan 16, 2026
ea16bef
correct testing
yizhouzhao Jan 22, 2026
89118f3
uncomment ipdb
yizhouzhao Jan 22, 2026
04cd208
Adding docs for training and inference flow.
AravindhShan-nv Jan 27, 2026
4acc9a6
New open loop eval script
AravindhShan-nv Jan 28, 2026
ee894b4
Fixes to OpenLoop eval script
AravindhShan-nv Feb 1, 2026
629d674
Compute the relative_action stats correctly before training starts. V…
AravindhShan-nv Feb 2, 2026
445fb2e
Fix openloop_eval_v4. Make modality_config construction uniform.
AravindhShan-nv Feb 3, 2026
3c80d57
Pass in raw_state correctly durin inference (real_robot)
AravindhShan-nv Feb 6, 2026
1559ba4
Add debug logs for inference pipeline
AravindhShan-nv Feb 6, 2026
8dd662c
Fix select_action function bug
AravindhShan-nv Feb 6, 2026
bf588ce
Fix relative action stats to match original GR00T (per-timestep, chun…
AravindhShan-nv Feb 8, 2026
f5f788a
Fix open_loop_eval preprocessor crash from per-timestep action stats …
AravindhShan-nv Feb 9, 2026
2ddf5a6
Add Libero Panda Modality Configs
AravindhShan-nv Feb 9, 2026
30f8d2f
Fix processor_config_path loading bug
AravindhShan-nv Feb 9, 2026
ddb5348
Fix attribute lookup name (embodiment_id_mapping)
AravindhShan-nv Feb 9, 2026
f92818e
Remove Debug prints from preprocessor
AravindhShan-nv Feb 11, 2026
9fe6791
Cleanup debuging statements
AravindhShan-nv Feb 11, 2026
e86890d
Update Groot N1d6 docs with latest commands
AravindhShan-nv Feb 12, 2026
0e8a0ae
Fix default n_action_steps=8. Fix action_horizon logic
AravindhShan-nv Feb 18, 2026
2e665cd
Fix action_mask dimention mismatch
AravindhShan-nv Feb 18, 2026
4f3636b
Update hyperparams (color_jitter, grad_clip_norm, betas, decay_lr) to…
AravindhShan-nv Feb 18, 2026
403699f
Remove debug statements
AravindhShan-nv Feb 19, 2026
fc2be33
Update gr00tn1d6 docs with cmake, ffmpeg install instructions
AravindhShan-nv Feb 19, 2026
dd24c43
Implement correct action chunk execution by skipping preprocessing fo…
AravindhShan-nv Feb 20, 2026
650146e
Remove policy specific obs prep function (grootn16)
AravindhShan-nv Mar 1, 2026
83a7797
Refactor __call__ in processor groot
AravindhShan-nv Mar 1, 2026
e21fe81
Move eagle config files to separate repo. Implement config file down…
AravindhShan-nv Mar 4, 2026
3e5fae6
Remove unused code & docs
AravindhShan-nv Mar 4, 2026
002ed97
Remove unused code, clean up code
AravindhShan-nv Mar 4, 2026
d5c5ec5
Update ReadMe
AravindhShan-nv Mar 5, 2026
95c0be6
Update groot n16 readme
AravindhShan-nv Mar 5, 2026
894b167
Fix tests, remove open loop scripts.
AravindhShan-nv Mar 5, 2026
448f689
Run pre commit lint check
AravindhShan-nv Mar 5, 2026
587a60e
More linter fixes
AravindhShan-nv Mar 5, 2026
435cbd7
feat : Add gr00t N1.6 policy to LeRobot
AravindhShan-nv Mar 5, 2026
b30f168
Merge branch 'huggingface:main' into dev-groot-n16
AravindhShan-nv Mar 6, 2026
4d119ed
Revert "Merge branch 'huggingface:main' into dev-groot-n16"
AravindhShan-nv Mar 12, 2026
9 changes: 1 addition & 8 deletions .github/workflows/fast_tests.yml
@@ -44,7 +44,7 @@ permissions:
 # Sets up the environment variables
 env:
   UV_VERSION: "0.8.0"
-  PYTHON_VERSION: "3.12"
+  PYTHON_VERSION: "3.10"

 # Ensures that only the latest commit for a PR or branch is built, canceling older runs.
 concurrency:
@@ -61,7 +61,6 @@ jobs:
       MUJOCO_GL: egl
       HF_HOME: /mnt/cache/.cache/huggingface
       HF_LEROBOT_HOME: /mnt/cache/.cache/huggingface/lerobot
-      HF_USER_TOKEN: ${{ secrets.LEROBOT_HF_USER }}
     steps:
       - uses: actions/checkout@v6
         with:
@@ -90,11 +89,5 @@ jobs:
       - name: Install lerobot with test extras
         run: uv sync --extra "test"

-      - name: Login to Hugging Face
-        if: env.HF_USER_TOKEN != ''
-        run: |
-          uv run hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
-          uv run hf auth whoami
-
       - name: Run pytest
         run: uv run pytest tests -vv --maxfail=10
17 changes: 2 additions & 15 deletions .github/workflows/full_tests.yml
@@ -37,7 +37,7 @@ permissions:
 # Sets up the environment variables
 env:
   UV_VERSION: "0.8.0"
-  PYTHON_VERSION: "3.12"
+  PYTHON_VERSION: "3.10"
   DOCKER_IMAGE_NAME: huggingface/lerobot-gpu

 # Ensures that only the latest action is built, canceling older runs.
@@ -60,7 +60,6 @@ jobs:
       MUJOCO_GL: egl
       HF_HOME: /mnt/cache/.cache/huggingface
       HF_LEROBOT_HOME: /mnt/cache/.cache/huggingface/lerobot
-      HF_USER_TOKEN: ${{ secrets.LEROBOT_HF_USER }}
     steps:
       - uses: actions/checkout@v6
         with:
@@ -88,12 +87,6 @@ jobs:
       - name: Install lerobot with all extras
         run: uv sync --extra all # TODO(Steven): Make flash-attn optional

-      - name: Login to Hugging Face
-        if: env.HF_USER_TOKEN != ''
-        run: |
-          uv run hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
-          uv run hf auth whoami
-
       - name: Run pytest (all extras)
         run: uv run pytest tests -vv --maxfail=10

@@ -169,7 +162,6 @@ jobs:
       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
       TORCH_HOME: /home/user_lerobot/.cache/torch
       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
-      HF_USER_TOKEN: ${{ secrets.LEROBOT_HF_USER }}
     container:
       image: ${{ needs.build-and-push-docker.outputs.image_tag }} # zizmor: ignore[unpinned-images]
       options: --gpus all --shm-size "16gb"
@@ -181,13 +173,8 @@ jobs:
         shell: bash
         working-directory: /lerobot
     steps:
-      - name: Login to Hugging Face
-        if: env.HF_USER_TOKEN != ''
-        run: |
-          hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
-          hf auth whoami
       - name: Fix ptxas permissions
-        run: chmod +x /lerobot/.venv/lib/python3.12/site-packages/triton/backends/nvidia/bin/ptxas
+        run: chmod +x /lerobot/.venv/lib/python3.10/site-packages/triton/backends/nvidia/bin/ptxas
       - name: Run pytest on GPU
         run: pytest tests -vv --maxfail=10
       - name: Run end-to-end tests
24 changes: 4 additions & 20 deletions .github/workflows/nightly.yml
@@ -28,7 +28,7 @@ on:
 # Sets up the environment variables
 env:
   UV_VERSION: "0.8.0"
-  PYTHON_VERSION: "3.12"
+  PYTHON_VERSION: "3.10"
   DOCKER_IMAGE_NAME_CPU: huggingface/lerobot-cpu:latest
   DOCKER_IMAGE_NAME_GPU: huggingface/lerobot-gpu:latest

@@ -119,7 +119,6 @@ jobs:
       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
       TORCH_HOME: /home/user_lerobot/.cache/torch
       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
-      HF_USER_TOKEN: ${{ secrets.LEROBOT_HF_USER }}
     container:
       image: ${{ needs.build-docker-cpu-nightly.outputs.image_tag }} # zizmor: ignore[unpinned-images]
       options: --shm-size "16gb"
@@ -131,11 +130,6 @@ jobs:
         shell: bash
         working-directory: /lerobot
     steps:
-      - name: Login to Hugging Face
-        if: env.HF_USER_TOKEN != ''
-        run: |
-          hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
-          hf auth whoami
       - name: Run pytest on CPU
         run: pytest tests -vv --maxfail=10
       - name: Run end-to-end tests
@@ -152,7 +146,6 @@ jobs:
       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
       TORCH_HOME: /home/user_lerobot/.cache/torch
       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
-      HF_USER_TOKEN: ${{ secrets.LEROBOT_HF_USER }}
     container:
       image: ${{ needs.build-docker-gpu-nightly.outputs.image_tag }} # zizmor: ignore[unpinned-images]
       options: --gpus all --shm-size "16gb"
@@ -164,11 +157,6 @@ jobs:
         shell: bash
         working-directory: /lerobot
     steps:
-      - name: Login to Hugging Face
-        if: env.HF_USER_TOKEN != ''
-        run: |
-          hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
-          hf auth whoami
       - name: Run pytest on GPU
         run: pytest tests -vv --maxfail=10
       - name: Run end-to-end tests
@@ -186,7 +174,6 @@ jobs:
       TORCH_HOME: /home/user_lerobot/.cache/torch
       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
       CUDA_VISIBLE_DEVICES: "0,1,2,3"
-      HF_USER_TOKEN: ${{ secrets.LEROBOT_HF_USER }}
     container:
       image: ${{ needs.build-docker-gpu-nightly.outputs.image_tag }} # zizmor: ignore[unpinned-images]
       options: --gpus all --shm-size "16gb"
@@ -198,15 +185,12 @@ jobs:
         shell: bash
         working-directory: /lerobot
     steps:
-      - name: Login to Hugging Face
-        if: env.HF_USER_TOKEN != ''
-        run: |
-          hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
-          hf auth whoami
       - name: Verify GPU availability
         run: |
           nvidia-smi
           python -c "import torch; print(f'PyTorch CUDA available: {torch.cuda.is_available()}'); print(f'Number of GPUs: {torch.cuda.device_count()}')"

       - name: Run multi-GPU training tests
-        run: pytest -vv tests/training/
+        # TODO(Steven): Investigate why motors tests are failing in multi-GPU setup
+        run: pytest tests -vv --maxfail=10 --ignore=tests/motors/
+        timeout-minutes: 10
2 changes: 1 addition & 1 deletion .github/workflows/quality.yml
@@ -50,7 +50,7 @@ jobs:
       - name: Set up Python
         uses: actions/setup-python@v6
         with:
-          python-version: '3.12'
+          python-version: '3.10'

       - name: Run pre-commit hooks
         uses: pre-commit/action@v3.0.1 # zizmor: ignore[unpinned-uses]
12 changes: 10 additions & 2 deletions .github/workflows/release.yml
@@ -22,7 +22,7 @@ on:
 # Sets up the environment variables
 env:
   UV_VERSION: "0.8.0"
-  PYTHON_VERSION: "3.12"
+  PYTHON_VERSION: "3.10"

 jobs:
   # This job builds the Python package and publishes it to PyPI
@@ -45,7 +45,7 @@ jobs:
       - name: Set up Python
         uses: actions/setup-python@v6
         with:
-          python-version: '3.12'
+          python-version: '3.10'

       - name: Extract Version
         id: extract_info
@@ -83,6 +83,14 @@ jobs:
             exit 1
           fi

+      - name: Remove Tags with Git dependencies
+        # TODO(Steven): Temporary patch to remove pi from PyPi 0.4.0 release due to its reliance on git dependencies.
+        run: |
+          echo "::info:: Checking for Git dependencies to remove from pyproject.toml..."
+          grep -E '@ git\+https|lerobot\[pi\]' pyproject.toml | sed 's/^/::warning:: Removing line: /' || true
+          sed -E -i '/@ git\+https|lerobot\[pi\]/d' pyproject.toml
+          echo "::info:: Git dependencies removed. Proceeding with build."
+
       - name: Install build dependencies
         run: python -m pip install build

15 changes: 2 additions & 13 deletions .github/workflows/unbound_deps_tests.yml
@@ -29,7 +29,7 @@ permissions:
 # Sets up the environment variables
 env:
   UV_VERSION: "0.8.0"
-  PYTHON_VERSION: "3.12"
+  PYTHON_VERSION: "3.10"
   DOCKER_IMAGE_NAME: huggingface/lerobot-gpu:unbound

 # Ensures that only the latest action is built, canceling older runs.
@@ -48,7 +48,6 @@ jobs:
       MUJOCO_GL: egl
       HF_HOME: /mnt/cache/.cache/huggingface
       HF_LEROBOT_HOME: /mnt/cache/.cache/huggingface/lerobot
-      HF_USER_TOKEN: ${{ secrets.LEROBOT_HF_USER }}
     steps:
       - uses: actions/checkout@v6
         with:
@@ -80,11 +79,7 @@ jobs:

       - name: Install lerobot with all extras
         run: uv sync --extra all # TODO(Steven): Make flash-attn optional
-      - name: Login to Hugging Face
-        if: env.HF_USER_TOKEN != ''
-        run: |
-          uv run hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
-          uv run hf auth whoami
+
       - name: Run pytest (all extras)
         run: uv run pytest tests -vv

@@ -142,7 +137,6 @@ jobs:
       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
       TORCH_HOME: /home/user_lerobot/.cache/torch
       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
-      HF_USER_TOKEN: ${{ secrets.LEROBOT_HF_USER }}
     container:
       image: ${{ needs.build-and-push-docker.outputs.image_tag }} # zizmor: ignore[unpinned-images]
       options: --gpus all --shm-size "16gb"
@@ -154,11 +148,6 @@ jobs:
         shell: bash
         working-directory: /lerobot
     steps:
-      - name: Login to Hugging Face
-        if: env.HF_USER_TOKEN != ''
-        run: |
-          hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
-          hf auth whoami
       - name: Run pytest on GPU
         run: pytest tests -vv
       - name: Run end-to-end tests
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -13,7 +13,7 @@
 # limitations under the License.

 default_language_version:
-  python: python3.12
+  python: python3.10

 exclude: "tests/artifacts/.*\\.safetensors$"

@@ -55,7 +55,7 @@ repos:
     rev: v3.21.0
     hooks:
       - id: pyupgrade
-        args: [--py312-plus]
+        args: [--py310-plus]

   ##### Markdown Quality #####
   - repo: https://github.com/rbubley/mirrors-prettier
2 changes: 1 addition & 1 deletion docker/Dockerfile.internal
@@ -24,7 +24,7 @@ ARG OS_VERSION=22.04
 FROM nvidia/cuda:${CUDA_VERSION}-base-ubuntu${OS_VERSION}

 # Define Python version argument
-ARG PYTHON_VERSION=3.12
+ARG PYTHON_VERSION=3.10

 # Configure environment variables
 ENV DEBIAN_FRONTEND=noninteractive \
2 changes: 1 addition & 1 deletion docker/Dockerfile.user
@@ -19,7 +19,7 @@
 # docker run -it --rm lerobot-user

 # Configure the base image
-ARG PYTHON_VERSION=3.12
+ARG PYTHON_VERSION=3.10
 FROM python:${PYTHON_VERSION}-slim

 # Configure environment variables
8 changes: 4 additions & 4 deletions docs/source/bring_your_own_policies.mdx
@@ -32,7 +32,7 @@ version = "0.1.0"
 dependencies = [
     # your policy-specific dependencies
 ]
-requires-python = ">= 3.12"
+requires-python = ">= 3.11"

 [build-system]
 build-backend = # your-build-backend
@@ -82,7 +82,7 @@ Create your policy implementation by inheriting from LeRobot's base `PreTrainedP
 # modeling_my_custom_policy.py
 import torch
 import torch.nn as nn
-from typing import Any
+from typing import Dict, Any

 from lerobot.policies.pretrained import PreTrainedPolicy
 from .configuration_my_custom_policy import MyCustomPolicyConfig
@@ -91,7 +91,7 @@ class MyCustomPolicy(PreTrainedPolicy):
     config_class = MyCustomPolicyConfig
     name = "my_custom_policy"

-    def __init__(self, config: MyCustomPolicyConfig, dataset_stats: dict[str, Any] = None):
+    def __init__(self, config: MyCustomPolicyConfig, dataset_stats: Dict[str, Any] = None):
         super().__init__(config, dataset_stats)
         ...
 ```
@@ -102,7 +102,7 @@ Create processor functions:

 ```python
 # processor_my_custom_policy.py
-from typing import Any
+from typing import Dict, Any
 import torch

2 changes: 1 addition & 1 deletion docs/source/earthrover_mini_plus.mdx
@@ -13,7 +13,7 @@ The EarthRover Mini Plus is a fully open source mobile robot that connects throu
 ### Hardware

 - EarthRover Mini robot
-- Computer with Python 3.12 or newer
+- Computer with Python 3.10 or newer
 - Internet connection

 ### Setting Up the Frodobots SDK