
Imitation learning benchmark focusing on complex locomotion tasks using MuJoCo.


🚀 Latest News: A major release (v1.0) just dropped! 🎉
LocoMuJoCo now supports MJX and comes with new JAX algorithms. We also added many new environments and over 22,000 datasets! 🚀

LocoMuJoCo is an imitation learning benchmark specifically designed for whole-body control.
It features a diverse set of environments, including quadrupeds, humanoids, and (musculo-)skeletal human models, each provided with comprehensive datasets (over 22,000 samples per humanoid).

Although primarily focused on imitation learning, LocoMuJoCo also supports custom reward function classes,
making it suitable for pure reinforcement learning as well.
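
As a rough illustration, a custom reward for pure RL might look like the sketch below. The class name, call signature, and state layout are assumptions made for this example, not the exact LocoMuJoCo interface; see the documentation for the actual reward base class and how to plug it into an environment.

import numpy as np

# Hypothetical sketch -- the call signature and the assumption that the
# root's forward velocity sits at index 0 of the observation are made up
# for illustration; the real LocoMuJoCo reward interface is in the docs.
class ForwardVelocityReward:
    """Penalizes deviation from a target forward velocity."""

    def __init__(self, target_velocity=1.25):
        self.target_velocity = target_velocity  # m/s

    def __call__(self, state, action, next_state):
        forward_vel = next_state[0]
        return -float(np.square(forward_vel - self.target_velocity))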

Key Advantages

✅ Supports MuJoCo (single environment) and MJX (parallel environments)
✅ Includes 12 humanoid and 4 quadruped environments, featuring 4 biomechanical human models
✅ Clean single-file JAX algorithms for quick benchmarking (PPO, GAIL, AMP, DeepMimic)
✅ Combines training and environment stepping in a single JIT‑compiled function for lightning‑fast training 🚀
✅ Over 22,000 motion capture datasets (AMASS, LAFAN1, native LocoMuJoCo) retargeted to each humanoid
✅ Robot-to-robot retargeting allows any existing dataset to be retargeted from one robot to another
✅ Powerful trajectory comparison metrics including dynamic time warping and discrete Fréchet distance, all in JAX
✅ Interface for Gymnasium (see the sketch after this list)
✅ Built-in domain and terrain randomization
✅ Modular design: define, swap, and reuse components like observation types, reward functions, terminal state handlers, and domain randomization
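
A minimal sketch of the Gymnasium interface is shown below. The "LocoMujoco" registration id and the env_name argument follow the pattern of earlier releases and are assumptions here; check the documentation for the current interface.

import gymnasium as gym
import loco_mujoco  # assumed to register the environments with Gymnasium on import

# Assumed registration id and env_name; verify against the documentation.
env = gym.make("LocoMujoco", env_name="UnitreeH1")

obs, info = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # placeholder random policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
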
Documentation


Installation

Clone this repo and do an editable installation:

git clone https://github.com/kscalelabs/loco-mujoco.git
cd loco-mujoco
pip install -e .

By default, this installs the CPU version of JAX. If you want to run JAX on the GPU, you need to install the CUDA-enabled build:

pip install "jax[cuda12]"
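
To confirm that JAX actually sees the GPU, you can list the visible devices; this uses only the standard JAX API:

python -c "import jax; print(jax.devices())"  # should list GPU devices rather than only CpuDevice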

Note

If you want to run the MyoSkeleton environment, you additionally need to run loco-mujoco-myomodel-init to accept the license and download the model.

Datasets

LocoMuJoCo provides three sources of motion capture (mocap) data for humanoid environments: default (provided by us), LAFAN1, and AMASS. The first two are available on the LocoMuJoCo HuggingFace dataset repository and will be downloaded and cached automatically for you. AMASS needs to be downloaded and installed separately due to its licensing. See here for more information about the installation.

This is how you can visualize the datasets:

from loco_mujoco.task_factories import ImitationFactory, LAFAN1DatasetConf, DefaultDatasetConf, AMASSDatasetConf


# example: you can add as many datasets as you want to the lists!
env = ImitationFactory.make("UnitreeH1",
                            default_dataset_conf=DefaultDatasetConf(["squat"]),
                            lafan1_dataset_conf=LAFAN1DatasetConf(["dance2_subject4", "walk1_subject1"]),
                            # if SMPL and AMASS are installed, you can use the following:
                            #amass_dataset_conf=AMASSDatasetConf(["DanceDB/DanceDB/20120911_TheodorosSourmelis/Capoeira_Theodoros_v2_C3D_poses"])
                            )

env.play_trajectory(n_episodes=3, n_steps_per_episode=500, render=True)
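
Here, play_trajectory replays the loaded mocap trajectories kinematically in the MuJoCo viewer; n_episodes and n_steps_per_episode bound how much of the data is shown, which makes this a quick way to sanity-check a dataset before training.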

Speeding up Dataset Loading

LocoMuJoCo stores only joint positions and velocities in its datasets to save memory; all other quantities are computed with forward kinematics at load time. To speed up dataset loading, you can define caches for the datasets. The forward kinematics results are then stored in a cache file, which is loaded on the next run:

loco-mujoco-set-all-caches --path <path to cache>

For instance, you could run:

loco-mujoco-set-all-caches --path "$HOME/.loco-mujoco-caches"

Environments

Want a quick overview of all available environments? You can find one here, and in more detail in the Documentation.

And stay tuned! There are many more to come ...


Tutorials

We provide a set of tutorials to help you get started with LocoMuJoCo. You can find them in the tutorials folder or with more explanation in the documentation.

If you want to check out training examples for PPO, GAIL, AMP, or DeepMimic agents, you can find them in the training examples folder. For instance, here is a DeepMimic agent that learns human-like walking in all directions; it was trained in 36 minutes on an RTX 3080 Ti.


Citation

@inproceedings{alhafez2023b,
  title={LocoMuJoCo: A Comprehensive Imitation Learning Benchmark for Locomotion},
  author={Firas Al-Hafez and Guoping Zhao and Jan Peters and Davide Tateo},
  booktitle={6th Robot Learning Workshop, NeurIPS},
  year={2023}
}
