
RLController

An FSM controller that integrates reinforcement learning policies with mc_rtc for robot control. The package provides example policies for the H1 humanoid robot and supports the ONNX format for policy deployment.

Note: ONNX Runtime is bundled with this repository, so no external installation is required.

Architecture

The controller is organized around the following components:

  • policy/: the ONNX policy files shipped with the controller
  • etc/RLController.in.yaml: controller and per-policy configuration
  • src/utils.cpp: construction of the observation vectors fed to each policy
  • src/PolicySimulatorHandling.h: joint-order mapping between mc_rtc and the training simulators

Building

Dependencies

All required dependencies and their specific versions are available in this branch of the mc_rtc_superbuild, which provides a quick installation setup.

The ExternalForcesEstimator plugin is required for the controller to work properly, so it is enabled by default in the controller configuration file.
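For reference, a minimal sketch of how such a plugin can be declared, assuming the standard mc_rtc Plugins key in the controller configuration:

Plugins: [ExternalForcesEstimator]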

Build commands

mkdir -p build && cd build
cmake .. -DCMAKE_BUILD_TYPE=RelWithDebInfo
make -j$(nproc)
make install
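Once installed, the controller can be selected in your mc_rtc configuration. A minimal sketch, assuming the usual ~/.config/mc_rtc/mc_rtc.yaml location and an H1 robot module available under that name:

MainRobot: H1
Enabled: RLController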

Usage

Robot and Simulator Support

The controller is optimized for the H1 humanoid robot with minimal configuration required. Support for other robots is possible with additional adaptation (see Adding a New Robot).

Policies trained in ManiSkill and IsaacLab are fully supported. For policies from other training environments, you can add custom simulator support (see Adding a New Simulator).

Policy Management

Default policies are located in the policy/ directory. The controller supports switching between multiple policies at runtime through the GUI (RLController/Policy section).

Important: Policy transitions should be compatible with the current state. For example, switching from standing to walking works because the walking policy can handle observations from a standing state, but the reverse may not be true without proper handling.

Velocity Control with Joystick

For policies that support velocity commands, the controller can be driven through the mc_joystick plugin. This allows real-time control from a game controller: linear velocity through the left joystick or the arrow buttons, and yaw through the right joystick.

Configuring Policies

  • Add your policy files to the policy/ directory (ONNX format)

  • Configure policy parameters in etc/RLController.in.yaml; a hypothetical configuration sketch follows after this list. Each policy can specify:

    • Robot name*
    • Control mode (position/torque)*
    • QP usage (true/false)*
    • Simulator used during training*
    • Joint indices used by the policy*
    • PD gains ratio
    • PD gains (kp and kd)*
    • Control speed (in m/s) when using the joystick plugin

Parameters marked with "*" are required; the others are optional.

  • Define observation vectors in src/utils.cpp (l.131); a minimal sketch follows below. The file includes default examples for:
    • Standing policy (case 0)
    • Walking policies (cases 1-2)
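The exact key names are defined by the controller and must be checked directly in etc/RLController.in.yaml; the entry below is only a hypothetical sketch of the parameters listed above, with every key name assumed:

h1_walking:                        # hypothetical policy entry
  robot: h1                        # robot name (required)
  control_mode: position           # position or torque (required)
  use_qp: true                     # whether the QP is used (required)
  simulator: IsaacLab              # simulator used during training (required)
  joint_indices: [0, 1, 2, 3]      # joint indices used by the policy (required)
  kp: [150.0, 150.0, 200.0, 200.0] # PD gains (required)
  kd: [2.0, 2.0, 4.0, 4.0]
  gains_ratio: 1.0                 # optional scaling of the PD gains
  joystick_speed: 0.5              # optional, control speed in m/s with mc_joystick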
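For the observation vectors themselves, the sketch below shows one plausible shape for such a builder, assuming Eigen-based observations; the actual function signature, state accessors, and observation layout must be taken from src/utils.cpp:

#include <Eigen/Dense>

// Hypothetical sketch of an observation builder such as the one in
// src/utils.cpp: concatenates base and joint state into a single vector.
Eigen::VectorXd buildObservation(int policyCase,
                                 const Eigen::Vector3d & baseAngVel,
                                 const Eigen::Vector3d & projectedGravity,
                                 const Eigen::VectorXd & jointPos,
                                 const Eigen::VectorXd & jointVel,
                                 const Eigen::VectorXd & lastAction,
                                 const Eigen::Vector3d & velocityCommand)
{
  switch(policyCase)
  {
    case 0: // standing policy: no velocity command in the observation
    {
      Eigen::VectorXd obs(6 + jointPos.size() + jointVel.size() + lastAction.size());
      obs << baseAngVel, projectedGravity, jointPos, jointVel, lastAction;
      return obs;
    }
    default: // walking policies: the velocity command is prepended
    {
      Eigen::VectorXd obs(9 + jointPos.size() + jointVel.size() + lastAction.size());
      obs << velocityCommand, baseAngVel, projectedGravity, jointPos, jointVel, lastAction;
      return obs;
    }
  }
}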

Advanced Setup

Adding a New Simulator

Some simulators use different joint ordering than the URDF/mc_rtc convention. To add support:

  • Define the joint mapping in src/PolicySimulatorHandling.h by setting the mcRtcToSimuIdx_ member variable (see the sketch after this list)
  • If the mapping is defined in the header, the class will automatically handle otherwise unrecognized simulator or robot names
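As an illustration, the sketch below shows one way such a mapping can look; mcRtcToSimuIdx_ is the member named by the repository, while the surrounding class, joint count, and helper are assumptions:

#include <vector>

// Hypothetical excerpt: entry i gives the simulator index of mc_rtc joint i
// (URDF order), here for an imaginary 4-joint robot whose simulator swaps
// the two joint pairs.
struct MySimulatorHandling
{
  std::vector<int> mcRtcToSimuIdx_ = {2, 3, 0, 1};

  // Reorder a joint vector from mc_rtc (URDF) order to simulator order.
  std::vector<double> toSimuOrder(const std::vector<double> & mcRtcJoints) const
  {
    std::vector<double> simuJoints(mcRtcJoints.size());
    for(size_t i = 0; i < mcRtcJoints.size(); ++i)
    {
      simuJoints[mcRtcToSimuIdx_[i]] = mcRtcJoints[i];
    }
    return simuJoints;
  }
};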

Adding a New Robot

To use the controller with a different robot, modify the following:

  • Configuration file (etc/RLController.in.yaml (l.60)): add your robot under the Robot category, with mc_rtc_joints_order listing the joints in URDF order (see the sketch after this list)
  • Joint mapping (src/PolicySimulatorHandling.h): specify the mcRtcToSimuIdx_ mapping for your robot, as when adding a new simulator
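A hedged sketch of such a robot entry, using the Robot category and mc_rtc_joints_order key named above; the robot name, joint names, and exact nesting are illustrative:

Robot:
  MyRobot:                   # name used by your policies
    mc_rtc_joints_order:     # joint names in URDF order (shortened list)
      - left_hip_yaw_joint
      - left_hip_pitch_joint
      - left_knee_joint
      - left_ankle_joint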
