Question
Hi everyone, I’m developing a reinforcement-learning controller for a bipedal leg in Isaac Lab (training PPO with RSL-RL), and I’m running into an issue with how the PD actuator behaves in simulation: the policy actions are far too high and don’t match the actual joint positions, and I can’t work out why.
I’m using DCMotorCfg for my actuators. Here’s a simplified snippet of my setup:
```python
actuators={
    "hip_knee": DCMotorCfg(
        joint_names_expr=[".*_10_9_.*"],
        stiffness={".*": 10.0},
        damping={".*": 1.0},
        effort_limit=36.0,
        saturation_effort=53.0 * 2,
        velocity_limit=235.0 * 2 * 3.14159 / 60,
        ...
    ),
    "hip_thigh": DCMotorCfg(
        joint_names_expr=[".*_70_9_.*", ".*_70_10_.*"],
        stiffness={".*": 4.0},
        damping={".*": 0.4},
        effort_limit=8.3 * 2,
        saturation_effort=24.8 * 2,
        velocity_limit=310.0 * 2 * 3.14159 / 60,
        ...
    ),
    "ankles": DCMotorCfg(
        joint_names_expr=[".*_60_6_.*"],
        stiffness={".*": 8.0},
        damping={".*": 0.8},
        effort_limit=6.0 * 1.4 * 2,
        saturation_effort=18.0 * 1.4 * 2,
        velocity_limit=490.0 * 2 * 3.14159 / 60,
        ...
    ),
}
```
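For context on the effort and velocity limits above: as far as I understand, a DC-motor actuator model clips the PD torque to a velocity-dependent torque-speed envelope before applying it, so `saturation_effort` and `velocity_limit` interact with `effort_limit`. A rough sketch of that clipping, using the hip_knee numbers (this is my reading of the model, not the actual Isaac Lab source):

```python
import numpy as np

def dc_motor_clip(tau_des, joint_vel, saturation_effort, effort_limit, velocity_limit):
    # Maximum torque available at this speed: shrinks linearly from
    # saturation_effort at zero velocity, capped at effort_limit.
    tau_max = np.clip(saturation_effort * (1.0 - joint_vel / velocity_limit), 0.0, effort_limit)
    # Minimum (most negative) torque available, mirrored for reverse motion.
    tau_min = np.clip(saturation_effort * (-1.0 - joint_vel / velocity_limit), -effort_limit, 0.0)
    return float(np.clip(tau_des, tau_min, tau_max))

# hip_knee numbers: saturation 53*2 = 106 N*m, effort limit 36 N*m,
# velocity limit 235 rpm (* 2) ~= 24.6 rad/s
print(dc_motor_clip(100.0, 0.0, 106.0, 36.0, 24.6))   # at rest: clipped to effort_limit
print(dc_motor_clip(100.0, 24.6, 106.0, 36.0, 24.6))  # at the velocity limit: no positive torque left
```

So a large commanded torque is silently truncated, which is worth keeping in mind when reading the torque plots.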
After training, I played back the policy and plotted:
- the policy action (the target position in radians),
- the actual joint position in the environment, and
- the torque output by each joint in the environment.

What I observe is:
torque ≈ Kp * target_position(policy)

instead of the expected PD relationship:

τ = Kp * (q_des − q) + Kd * (q̇_des − q̇) + feedforward torque
The torque curve nearly matches the policy’s target-position curve even though the actual joint position is different, and as a result the policy’s actions end up much larger than expected.
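One way to sanity-check this is to recompute the PD law from the logged signals and compare it to the applied torque. Note that if the joint barely moves (q ≈ 0, q̇ small), the PD torque genuinely reduces to roughly Kp * q_des, which would look exactly like "torque matches the action" in a plot. A minimal sketch with made-up numbers (the function and values are illustrative, not Isaac Lab internals):

```python
def pd_torque(kp, kd, q_des, q, qd_des=0.0, qd=0.0):
    """Expected PD torque: tau = Kp*(q_des - q) + Kd*(qd_des - qd)."""
    return kp * (q_des - q) + kd * (qd_des - qd)

# Good tracking: position error is small, so torque looks nothing like Kp * q_des.
print(pd_torque(10.0, 1.0, q_des=0.5, q=0.48, qd=0.05))
# Joint barely moves (q ~ 0): torque collapses to ~ Kp * q_des,
# reproducing the pattern described above.
print(pd_torque(10.0, 1.0, q_des=0.5, q=0.02, qd=0.1))
```

Running this recomputation over the logged rollout arrays should reveal whether the simulator really deviates from the PD law or whether the joint simply isn’t tracking the target.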
My question:
Is this behavior expected when using DCMotorCfg with the PD torque model in Isaac Lab? Or does this indicate a misconfiguration in the actuator setup or RL training?
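Separately, it may be worth verifying what the plotted "policy action" actually is: joint-position action terms in Isaac Lab typically scale the raw network output and add an offset (commonly the default joint position) before it becomes q_des, so the raw action and the PD target can differ considerably. A minimal sketch of that mapping (the `scale` value and names here are assumptions; check your action config):

```python
def action_to_target(raw_action, default_joint_pos, scale=0.25):
    # q_des = default joint position + scale * raw network output
    # (mirrors a scale-and-offset joint-position action term; the scale
    # value is an assumption, not a library default)
    return default_joint_pos + scale * raw_action

# a raw action of 2.0 around a default pose of 0.3 rad maps to a
# much smaller position target than the raw action suggests:
print(action_to_target(2.0, 0.3))
```

If the plot shows the raw action rather than the scaled-and-offset target, the "actions too high" observation may partly be a plotting artifact.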
Build Info
- Isaac Lab Version: [2.0.2]
- Isaac Sim Version: [4.5.0-rc.36+release.19112.f59b3005.gl]
