[Question] Question Regarding Sim2Real Transfer for IsaacLab UR3 Reach Task #2343
Replies: 3 comments
-
Thanks for posting this question. This is an excellent topic for our Discussions section. I will move this post to a discussion for the team to follow up. Thank you for your interest in Isaac Lab.
-
Hey @Holiclife-KTH, I recently created a repository that might be helpful for you: https://github.com/louislelay/isaaclab_ur_reach_sim2real. In it, I deployed a policy using ROS 2. Unfortunately, I wasn't able to test it on a real UR10, but it worked well in URSim. Hopefully, it can give you some useful ideas! I'm also planning to release a similar repo for the Kinova Gen3 arm in the next few days; I'll share the link here once it's ready. For that one, I did test it on a real robot. Just a heads-up: both projects focus only on reach tasks, but I think they could still be good references for your setup.
-
Following up, the team has the following suggestions. First, assuming that you are:
Then, it is likely that the drives/controller that you are using in sim is far less stiff than UR’s default controller in the real world. A few possible things to explore would be:
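One mitigation commonly explored in this situation (a sketch of a generic technique, not a specific recommendation from the team; the `ActionFilter` name and the `alpha` default are hypothetical) is to low-pass filter the policy's joint-position targets before they reach the robot driver, so step-to-step jumps in the raw policy output are attenuated:

```python
class ActionFilter:
    """Exponential moving average over joint-position targets.

    Smooths step-to-step changes in the policy output before they
    reach the real robot's position controller. alpha in (0, 1]:
    smaller values give smoother but laggier motion.
    """

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha
        self._prev = None

    def __call__(self, target):
        # first call: initialize the filter state and pass through
        if self._prev is None:
            self._prev = list(target)
        # blend the new target with the previous filtered value, per joint
        self._prev = [
            self.alpha * t + (1.0 - self.alpha) * p
            for t, p in zip(target, self._prev)
        ]
        return self._prev


# Example: a sudden jump in the raw target is attenuated.
f = ActionFilter(alpha=0.5)
print(f([0.0, 0.0]))   # first call passes through: [0.0, 0.0]
print(f([1.0, 1.0]))   # pulled toward the previous value: [0.5, 0.5]
```

Note that if you filter at deployment, the same filter should ideally be applied inside the simulation action pipeline too, otherwise the filtering itself becomes a sim-to-real gap.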
Second, a good starting point for addressing the issues you mention would be to perform some quick visualization in simulation to assess whether the behavior also occurs there, for example by plotting the policy outputs and the robot joint trajectory while a trained policy is deployed in simulation. If the issue exists in simulation, it indicates that the RL task (reward function) does not adequately capture the smoothness requirements. If there is no issue in simulation, then it is a sim-to-real gap, and you can follow (2) and (3) above.
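The diagnostic step above can be sketched numerically as well as visually. Assuming you have logged the per-step commanded joint positions into an array (the `jerkiness` helper and the `dt` default below are hypothetical, for illustration only), a second finite difference gives a rough acceleration proxy whose magnitude flags jerky command trajectories:

```python
import numpy as np

def jerkiness(actions: np.ndarray, dt: float = 0.02) -> np.ndarray:
    """Rough per-joint jerkiness of a logged action trajectory.

    actions: (T, num_joints) array of commanded joint positions
    dt: control period in seconds
    Returns the RMS of the second difference (a discrete acceleration
    proxy) per joint; large values flag jerky commands.
    """
    accel = np.diff(actions, n=2, axis=0) / dt**2
    return np.sqrt(np.mean(accel**2, axis=0))

# Example: a smooth linear ramp vs. the same ramp with an
# alternating offset (the worst case for a position controller).
t = np.linspace(0.0, 1.0, 51)
smooth = np.stack([t, t], axis=1)
noisy = smooth + 0.05 * (-1.0) ** np.arange(51)[:, None]
print(jerkiness(smooth))  # essentially zero for a linear ramp
print(jerkiness(noisy))   # orders of magnitude larger
```

Comparing this metric between the simulated rollout and the real deployment log helps localize the problem: similar values point at the reward design, diverging values point at the sim-to-real gap.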
-
I modified the IsaacLab reach task by replacing the UR10 with a UR3 and successfully trained a policy using RSL-RL.
After training, I attempted to deploy the resulting policy.onnx file on the real robot. However, the policy outputs JointPositionAction targets, and when I apply them directly to the robot via the driver, the resulting motion is jerky and not smooth at all.
Has anyone here successfully performed Sim2Real transfer in a similar setup? I would really appreciate any tips or insights regarding smoothing the control or improving the trajectory execution on the real hardware.
Thank you!