[Question] Using Isaac ROS repo ROS 2 topics in Isaac Lab #2317
Replies: 2 comments
-
Thank you for posting this. It is not clear exactly what you are trying to train. It seems DetectNet is just an intermediate process that would give you a form of ground truth. Could you elaborate and clarify what your training task is? I will move this post to our discussions for the team to follow up.
-
Hi, let me elaborate. I want to use an object detection algorithm like DetectNet to get inference, and feed that inference data as an observation in my Isaac Lab environment. My training task goal is that the robot should move around to increase the confidence of the detected object, or detect more objects in the environment. So far, my code works fine for a single environment; however, it fails with multiple envs. What I did was add a custom function to my ObservationsCfg class. To make this work, I created a class called DetectNetDetections that subscribes to the /detectnet/detections topic and stores the detection data in self.detections_data (the class is included in my question below).
Then I customized the train.py file for sb3, adding the ROS 2 initialization and creating an unwrapped_env.detectnet_detections_instance that holds the data from the detections_data array.
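The change looks roughly like this (a sketch of the relevant lines only; the node name "detectnet_listener" is just for illustration, and DetectNetDetections is the class from my question below):

```python
import threading

import rclpy

# initialize ROS 2 and create a node for the detection subscriber
rclpy.init()
node = rclpy.create_node("detectnet_listener")

# DetectNetDetections subscribes to /detectnet/detections and fills detections_data
detectnet_detections_instance = DetectNetDetections(node)

# spin ROS 2 in a background thread so the callback keeps firing during training
threading.Thread(target=rclpy.spin, args=(node,), daemon=True).start()

# expose the instance on the unwrapped env so the observation function can reach it
unwrapped_env = env.unwrapped
unwrapped_env.detectnet_detections_instance = detectnet_detections_instance
```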
Then I use my custom observation function to read the detectnet data (mainly the confidence scores) from the detectnet_detections_instance that is now stored on unwrapped_env.
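In outline, the observation term does something like this (a sketch; the fixed width of 5 detection slots is just for illustration):

```python
import torch
from isaaclab.managers import SceneEntityCfg


def get_confidence(env, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
    num_envs = env.scene[asset_cfg.name].data.root_state_w.shape[0]
    instance = env.detectnet_detections_instance
    with instance.lock:
        scores = [score for _, score in instance.detections_data]
    # pad/clip to a fixed width and broadcast to every env -- every env ends up
    # with the same values, since there is only one /detectnet/detections topic
    obs = torch.zeros(num_envs, 5, device=instance.device)
    for i, score in enumerate(scores[:5]):
        obs[:, i] = score
    return obs
```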
My issue is that this works fine for a single environment. I use the full Isaac Sim instead of the bare one so that I can use the premade action graphs from the Isaac Nova Carter ROS USD file. Hence, when I launch this in Isaac Lab, topics like /front_stereo_camera/left/image_rect_color are published. But when I run 2 envs in Isaac Lab, there are two Nova Carter robots, so each should publish its own topics, right? Yet ros2 topic list shows only one /front_stereo_camera/left/image_rect_color. And when I bring up /detectnet/detections using the commands from the Isaac ROS quickstart (https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_object_detection/isaac_ros_detectnet/index.html#quickstart), there is likewise a single /detectnet/detections topic instead of one per robot (even though each robot sees different things). So how do I fix this?
-
Question
Hello, I am trying to use Isaac ROS modules in the RL training env in Isaac Lab for the Nova Carter robot.
I am using the isaac_ros_detectnet:
https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_object_detection/isaac_ros_detectnet/index.html#quickstart
I am confused about how it is supposed to be integrated when running multiple environments: when I use 4 envs for training and check ros2 topic list, there is only one topic there instead of 4. I am guessing it is not meant for multiple envs, but if I were supposed to do that, how can I achieve it?
Basically, I am trying to use the action graphs provided with the Nova Carter robot, use isaac_ros_detectnet to get the inference, and use it as an observation for training.
This is my cfg file:
```python
# Isaac Lab imports
from isaaclab.envs import ManagerBasedRLEnvCfg
from isaaclab.managers import EventTermCfg as EventTerm
from isaaclab.managers import RewardTermCfg as RewTerm
from isaaclab.managers import SceneEntityCfg
from isaaclab.scene import InteractiveSceneCfg
from isaaclab.utils import configclass

# import dependencies for ros2
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
import random
import os
import sys

sys.path.append(os.path.join(os.environ["HOME"], "niris-isaaclab/IsaacLab/source/isaaclab_tasks/isaaclab_tasks/manager_based/classic/nova_carter_conf/"))
import mdp


# Scene cfg
@configclass
class WheeledSceneCfg(InteractiveSceneCfg):
    """Configuration for a wheeled robot scene."""


# Actions cfg
@configclass
class ActionsCfg:
    """Action specifications for the environment."""

    joint_velocities = mdp.JointVelocityActionCfg(
        asset_name="robot", joint_names=["joint_wheel_left", "joint_wheel_right"], scale=5.0
    )
    joint_positions = mdp.JointPositionActionCfg(
        asset_name="robot", joint_names=["joint_swing_left", "joint_swing_right"], scale=5.0
    )


# Observations cfg
@configclass
class ObservationsCfg:
    """Observation specifications for the MDP."""


# Event cfg
@configclass
class EventCfg:
    """Configuration for events with fixed orientation."""

    reset_scene = EventTerm(
        func=mdp.reset_root_state_uniform,
        params={
            "pose_range": {
                "x": (-2.0, 2.0),    # random x position
                "y": (-2.0, 2.0),    # random y position
                "z": (0.05, 0.05),   # fixed z at 0.05
                "roll": (0.0, 0.0),  # no rotation
                "pitch": (0.0, 0.0),
                "yaw": (0.0, 0.0),
            },
            "velocity_range": {
                "x": (0.0, 0.0),     # no linear velocity
                "y": (0.0, 0.0),
                "z": (0.0, 0.0),
                "roll": (0.0, 0.0),  # no angular velocity
                "pitch": (0.0, 0.0),
                "yaw": (0.0, 0.0),
            },
            "asset_cfg": SceneEntityCfg(name="robot"),
        },
        mode="reset",
    )
    # reset_scene = EventTerm(func=mdp.reset_scene_to_default, mode="reset")


# Rewards cfg
@configclass
class RewardsCfg:
    """Reward terms for the MDP."""

    # (1) Constant running reward
    alive = RewTerm(func=mdp.is_alive, weight=1.0)


# Terminations cfg
@configclass
class TerminationsCfg:
    """Termination terms for the MDP."""


# Commands cfg
@configclass
class CommandsCfg:
    """Command terms for the MDP."""


# Curriculum cfg
@configclass
class CurriculumCfg:
    """Configuration for the curriculum."""

    pass


@configclass
class WheeledEnvCfg(ManagerBasedRLEnvCfg):
    """Configuration for the wheeled environment."""

    # Scene settings
    scene: WheeledSceneCfg = WheeledSceneCfg(num_envs=4096, env_spacing=40.0)
    # Basic settings
    observations: ObservationsCfg = ObservationsCfg()
    actions: ActionsCfg = ActionsCfg()
    events: EventCfg = EventCfg()
    # MDP settings
    curriculum: CurriculumCfg = CurriculumCfg()
    rewards: RewardsCfg = RewardsCfg()
    terminations: TerminationsCfg = TerminationsCfg()
    # No command generator
    commands: CommandsCfg = CommandsCfg()
```
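The ObservationsCfg body is trimmed above; the custom term is registered along these lines, using Isaac Lab's observation manager types (a sketch; the single "policy" group layout is an assumption):

```python
from isaaclab.managers import ObservationGroupCfg as ObsGroup
from isaaclab.managers import ObservationTermCfg as ObsTerm


@configclass
class ObservationsCfg:
    """Observation specifications for the MDP."""

    @configclass
    class PolicyCfg(ObsGroup):
        """Observations for the policy group."""

        # custom term that reads DetectNet confidences (function shown below)
        detections = ObsTerm(func=mdp.get_confidence)

        def __post_init__(self):
            self.enable_corruption = False
            self.concatenate_terms = True

    policy: PolicyCfg = PolicyCfg()
```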
And this is my custom nova_carter_ros2.py file in the mdp folder:
```python
import torch
from threading import Lock

from vision_msgs.msg import Detection2DArray

from isaaclab.assets import Articulation
from isaaclab.envs import ManagerBasedEnv
from isaaclab.managers import SceneEntityCfg


class DetectNetDetections:
    def __init__(self, node, device="cuda:0"):
        self.node = node
        self.detections_data = []
        self.device = device
        self.lock = Lock()  # lock for thread safety between the ROS callback and the env
        self.class_id_mapping = {"person": 0, "car": 1, "bicycle": 2}
        self.detections_sub = self.node.create_subscription(
            Detection2DArray, "/detectnet/detections", self._detections_callback, 10
        )


def get_confidence(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
    asset: Articulation = env.scene[asset_cfg.name]
    num_envs = asset.data.root_state_w.shape[0]
    # ...
```
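The callback body is trimmed above; it does roughly this (a sketch; the field names follow vision_msgs in ROS 2 Humble, while older distros expose result.id / result.score directly):

```python
# sketch of the subscriber callback (vision_msgs field names assumed per ROS 2 Humble)
def _detections_callback(self, msg: Detection2DArray):
    with self.lock:
        # keep the top hypothesis (class label, confidence score) for each detection
        self.detections_data = [
            (det.results[0].hypothesis.class_id, det.results[0].hypothesis.score)
            for det in msg.detections
            if det.results
        ]
```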
As you can see, I create env.detectnet_detections_instance when I run my custom training file:
```python
# create isaac environment
env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)
```
But I think it is just giving me the last-spawned or first-spawned env's inference data, and not each env's individual data. How do I fix that?