Hi, I ran into a somewhat unintuitive QoS issue with the RectifyNode.
Due to the lazy subscription and what I think is dynamic QoS configuration based on the topics discovered at run-time, the RectifyNode behaves slightly differently depending on whether the downstream consumers or the upstream data providers are started first:
In my first test, I run:
- ros2 run image_proc rectify_node --ros-args --remap image:=/stereo/left/image_raw --remap image_rect:=/stereo/left/image_rect
- My in_vivo_data_loader_node camera node which publishes to /stereo/left/image_raw
- ros2 topic hz /stereo/left/image_rect
Then everything works fine and I get synchronized input topics. In this case, $ ros2 topic info /stereo/left/image_raw --verbose outputs Reliability: RELIABLE for the in_vivo_data_loader_node publisher and the RectifyNode's subscriber.
However, when I switch the startup order to:
- ros2 run image_proc rectify_node --ros-args --remap image:=/stereo/left/image_raw --remap image_rect:=/stereo/left/image_rect
- ros2 topic hz /stereo/left/image_rect
- My in_vivo_data_loader_node camera node which publishes to /stereo/left/image_raw
i.e. the consumer is started before the camera node, I get the "Topics '/stereo/left/image_raw' and '/stereo/left/camera_info' do not appear to be synchronized." warnings and no usable output stream, because almost no messages match and the QoS profiles no longer line up:
```
$ ros2 topic info /stereo/left/image_raw --verbose
Type: sensor_msgs/msg/Image

Publisher count: 1

Node name: in_vivo_data_loader_node
Node namespace: /
Topic type: sensor_msgs/msg/Image
Topic type hash: RIHS01_d31d41a9a4c4bc8eae9be757b0beed306564f7526c88ea6a4588fb9582527d47
Endpoint type: PUBLISHER
GID: 01.0f.bc.f9.e6.a5.8e.7c.00.00.00.00.00.00.13.03
QoS profile:
  Reliability: RELIABLE
  History (Depth): UNKNOWN
  Durability: VOLATILE
  Lifespan: Infinite
  Deadline: Infinite
  Liveliness: AUTOMATIC
  Liveliness lease duration: Infinite

Subscription count: 1

Node name: RectifyNode
Node namespace: /
Topic type: sensor_msgs/msg/Image
Topic type hash: RIHS01_d31d41a9a4c4bc8eae9be757b0beed306564f7526c88ea6a4588fb9582527d47
Endpoint type: SUBSCRIPTION
GID: 01.0f.bc.f9.c0.a5.91.6e.00.00.00.00.00.00.15.04
QoS profile:
  Reliability: BEST_EFFORT
  History (Depth): UNKNOWN
  Durability: VOLATILE
  Lifespan: Infinite
  Deadline: Infinite
  Liveliness: AUTOMATIC
  Liveliness lease duration: Infinite
```
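Note that the endpoint pair above is still nominally compatible under the DDS "request vs. offered" rule (a RELIABLE offer satisfies a BEST_EFFORT request), so the graph does connect; the consequence is that delivery to that subscription becomes best-effort, i.e. drops are allowed. A minimal sketch of that rule (plain Python, the function names are mine, not a real DDS API):

```python
# Sketch of the DDS "request vs. offered" reliability rule, to show why the
# RELIABLE publisher and BEST_EFFORT subscription above still match, while
# the resulting connection runs best-effort.
BEST_EFFORT, RELIABLE = 1, 2  # ordering: RELIABLE is the stronger offer

def reliability_compatible(offered: int, requested: int) -> bool:
    """A writer/reader pair matches iff the offer is at least as strong
    as the request."""
    return offered >= requested

def effective_reliability(offered: int, requested: int) -> int:
    """The weaker of the two governs actual delivery on the connection."""
    return min(offered, requested)

# RELIABLE camera publisher + BEST_EFFORT RectifyNode subscription:
assert reliability_compatible(RELIABLE, BEST_EFFORT)                # they match...
assert effective_reliability(RELIABLE, BEST_EFFORT) == BEST_EFFORT  # ...but drops are allowed
```

With large images and best-effort delivery, dropped fragments would be consistent with the near-total lack of sync matches I'm seeing.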
Because I'm running this from within a nested launch-file hierarchy, the startup order is usually not controlled. Of course I can fix this in my own launch script, but we have multiple users, and enforcing the order across the whole project and all launch files is difficult. I'd like a cleaner solution.
Is this an unintended side effect or correct behavior? Does anyone have a recommendation for how to cleanly solve this?
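For reference, the workaround I can apply in my own launch files is to gate the RectifyNode on the camera process with an event handler. A sketch (the camera node's package name is a placeholder, and note this only orders process startup, not publisher discovery, so it narrows the race rather than eliminating it):

```python
# Launch sketch: start rectify_node only after the camera process has
# started, so the lazy subscription is more likely to see the RELIABLE
# publisher when it detects QoS. Camera package name is a placeholder.
from launch import LaunchDescription
from launch.actions import RegisterEventHandler
from launch.event_handlers import OnProcessStart
from launch_ros.actions import Node


def generate_launch_description():
    camera = Node(
        package='my_camera_pkg',  # placeholder for my actual package
        executable='in_vivo_data_loader_node',
    )
    rectify = Node(
        package='image_proc',
        executable='rectify_node',
        remappings=[
            ('image', '/stereo/left/image_raw'),
            ('image_rect', '/stereo/left/image_rect'),
        ],
    )
    return LaunchDescription([
        camera,
        # Fire rectify_node once the camera process is up.
        RegisterEventHandler(
            OnProcessStart(target_action=camera, on_start=[rectify])
        ),
    ])
```

This doesn't scale to our nested launch hierarchy, though, which is why I'm asking for a cleaner solution.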
Side notes:
- The ROS 2 design doc https://design.ros2.org/articles/qos_configurability.html mentions that QoS should not be dynamically reconfigurable. That covers a slightly different topic, but I feel the rationale should apply here as well. At the very least the behavior should be well documented, and I haven't found anything.
- I'm on ROS 2 Jazzy, where the QoS overriding doesn't seem to be implemented yet, so this is especially annoying because I have no clean control over it.
- My images are quite large, which may be the root cause of the synchronization issue in the RectifyNode.
- I only tested the RectifyNode, but I assume this applies to all similar lazy subscribers.