Check if issue already exists
Describe the bug
My pipeline is:
depthai_ros_driver -> image_proc::RectifyNode -> AprilTagNode
After 20-25 minutes of running the pipeline, I started to observe a de-synchronization issue in the apriltag_node.
Initially I get:
camera | [apriltag_node-2] [WARN 1768403352.560888186] [apriltag_node]: [image_transport] Topics '/oak/oak/rgb/image_rect' and '/oak/oak/rgb/camera_info' do not appear to be synchronized. In the last 10s:
camera | [apriltag_node-2] Image messages received: 20
camera | [apriltag_node-2] CameraInfo messages received: 20
camera | [apriltag_node-2] Synchronized pairs: 0 (checkImagesSynchronized() at ./src/camera_subscriber.cpp:85)
after a while:
camera | [apriltag_node-2] [WARN 1768403617.560908363] [apriltag_node]: [image_transport] Topics '/oak/oak/rgb/image_rect' and '/oak/oak/rgb/camera_info' do not appear to be synchronized. In the last 10s:
camera | [apriltag_node-2] Image messages received: 1
camera | [apriltag_node-2] CameraInfo messages received: 20
camera | [apriltag_node-2] Synchronized pairs: 0 (checkImagesSynchronized() at ./src/camera_subscriber.cpp:85)
I investigated it, but it turned out that apriltag_node is not the problem (see this issue).
Then I started debugging the next element in my pipeline - RectifyNode (see this issue), but it does not seem to be the issue either. I checked the "age" of the incoming messages in the rectify_node, and it turned out that the system continues to publish images at the expected frequency, but the images are increasingly old (the "age" grows from ~50 ms to ~900–1000 ms). This breaks ExactTime-based synchronization in downstream nodes (e.g. apriltag_node) and degrades perception quality, or makes it fail completely.
To sum up, this does not appear to be a timestamping issue or a processing performance issue, but rather image frames being buffered instead of old frames being dropped.
Minimal Reproducible Example
Pipeline:
- depthai ROS driver (OAK, RGB only)
- image_proc::RectifyNode (recently switched to my custom RectifyNode, still a simple rectify + republish)
- apriltag_node (CameraSubscriber + ExactTime)
Configuration:
- OAK publishes RGB images at 5 Hz
- Image publisher uses default QoS (RELIABLE + default history depth) <-- could this be the problem? (see the QoS sketch after this list)
- RectifyNode execution time ~3 ms (stable)
- apriltag_node uses sensor_data QoS
- System clock stable; however, i_update_ros_base_time_on_ros_msg: true is enabled
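For reference, a minimal rclpy sketch printing the two QoS profiles involved (values are the standard ROS 2 definitions; shown only to ground the question above):

```python
# Sketch: the two QoS profiles in play, printed for reference. A RELIABLE
# publisher still matches a BEST_EFFORT subscriber (delivery to it is then
# best-effort), but the publisher side may keep frames in its history and
# retransmit them instead of dropping them.
from rclpy.qos import QoSProfile, qos_profile_sensor_data

default_qos = QoSProfile(depth=10)  # ROS 2 default: RELIABLE, KEEP_LAST(10)

print(default_qos.reliability, default_qos.history, default_qos.depth)
print(qos_profile_sensor_data.reliability,   # BEST_EFFORT
      qos_profile_sensor_data.history,       # KEEP_LAST
      qos_profile_sensor_data.depth)         # 5
```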
Observed behavior:
- Initial runtime:
- image age ~40–60 ms
- system works correctly
- After ~20–25 minutes:
- image age grows to ~900–950 ms in the downstream RectifyNode
- image_raw stays at 5 Hz, while image_rect drops to 1 Hz
- rectify execution time remains ~3 ms, which rules out a processing bottleneck
- downstream nodes receive increasingly old frames
- ExactTime synchronization in downstream packages eventually breaks (image / camera_info no longer match in time)
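For context, the pairing that breaks is equivalent to an exact-time synchronizer. A rough rclpy sketch of the same pattern (apriltag_node does this in C++ via image_transport's CameraSubscriber; topic names are from my setup):

```python
# Sketch: exact-time pairing of image and camera_info, the same pattern
# apriltag_node relies on. TimeSynchronizer only fires its callback for
# messages with identical header stamps, so once image_rect lags ~1 s
# behind camera_info, no pair within the queue matches and pairing stops.
import rclpy
from rclpy.node import Node
from message_filters import Subscriber, TimeSynchronizer
from sensor_msgs.msg import CameraInfo, Image


class ExactPairCheck(Node):
    def __init__(self):
        super().__init__('exact_pair_check')
        image_sub = Subscriber(self, Image, '/oak/oak/rgb/image_rect')
        info_sub = Subscriber(self, CameraInfo, '/oak/oak/rgb/camera_info')
        self.sync = TimeSynchronizer([image_sub, info_sub], queue_size=10)
        self.sync.registerCallback(self.on_pair)

    def on_pair(self, image, info):
        stamp = image.header.stamp
        self.get_logger().info(f'synchronized pair at {stamp.sec}.{stamp.nanosec:09d}')


def main():
    rclpy.init()
    rclpy.spin(ExactPairCheck())


if __name__ == '__main__':
    main()
```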
This issue can be reproduced with the depthai ROS driver alone by monitoring:
- image_raw publish rate
- now() - header.stamp over time
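A minimal rclpy sketch of the monitor I mean, assuming the raw topic is /oak/rgb/image_raw (adjust to your namespace):

```python
# Sketch: log the age of each frame (now() - header.stamp) and the average
# publish rate every 10 s. The symptom above is the age climbing towards
# ~1 s while the rate still looks nominal.
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from rclpy.time import Time
from sensor_msgs.msg import Image


class AgeMonitor(Node):
    def __init__(self):
        super().__init__('age_monitor')
        self.count = 0
        self.create_subscription(Image, '/oak/rgb/image_raw',  # assumed topic name
                                 self.on_image, qos_profile_sensor_data)
        self.create_timer(10.0, self.report)

    def on_image(self, msg):
        age = self.get_clock().now() - Time.from_msg(msg.header.stamp)
        self.get_logger().info(f'frame age: {age.nanoseconds / 1e6:.1f} ms')
        self.count += 1

    def report(self):
        self.get_logger().info(f'publish rate: {self.count / 10.0:.1f} Hz')
        self.count = 0


def main():
    rclpy.init()
    rclpy.spin(AgeMonitor())


if __name__ == '__main__':
    main()
```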
Expected behavior
I think that image streams should behave as “always-latest”, or at least the user should be able to configure them to do so:
- Old frames should be dropped when the system cannot keep up.
- Image age should remain bounded (tens of milliseconds), even at low publish rates.
- Long-running operation should not result in unbounded latency growth.
In particular, a fixed 5 Hz configuration should not result in ~1 second image latency after some time.
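Until such a mechanism exists, the closest consumer-side approximation I can think of is an explicit staleness guard; a sketch, assuming a ~100 ms staleness budget is acceptable:

```python
# Sketch: consumer-side staleness guard. KEEP_LAST(1) keeps only the newest
# sample in the subscriber history, and frames older than MAX_AGE_MS are
# dropped before processing. This bounds what downstream code sees, but it
# cannot undo buffering that happens upstream of the subscription.
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy
from rclpy.time import Time
from sensor_msgs.msg import Image

MAX_AGE_MS = 100.0  # arbitrary staleness budget


class FreshFramesOnly(Node):
    def __init__(self):
        super().__init__('fresh_frames_only')
        qos = QoSProfile(depth=1, reliability=ReliabilityPolicy.BEST_EFFORT)
        self.create_subscription(Image, '/oak/rgb/image_raw',  # assumed topic name
                                 self.on_image, qos)

    def on_image(self, msg):
        age_ms = (self.get_clock().now() - Time.from_msg(msg.header.stamp)).nanoseconds / 1e6
        if age_ms > MAX_AGE_MS:
            self.get_logger().warn(f'dropping stale frame ({age_ms:.0f} ms old)')
            return
        # ...process the fresh frame here...


def main():
    rclpy.init()
    rclpy.spin(FreshFramesOnly())


if __name__ == '__main__':
    main()
```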
Screenshots
Not applicable (behavior observed via logs and topic monitoring).
DEPTHAI Config:
```yaml
/**:
  ros__parameters:
    camera:
      i_enable_imu: false
      i_enable_ir: false
      i_ip: '192.168.2.126'
      i_nn_type: none
      i_pipeline_type: RGB
    rgb:
      i_resolution: '1080P'
      i_fps: 5.0
      i_update_ros_base_time_on_ros_msg: true
```
Additional context
My hypothesis: The observed behavior is consistent with RELIABLE QoS + history depth > 1 in the depthai publisher. Frames appear to be buffered and delivered later instead of being dropped, leading to growing end-to-end latency.
From the outside, the pipeline looks healthy (correct Hz), but in reality it is processing increasingly stale data.
Currently, AFAIK there is no way to configure image QoS in the driver (see this issue) to:
- switch to BEST_EFFORT
- force KEEP_LAST(1)
- enforce drop-old-frames semantics
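For illustration, what I would expect such an option to map to on the publisher side, expressed as an rclpy sketch (the driver itself is C++; nothing here reflects an actual driver API):

```python
# Sketch: the publisher QoS this issue asks for. BEST_EFFORT + KEEP_LAST(1)
# means a slow consumer receives only the newest frame instead of a backlog
# of reliably retransmitted old ones.
from rclpy.qos import HistoryPolicy, QoSProfile, ReliabilityPolicy

latest_frame_qos = QoSProfile(
    history=HistoryPolicy.KEEP_LAST,
    depth=1,                                    # keep only the newest sample
    reliability=ReliabilityPolicy.BEST_EFFORT,  # drop rather than retransmit
)

# e.g. node.create_publisher(Image, 'rgb/image_raw', latest_frame_qos)
```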
This severely impacts long-running, real-time robotic applications that rely on ExactTime synchronization and low-latency image streams.
Any guidance on expected behavior, internal buffering, or plans to expose QoS configuration for image publishers would be greatly appreciated.