Handling transforms in Autoware - Managed Transform Buffer proposal #6299
amadeuszsz started this conversation in Design
Introduction
In ROS 2, we obtain transforms via a `tf2_ros::Buffer` paired with a `tf2_ros::TransformListener` in each node.
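A minimal sketch of that conventional pattern (frame names are placeholders):

```cpp
#include <rclcpp/rclcpp.hpp>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_listener.h>

class MyNode : public rclcpp::Node
{
public:
  MyNode()
  : Node("my_node"),
    tf_buffer_(this->get_clock()),
    tf_listener_(tf_buffer_)  // implicitly spawns a helper node subscribed to /tf and /tf_static
  {}

  void use_transform()
  {
    // Throws tf2::TransformException if the transform is not (yet) available
    const auto tf = tf_buffer_.lookupTransform("base_link", "lidar_top", tf2::TimePointZero);
    (void)tf;
  }

private:
  tf2_ros::Buffer tf_buffer_;             // must be declared before the listener
  tf2_ros::TransformListener tf_listener_;
};
```

Every node that follows this pattern carries its own listener, buffer, and subscriptions.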
This might be sufficient for simple applications, but in a complex robotics system like Autoware it can lead to performance bottlenecks. The reason is that this approach requires a `tf2_ros::TransformListener` instance for each node that needs to perform transform lookups. Each `tf2_ros::TransformListener` instance implicitly creates a node with subscriptions to the `/tf` and `/tf_static` topics. This brings overhead: CPU usage increases, and each additional node requires a thread.

Managed Transform Buffer as it is now
The Managed Transform Buffer is a wrapper around the standard ROS 2 `tf2_ros::Buffer` & `tf2_ros::TransformListener`. Some time ago, we started using it mainly in the Sensing component.

Long story short: it reduced CPU utilization for the Sensing component, but when we tried to scale it up to the whole Autoware system, benchmarks revealed a problem. It turned out that the implemented mechanism can increase CPU utilization due to DDS discovery. Therefore, we designed a new mechanism, benchmarked it again, and came here with the results.
Managed Transform Buffer redesign proposal
The new design boosts performance with the same principle in mind as the initial implementation of Managed Transform Buffer: a single `tf2_ros::TransformListener` instance in a Composable Node Container.

The issue related to DDS discovery is mitigated by using a Static Transform Server: an additional node which exposes a service for the Managed Transform Buffer's use. How does it work?
Scenario A - request for static transform: the Managed Transform Buffer asks the Static Transform Server for the transform via its service and caches the result locally, so subsequent lookups need no further server calls.

Scenario B - request for dynamic transform: the Managed Transform Buffer creates a new `tf2_ros::TransformListener` & `tf2_ros::Buffer` or uses an existing one from the Composable Node Container. Next time, the Managed Transform Buffer will directly call `tf2_ros::Buffer` for the transform without requesting the Static Transform Server.

Benchmark
In a sample scenario for Autoware, the redesigned Managed Transform Buffer significantly reduced the number of running `transform_listener_impl_xxxxxxxxxxxx` nodes compared to the previous implementation. No latency changes were observed. A detailed analysis is available in TIER IV INTERNAL LINK.
Effect on Autoware system
The upgraded version of Managed Transform Buffer will require an extra node to run in the background: the Static Transform Server. It is a simple node exposing a service that only the inner implementation of the Managed Transform Buffer uses to request static transforms. Users don't need to interact with the server.
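To make the two scenarios concrete, here is a compact, self-contained sketch of the buffer-side logic in plain C++, with the ROS pieces mocked out. All type and function names here are illustrative, not the actual `managed_transform_buffer` API:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

struct Transform { double x = 0.0, y = 0.0, z = 0.0; };

// Stand-in for the Static Transform Server's service endpoint.
struct StaticTransformServerStub {
  int calls = 0;
  Transform get(const std::string & /*target*/, const std::string & /*source*/) {
    ++calls;  // each call models one ROS service request
    return Transform{1.0, 2.0, 3.0};
  }
};

struct ManagedTransformBufferSketch {
  explicit ManagedTransformBufferSketch(StaticTransformServerStub & server)
  : server_(server) {}

  // Scenario A: a static transform is fetched once via the server's service
  // and cached, so no per-node /tf_static subscription is needed.
  Transform get_static(const std::string & target, const std::string & source) {
    const std::string key = target + "->" + source;
    auto it = cache_.find(key);
    if (it != cache_.end()) {
      return it->second;                       // cache hit: no service traffic
    }
    Transform tf = server_.get(target, source);  // cache miss: ask the server once
    cache_.emplace(key, tf);
    return tf;
  }

  // Scenario B: the first dynamic request creates (or reuses) a shared
  // TransformListener & Buffer; later requests query that buffer directly.
  Transform get_dynamic(const std::string & /*target*/, const std::string & /*source*/) {
    listener_active_ = true;   // models creating/reusing the shared listener
    return Transform{};        // would be tf_buffer.lookupTransform(...)
  }

  bool listener_active() const { return listener_active_; }

private:
  StaticTransformServerStub & server_;
  std::unordered_map<std::string, Transform> cache_;
  bool listener_active_ = false;
};
```

In a full implementation the constructor would additionally wait up to `tf_server_timeout_ms` for the server before falling back; the sketch omits that for brevity.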
This node can be added to one of the launch files:

- `autoware.launch.xml` - always runs regardless of turned-off subcomponents, but does not run when the user launches a single component directly.
- `tier4_autoware_api_component.launch.xml` - risk of not running if the user sets `launch_api` to `false`.
- `tier4_*_component.launch.xml` - a separate server for each component, but the unnecessary overhead of multiple `transform_listener_impl_xxxxxxxxxxxx` nodes.

Apart from the decision about Static Transform Server placement, there is the scenario where the user runs a single node by hand. Therefore, the Managed Transform Buffer will have a suitable initialization mechanism:
- a `tf_server_timeout_ms` ROS parameter, set using Autoware's `global_params.launch.py`, which bounds how long the buffer waits for the Static Transform Server during initialization.

Summary
The Managed Transform Buffer slightly reduces CPU utilization and significantly reduces the number of running `transform_listener_impl_xxxxxxxxxxxx` nodes. The proposed approach for this feature integration would not affect OSS users: the Static Transform Server runs in the background without any user interaction, and the initialization mechanism covers running a single node by hand. On top of that, the Managed Transform Buffer has a handier API which avoids boilerplate code when handling transforms.
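For a feel of the reduced boilerplate, a node-side lookup could shrink to something like the following. This is a hypothetical call shape, not the confirmed API; the include path, method name, and signature are assumptions, so consult the repository linked in this post for the real interface:

```cpp
#include <managed_transform_buffer/managed_transform_buffer.hpp>  // hypothetical include path
#include <geometry_msgs/msg/transform_stamped.hpp>
#include <rclcpp/rclcpp.hpp>

// Hypothetical usage sketch: no Buffer/Listener members, no try/catch boilerplate.
void example(rclcpp::Node & node)
{
  managed_transform_buffer::ManagedTransformBuffer buffer;  // shares one listener per container
  auto tf = buffer.getTransform<geometry_msgs::msg::TransformStamped>(
    "base_link", "lidar_top", node.now());
  if (tf) {
    // use *tf ...
  }
}
```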
The proposed implementation source code - https://github.com/autowarefoundation/managed_transform_buffer/tree/feat/server-based-tf-buffer.
Looking forward to your comments!