|
# deep_core
|
Core abstractions for ML inference in ROS 2 lifecycle nodes.
|
## Overview
|
Provides:

- `TensorPtr`: Smart pointer for tensor data with custom memory allocators
- `DeepNodeBase`: Lifecycle node base class with plugin loading and optional bond support
- Plugin interfaces for backend inference engines and memory management

## Key Components

### TensorPtr

Multi-dimensional tensor smart pointer supporting:

- Custom memory allocators (CPU/GPU/aligned memory)
- View semantics (wrap existing data without copying)
- Standard tensor operations (reshape, data access)

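A minimal construction sketch: the tensor is backed by a custom allocator instead of the default one. `get_custom_allocator()` is a hypothetical stand-in for a backend allocator factory, and the shape and data type are arbitrary:

```cpp
#include <deep_core/types/tensor.hpp>

// get_custom_allocator() is a placeholder for a backend allocator factory.
auto allocator = get_custom_allocator();

// 1x3x224x224 FLOAT32 tensor backed by the custom allocator, no deep copy.
deep_ros::TensorPtr input({1, 3, 224, 224}, deep_ros::DataType::FLOAT32, allocator);
```
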
### DeepNodeBase

Lifecycle node that handles:

- Dynamic backend plugin loading via pluginlib
- Model loading/unloading lifecycle
- Optional bond connections for nav2 integration
- Parameter-driven configuration

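A minimal subclass sketch. The `on_*_impl` hooks override `DeepNodeBase`'s lifecycle callbacks; the node name and the choice to load the backend during configuration are illustrative:

```cpp
#include <deep_core/deep_node_base.hpp>

class MyInferenceNode : public deep_ros::DeepNodeBase
{
public:
  explicit MyInferenceNode(const rclcpp::NodeOptions & options)
  : DeepNodeBase("my_inference_node", options)
  {
  }

protected:
  // Runs inside the base class's configure transition.
  CallbackReturn on_configure_impl(const rclcpp_lifecycle::State & /*state*/) override
  {
    // Illustrative: load the backend selected for this node.
    if (!load_plugin("onnxruntime_cpu")) {
      RCLCPP_ERROR(get_logger(), "Failed to load backend plugin");
      return CallbackReturn::FAILURE;
    }
    return CallbackReturn::SUCCESS;
  }

  CallbackReturn on_activate_impl(const rclcpp_lifecycle::State & /*state*/) override
  {
    // Start inference services here.
    return CallbackReturn::SUCCESS;
  }
};
```

Once a backend is loaded, `deep_ros::TensorPtr output = run_inference(input_tensor);` executes the loaded model on an input tensor.
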
### Plugin Interfaces

Deep ROS abstracts hardware acceleration behind plugin interfaces, so users can switch between hardware accelerators at runtime. The backend plugin interface is as follows:

- `DeepBackendPlugin`: Abstract interface defining a backend plugin. Each backend must implement:
  - `BackendMemoryAllocator`: Backend implementation of memory allocation and management
  - `BackendInferenceExecutor`: Backend implementation of model inference execution

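A sketch of a custom allocator; `my_custom_malloc`/`my_custom_free` are placeholders, and the interface may declare further pure-virtual methods beyond the two shown:

```cpp
#include <cstddef>

#include <deep_core/plugin_interfaces/backend_memory_allocator.hpp>

class MyCustomAllocator : public deep_ros::BackendMemoryAllocator
{
public:
  void * allocate(size_t bytes) override
  {
    // Custom strategy, e.g. GPU, pinned, or aligned host memory.
    return my_custom_malloc(bytes);
  }

  void deallocate(void * ptr) override
  {
    my_custom_free(ptr);
  }

  // Implement any remaining methods required by the interface...
};
```

Backend plugins are then exported to pluginlib through a `plugins.xml` in the plugin package, e.g.:

```xml
<library path="my_backend_plugin_lib">
  <class name="my_backend" type="my_namespace::MyBackendPlugin" base_class_type="deep_ros::DeepBackendPlugin">
    <description>My custom ML backend</description>
  </class>
</library>
```
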
## Configuration

All nodes inheriting from `deep_ros::DeepNodeBase` expose the following parameters.

Required parameters:

- `Backend.plugin`: Plugin name (e.g., "onnxruntime_cpu")
- `model_path`: Path to the model file

Optional parameters:

- `Bond.enable`: Enable bond connections (default: false)
- `Bond.bond_timeout`: Bond timeout in seconds (default: 4.0)
- `Bond.bond_heartbeat_period`: Heartbeat period in seconds (default: 0.1)
