Commit 52d43d7

doc changes

1 parent c7c4bc5 commit 52d43d7


3 files changed (+109, -201 lines)


deep_core/DEVELOPING.md

Lines changed: 68 additions & 0 deletions
# deep_core developing doc

## Design Principles

- TensorPtr is a smart pointer, not a traditional tensor class; it points to data in memory allocated by the backend memory allocator
- DeepNodeBase handles plugin loading automatically via parameters
- All backends are plugins - no hard framework dependencies
- Memory allocators enable zero-copy GPU integration
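As a concrete sketch of the first principle, this is how the old README (removed further down in this diff) constructed a tensor over allocator-owned memory. `get_custom_allocator()` is a placeholder for however your backend exposes its allocator, and the include path follows the package layout:

```cpp
#include <deep_core/types/tensor.hpp>

// Create a tensor backed by a custom allocator; the allocator, not TensorPtr, owns the memory.
auto allocator = get_custom_allocator();  // placeholder accessor
deep_ros::TensorPtr input({1, 3, 224, 224}, deep_ros::DataType::FLOAT32, allocator);
```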
## Usage

### CMakeLists.txt

```cmake
find_package(deep_core REQUIRED)

target_link_libraries(${YOUR_LIBRARY}
  deep_core::deep_core_lib
)
```
### Creating an Inference Node

**Inherit from `DeepNodeBase`** to get automatic plugin loading and model management.

Key lifecycle callbacks to override:

- `on_configure_impl()` - Set up subscribers, publishers, services
- `on_activate_impl()` - Start processing (DeepNodeBase handles plugin/model loading)
- `on_deactivate_impl()` - Stop processing
- `on_cleanup_impl()` - Clean up resources

**DeepNodeBase automatically handles:**

- Loading the backend plugin based on the `Backend.plugin` parameter
- Loading the model based on the `model_path` parameter
- Bond connections if `Bond.enable` is true
- Calling your `*_impl()` methods after the base functionality

**Your node just needs to:**

- Set up ROS interfaces (topics, services, actions)
- Process incoming data using `run_inference(TensorPtr)`
- Handle your specific business logic

Don't forget: `RCLCPP_COMPONENTS_REGISTER_NODE(your_namespace::YourNode)`
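A minimal node sketch, adapted from the inference-node example in the old README (shown further down in this diff); topic plumbing is omitted, and `run_inference()` is the method the old README shows for processing tensors:

```cpp
#include <deep_core/deep_node_base.hpp>
#include <rclcpp_components/register_node_macro.hpp>

namespace your_namespace
{

class YourNode : public deep_ros::DeepNodeBase
{
public:
  explicit YourNode(const rclcpp::NodeOptions & options)
  : DeepNodeBase("your_node", options)
  {
  }

protected:
  CallbackReturn on_configure_impl(const rclcpp_lifecycle::State & /*state*/) override
  {
    // Set up subscribers, publishers, and services here.
    return CallbackReturn::SUCCESS;
  }

  CallbackReturn on_activate_impl(const rclcpp_lifecycle::State & /*state*/) override
  {
    // The base class has already loaded the plugin and model at this point;
    // incoming tensors can be processed with run_inference(input_tensor).
    return CallbackReturn::SUCCESS;
  }
};

}  // namespace your_namespace

RCLCPP_COMPONENTS_REGISTER_NODE(your_namespace::YourNode)
```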
### Creating a Backend Plugin

1. **Implement three classes inheriting from:**

- `BackendMemoryAllocator` - Handle memory allocation/deallocation for your hardware
- `BackendInferenceExecutor` - Load models and run inference in your ML framework
- `DeepBackendPlugin` - Return instances of your allocator and executor

Key methods to implement:

- Allocator: `allocate()`, `deallocate()`, `allocator_type()`
- Executor: `load_model()`, `run_inference()`, `unload_model()`, `supported_model_formats()`
- Plugin: `backend_name()`, `get_allocator()`, `get_inference_executor()`

Don't forget: `PLUGINLIB_EXPORT_CLASS(YourPlugin, deep_ros::DeepBackendPlugin)`
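For the allocator, the old README carried a minimal sketch (`my_custom_malloc`/`my_custom_free` are placeholders for your hardware's allocation calls; `allocator_type()` and the remaining pure-virtual methods are elided):

```cpp
#include <deep_core/plugin_interfaces/backend_memory_allocator.hpp>

class MyCustomAllocator : public deep_ros::BackendMemoryAllocator
{
public:
  void * allocate(size_t bytes) override
  {
    // Custom allocation strategy (e.g., GPU memory, aligned allocation)
    return my_custom_malloc(bytes);  // placeholder
  }

  void deallocate(void * ptr) override
  {
    my_custom_free(ptr);  // placeholder
  }

  // allocator_type() and the other required methods go here...
};
```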
2. **Create `plugins.xml`:**

```xml
<library path="my_backend_lib">
  <class name="my_backend" type="MyBackendPlugin" base_class_type="deep_ros::DeepBackendPlugin">
    <description>My custom backend</description>
  </class>
</library>
```
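3. **Export the plugin in your `package.xml`** so pluginlib can discover it. The tag below is carried over from the old README's ONNX Runtime example, so substitute your own package name:

```xml
<export>
  <deep_ort_backend_plugin plugin="${prefix}/plugins.xml" />
</export>
```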

deep_core/README.md

Lines changed: 41 additions & 157 deletions
Removed:

# deep_core

Core package for the Deep ROS inference framework providing abstract interfaces, tensor operations, and plugin architecture for machine learning inference in ROS 2.

## Overview

`deep_core` provides the foundational components for a modular, high-performance ML inference system:

- **Plugin interfaces** for backend inference engines and memory allocators
- **Lifecycle node base class** with optional bond timers
- **Generic `TensorPtr` type** to interface large tensor data between ROS msgs and backend hardware accelerators without deep-copy operations

## Architecture

### Core Components

- **`TensorPtr`**: Smart pointer for multi-dimensional tensor data with pluggable memory allocators
- **`DeepNodeBase`**: ROS 2 lifecycle node base class for inference services
- **Plugin interfaces**: Abstract base classes for backend implementations
  - `BackendMemoryAllocator`: Custom memory allocation strategies
  - `BackendInferenceExecutor`: ML framework inference execution
  - `DeepBackendPlugin`: Combined backend plugin interface

### Memory Management

The tensor system supports custom memory allocators for optimal performance:

```cpp
// Create tensor with custom allocator
auto allocator = get_custom_allocator();
deep_ros::TensorPtr input({1, 3, 224, 224}, deep_ros::DataType::FLOAT32, allocator);
```

### Plugin Architecture

Backend implementations are loaded dynamically using ROS 2 pluginlib:

```cpp
// Load backend plugin
if (!load_plugin("onnxruntime_cpu")) {
  RCLCPP_ERROR(get_logger(), "Failed to load backend plugin");
}

// Run inference
deep_ros::TensorPtr output = run_inference(input_tensor);
```

## Usage

### Creating an Inference Node

```cpp
#include <deep_core/deep_node_base.hpp>

class MyInferenceNode : public deep_ros::DeepNodeBase
{
public:
  MyInferenceNode(const rclcpp::NodeOptions & options)
  : DeepNodeBase("my_inference_node", options)
  {
  }

protected:
  CallbackReturn on_configure_impl(const rclcpp_lifecycle::State & state) override
  {
    // Custom configuration logic
    return CallbackReturn::SUCCESS;
  }

  CallbackReturn on_activate_impl(const rclcpp_lifecycle::State & state) override
  {
    // Start inference services
    return CallbackReturn::SUCCESS;
  }
};
```

### Custom Memory Allocator

```cpp
class MyCustomAllocator : public deep_ros::BackendMemoryAllocator
{
public:
  void * allocate(size_t bytes) override
  {
    // Custom allocation strategy (e.g., GPU memory, aligned allocation)
    return my_custom_malloc(bytes);
  }

  void deallocate(void * ptr) override
  {
    my_custom_free(ptr);
  }

  // Implement other required methods...
};
```

## Package Structure

```
deep_core/
├── include/deep_core/
│   ├── deep_node_base.hpp                 # Lifecycle node base class
│   ├── types/
│   │   ├── tensor.hpp                     # TensorPtr class and data types
│   │   └── data_type.hpp                  # Enum for tensor data types
│   └── plugin_interfaces/
│       ├── backend_memory_allocator.hpp   # Memory allocator interface
│       ├── backend_inference_executor.hpp # Inference executor interface
│       └── deep_backend_plugin.hpp        # Combined plugin interface
├── src/
│   ├── deep_node_base.cpp                 # Lifecycle node implementation
│   └── tensor.cpp                         # TensorPtr operations
└── CMakeLists.txt
```

## Dependencies

- **ROS 2**: rclcpp, rclcpp_lifecycle
- **pluginlib**: Dynamic plugin loading
- **Standard C++17**: Modern C++ features

## Supported Data Types

- `FLOAT32`: 32-bit floating point
- `INT32`: 32-bit signed integer
- `INT64`: 64-bit signed integer
- `UINT8`: 8-bit unsigned integer

## Backend Plugin Development

To create a new backend plugin:

1. Implement the three interfaces:
   - `BackendMemoryAllocator`
   - `BackendInferenceExecutor`
   - `DeepBackendPlugin`

2. Create a `plugins.xml` file:

```xml
<library path="my_backend_plugin_lib">
  <class name="my_backend" type="my_namespace::MyBackendPlugin" base_class_type="deep_ros::DeepBackendPlugin">
    <description>My custom ML backend</description>
  </class>
</library>
```

3. Export the plugin in your `package.xml`:

```xml
<export>
  <deep_ort_backend_plugin plugin="${prefix}/plugins.xml" />
</export>
```

## Examples

See the [`deep_ort_backend_plugin`](../deep_ort_backend_plugin/) package for a complete ONNX Runtime backend implementation.

## License

Licensed under the Apache License, Version 2.0.

Added:

# deep_core

Core abstractions for ML inference in ROS 2 lifecycle nodes.

## Overview

Provides:

- `TensorPtr`: Smart pointer for tensor data with custom memory allocators
- `DeepNodeBase`: Lifecycle node base class with plugin loading and optional bond support
- Plugin interfaces for backend inference engines and memory management

## Key Components

### TensorPtr

Multi-dimensional tensor smart pointer supporting:

- Custom memory allocators (CPU/GPU/aligned memory)
- View semantics (wrap existing data without copying)
- Standard tensor operations (reshape, data access)

### DeepNodeBase

Lifecycle node that handles:

- Dynamic backend plugin loading via pluginlib
- Model loading/unloading lifecycle
- Optional bond connections for nav2 integration
- Parameter-driven configuration

### Plugin Interfaces

Deep_ROS abstracts hardware acceleration interfaces away as plugins, giving users the freedom to switch between different hardware accelerators at runtime. The backend plugin interface is as follows:

- `DeepBackendPlugin`: Abstract interface for defining a backend plugin. Must implement:
  - `BackendMemoryAllocator`: Backend implementation for memory allocation and management
  - `BackendInferenceExecutor`: Backend implementation for running model inference

## Configuration

All nodes inheriting from `deep_ros::DeepNodeBase` have the following settable parameters.

Required parameters:

- `Backend.plugin`: Plugin name (e.g., "onnxruntime_cpu")
- `model_path`: Path to model file

Optional parameters:

- `Bond.enable`: Enable bond connections (default: false)
- `Bond.bond_timeout`: Bond timeout in seconds (default: 4.0)
- `Bond.bond_heartbeat_period`: Heartbeat period in seconds (default: 0.1)
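For illustration, these parameters map onto standard rclcpp parameter overrides; a minimal sketch (the model path and values are placeholders):

```cpp
#include <rclcpp/rclcpp.hpp>

// Sketch: build NodeOptions carrying the parameters documented above.
rclcpp::NodeOptions make_deep_node_options()
{
  rclcpp::NodeOptions options;
  options.parameter_overrides({
    rclcpp::Parameter("Backend.plugin", "onnxruntime_cpu"),  // required: backend plugin to load
    rclcpp::Parameter("model_path", "/path/to/model.onnx"),  // required: placeholder path
    rclcpp::Parameter("Bond.enable", true),                  // optional, default false
    rclcpp::Parameter("Bond.bond_timeout", 4.0),             // optional, seconds
    rclcpp::Parameter("Bond.bond_heartbeat_period", 0.1),    // optional, seconds
  });
  return options;
}
```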

deep_core/include/deep_core/deep_core.hpp

Lines changed: 0 additions & 44 deletions
This file was deleted.
