Commit 6e36713

adding more information in readme for initial collaborative prototyping
1 parent 3f95427 commit 6e36713

File tree

1 file changed: +78 -0 lines changed

README.md

Lines changed: 78 additions & 0 deletions
@@ -1,3 +1,81 @@
# Deep ROS

Full ML infrastructure pipeline for ROS2. Includes model-agnostic inference node containers for quick deployment and testing of ML models, as well as a sample model farm for building, training, evaluating, and quantizing neural networks.
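
As a rough sketch of the intended workflow, an inference node would be started with the standard ROS2 CLI once a parameter file is in place (the package and executable names here are illustrative placeholders, not confirmed by this README):

```bash
# Hypothetical invocation: "deep_ros" / "sample_inference_node" are placeholder names.
# The parameter file selects the backend plugin and model to load.
ros2 run deep_ros sample_inference_node --ros-args --params-file inference_params.yaml
```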
## Installation

(TODO) Add the base library into the ROS buildfarm.

```bash
sudo apt install ros-${ROS_DISTRO}-deep-ros
```
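
Once the package is actually published to the buildfarm, the install can be verified with a quick check (assuming the ROS2 environment is sourced and the ROS package name is `deep_ros`):

```bash
# Should print the package name if the apt install succeeded
ros2 pkg list | grep deep_ros
```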
## Backend Plugin Installation

To accommodate different hardware accelerators, Deep ROS has a library of installable plugins that handle model loading, memory allocation, and inference for a specific hardware accelerator.

To configure the backend your node should run with, specify the backend in the node's parameters:

```yaml
sample_inference_node:
  ros__parameters:
    # Backend configuration - TensorRT
    Backend:
      plugin: "onnxruntime_gpu"
      device_id: 0
      execution_provider: "tensorrt"
```

Each backend has its own subset of parameters.
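
Individual backend parameters can also be overridden at launch with the standard ROS2 parameter syntax, using dot-separated names for the nested fields (node and package names are again illustrative):

```bash
# Override a single nested backend parameter from the command line
ros2 run deep_ros sample_inference_node --ros-args -p Backend.plugin:=onnxruntime_cpu
```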
### `deep_ort_backend_plugin`

ONNXRuntime CPU backend. This comes with the base library.

In your `package.xml`:

```xml
<exec_depend>deep_ort_backend_plugin</exec_depend>
```

Specify it in your parameter file:

```yaml
sample_inference_node:
  ros__parameters:
    # Backend configuration - ONNXRuntime CPU
    Backend:
      plugin: "onnxruntime_cpu"
```
### `deep_ort_gpu_backend_plugin`

NVIDIA libraries must be installed separately alongside this plugin. Once installed, `deep_ort_gpu_backend_plugin` will automatically link to the NVIDIA libraries at runtime.

#### Prerequisites

Currently, `deep_ort_gpu_backend_plugin` supports the following NVIDIA configurations:

**TensorRT**: 10.9
**CUDA and cuDNN**: 12.0 to 12.8

List of compatible `nvidia/cuda` images:
- "12.8.0-cudnn-runtime-ubuntu22.04"
- "12.6.2-cudnn-runtime-ubuntu22.04"
- "12.5.1-cudnn-runtime-ubuntu22.04"
- "12.4.1-cudnn-runtime-ubuntu22.04"
- "12.3.2-cudnn-runtime-ubuntu22.04"
- "12.2.2-cudnn8-runtime-ubuntu22.04"
- "12.1.1-cudnn8-runtime-ubuntu22.04"
- "12.0.1-cudnn8-runtime-ubuntu22.04"
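
For development, one of these images can be used directly as a base; for example, a throwaway container can be started like this (the `--gpus all` flag requires the NVIDIA Container Toolkit):

```bash
# Start an interactive container from a compatible CUDA/cuDNN runtime image
docker run --rm --gpus all -it nvidia/cuda:12.8.0-cudnn-runtime-ubuntu22.04 bash
```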
To download the minimal libraries needed for TensorRT:

```bash
curl -fsSL -o cuda-keyring_1.1-1_all.deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb \
  && dpkg -i cuda-keyring_1.1-1_all.deb \
  && apt-get update && apt-get install -y --no-install-recommends \
     libnvinfer10=10.9.0.34-1+cuda12.8 \
     libnvinfer-plugin10=10.9.0.34-1+cuda12.8 \
     libnvonnxparsers10=10.9.0.34-1+cuda12.8 \
  && rm cuda-keyring_1.1-1_all.deb
```

Note that the `+cuda12.8` package suffix is compatible with all CUDA versions `12.0` through `12.8`.
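
To double-check which TensorRT runtime packages were installed:

```bash
# List the pinned TensorRT packages pulled in by the command above
dpkg -l | grep -E 'libnvinfer|libnvonnxparsers'
```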
