The node supports explicit provider selection with fail-fast behavior. If the specified provider is unavailable or fails to initialize, the node will immediately throw an error (no silent fallbacks).
The node uses a plugin-based architecture for backend selection. You must specify which backend plugin to use via the `Backend.plugin` parameter.

**Available Plugins:**

- `onnxruntime_cpu` - CPU backend (always available)
- `onnxruntime_gpu` - GPU backend supporting CUDA and TensorRT (requires CUDA)

For the GPU plugin, you can specify the execution provider via `Backend.execution_provider`.
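As a minimal sketch (the parameter names come from this README; grouping them into a Python dict is purely illustrative), selecting the GPU backend might look like this:

```python
# Backend selection only; per the fail-fast behavior described above, the node
# raises an error if the chosen provider cannot be initialized.
backend_params = {
    'Backend.plugin': 'onnxruntime_gpu',        # or 'onnxruntime_cpu'
    'Backend.execution_provider': 'tensorrt',   # or 'cuda'; only read by the GPU plugin
    'Backend.device_id': 0,                     # GPU device index
}
```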
## Parameters
**Important:** All string parameters must be specified in **lowercase**. The node performs direct string comparisons and does not normalize case. For example (a combined sketch follows this list):

- `model.bbox_format` must be `"cxcywh"`, `"xyxy"`, or `"xywh"` (not `"CXCYWH"`)
- `preprocessing.normalization_type` must be `"imagenet"`, `"scale_0_1"`, `"custom"`, or `"none"`
- `preprocessing.resize_method` must be `"letterbox"`, `"resize"`, `"crop"`, or `"pad"`
- `postprocessing.score_activation` must be `"sigmoid"`, `"softmax"`, or `"none"`
- `postprocessing.class_score_mode` must be `"all_classes"` or `"single_confidence"`
- `Backend.plugin` must be `"onnxruntime_cpu"` or `"onnxruntime_gpu"`
- `Backend.execution_provider` must be `"cuda"` or `"tensorrt"` (for the GPU plugin)
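As an illustration (values chosen arbitrarily from the accepted options above), a correctly lowercased set of these string parameters could look like this:

```python
# Hypothetical parameter values; keys are the names listed above and every
# value is spelled in lowercase, exactly as the node expects.
string_params = {
    'model.bbox_format': 'cxcywh',                    # 'CXCYWH' would be rejected
    'preprocessing.normalization_type': 'imagenet',
    'preprocessing.resize_method': 'letterbox',
    'postprocessing.score_activation': 'sigmoid',
    'postprocessing.class_score_mode': 'all_classes',
}
```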
### Required Parameters
- **`model_path`** (string): Absolute path to ONNX model file (e.g., `/workspaces/deep_ros/yolov8m.onnx`).
- **`input_topic`** (string): MultiImage topic name to subscribe to.
- **`min_batch_size`** (int, default: 1): Minimum images before processing.
- **`max_batch_size`** (int, default: 3): Maximum images per batch.
- **`Backend.plugin`** (string, required): Backend plugin name (`"onnxruntime_cpu"` or `"onnxruntime_gpu"`).
- **`Backend.execution_provider`** (string, default: `"tensorrt"`): Execution provider for the GPU plugin (`"cuda"` or `"tensorrt"`). Only used with the `onnxruntime_gpu` plugin.
- **`Backend.device_id`** (int, default: 0): GPU device ID (for CUDA/TensorRT).
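Putting these together, a launch-file sketch (package, executable, and topic names are placeholders, not taken from this repository) might look like this:

```python
# Hypothetical ROS 2 launch file; only the parameter names and example values
# come from this README, everything else is a placeholder.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='your_detection_package',     # placeholder package name
            executable='your_detection_node',     # placeholder executable name
            parameters=[{
                'model_path': '/workspaces/deep_ros/yolov8m.onnx',
                'input_topic': '/camera/multi_image',   # placeholder topic
                'min_batch_size': 1,
                'max_batch_size': 3,
                # Nested dict is flattened by launch_ros into Backend.plugin, etc.
                'Backend': {
                    'plugin': 'onnxruntime_gpu',
                    'execution_provider': 'tensorrt',
                    'device_id': 0,
                },
            }],
        ),
    ])
```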