DL Streamer provides support for OpenVINO™ custom operations through the `ov-extension-lib` parameter. This feature enables the use of models with custom operations that are not natively supported by OpenVINO™ Runtime, by loading extension libraries that define these custom operations.

Custom operations may be required in two scenarios:
1. **New or rarely used operations** - Operations from frameworks (TensorFlow, PyTorch, ONNX, etc.) that are not yet supported in OpenVINO™
2. **User-defined operations** - Custom operations created specifically for a model using framework extension capabilities
The `ov-extension-lib` parameter is available in the following DL Streamer elements:

- `gvadetect` - Object detection
- `gvaclassify` - Object classification
- `gvainference` - Generic inference
## Prerequisites
Before using custom operations, you need:

1. **OpenVINO™ Extension Library** - A compiled `.so` file (on Linux) containing the implementation of custom operations
2. **Model with Custom Operations** - An OpenVINO™ IR model that uses the custom operations defined in the extension library
For information on creating OpenVINO™ extension libraries, refer to the [OpenVINO™ Extensibility documentation](https://docs.openvino.ai/2025/documentation/openvino-extensibility.html).
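As a rough sketch, such an extension is compiled into a shared library. The source file name, output name, and the `pkg-config` usage below are illustrative assumptions and depend on your project layout and OpenVINO™ installation:

```sh
# Hypothetical build of an extension library; names and flags are placeholders.
# custom_op.cpp is assumed to register the custom operation via the OpenVINO™
# extension API described in the documentation linked above.
g++ -shared -fPIC -std=c++17 custom_op.cpp -o libcustom_ops.so \
    $(pkg-config --cflags --libs openvino)
```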
## Usage

### Basic Usage
To use a model with custom operations, specify the path to the extension library using the `ov-extension-lib` parameter:
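A minimal sketch of such a pipeline (the video, model, and extension library paths are placeholders):

```sh
gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
  gvadetect model=model_with_custom_ops.xml ov-extension-lib=/path/to/libcustom_ops.so ! \
  queue ! gvawatermark ! videoconvert ! autovideosink
```

The same parameter applies unchanged to `gvaclassify` and `gvainference`.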
**`libraries/dl-streamer/docs/source/elements/gvaclassify.md`** (+4 -1)

```diff
@@ -119,6 +119,9 @@ no-block : (Experimental) Option to help maintain frames per second o
 object-class      : Filter for Region of Interest class label on this element input
                     flags: readable, writable
                     String. Default: null
+ov-extension-lib  : Path to the .so file defining custom OpenVINO operations.
+                    flags: readable, writable
+                    String. Default: null
 parent            : The parent of the object
                     flags: readable, writable
                     Object of type "GstObject"
@@ -150,7 +153,7 @@ reshape-width : Width to which the network will be reshaped.
 scale-method      : Scale method to use in pre-preprocessing before inference. Only default and scale-method=fast (VAAPI based) supported in this element
                     flags: readable, writable
                     String. Default: null
-scheduling-policy : Scheduling policy across streams sharing same model instance: throughput (select first incoming frame), latency (select frames with earliest presentation time)
+scheduling-policy : Scheduling policy across streams sharing same model instance: throughput (select first incoming frame), latency (select frames with earliest presentation time out of the streams sharing same model-instance-id; recommended batch-size less than or equal to the number of streams)
```
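To confirm the new property is present in your build, you can inspect the element (shown here for `gvaclassify`; the other elements work the same way):

```sh
gst-inspect-1.0 gvaclassify | grep -A 2 ov-extension-lib
```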
**`libraries/dl-streamer/docs/source/elements/gvadetect.md`** (+5 -2)

```diff
@@ -120,6 +120,9 @@ Element Properties:
 object-class      : Filter for Region of Interest class label on this element input
                     flags: readable, writable
                     String. Default: null
+ov-extension-lib  : Path to the .so file defining custom OpenVINO operations.
+                    flags: readable, writable
+                    String. Default: null
 parent            : The parent of the object
                     flags: readable, writable
                     Object of type "GstObject"
@@ -144,9 +147,9 @@ Element Properties:
 scale-method      : Scale method to use in pre-preprocessing before inference. Only default and scale-method=fast (VAAPI based) supported in this element
                     flags: readable, writable
                     String. Default: null
-scheduling-policy : Scheduling policy across streams sharing same model instance: throughput (select first incoming frame), latency (select frames with earliest presentation time)
+scheduling-policy : Scheduling policy across streams sharing same model instance: throughput (select first incoming frame), latency (select frames with earliest presentation time out of the streams sharing same model-instance-id; recommended batch-size less than or equal to the number of streams)
                     flags: readable, writable
-                    String. Default: null Write only
+                    String. Default: null
 threshold         : Threshold for detection results. Only regions of interest with confidence values above the threshold will be added to the frame
```
**`libraries/dl-streamer/docs/source/elements/gvainference.md`** (+36 -28)

````diff
@@ -2,9 +2,9 @@

 Runs deep learning inference using any model with RGB or BGR input.

-```sh
+```none
 Pad Templates:
-  SRC template: 'src'
+  SINK template: 'sink'
   Availability: Always
   Capabilities:
    video/x-raw
@@ -13,7 +13,7 @@ Pad Templates:
    height: [ 1, 2147483647 ]
    framerate: [ 0/1, 2147483647/1 ]
   video/x-raw(memory:DMABuf)
-   format: { (string)RGBA, (string)I420 }
+   format: { (string)DMA_DRM }
    width: [ 1, 2147483647 ]
    height: [ 1, 2147483647 ]
    framerate: [ 0/1, 2147483647/1 ]
@@ -28,7 +28,7 @@ Pad Templates:
    height: [ 1, 2147483647 ]
    framerate: [ 0/1, 2147483647/1 ]

-  SINK template: 'sink'
+  SRC template: 'src'
   Availability: Always
   Capabilities:
    video/x-raw
@@ -37,7 +37,7 @@ Pad Templates:
    height: [ 1, 2147483647 ]
    framerate: [ 0/1, 2147483647/1 ]
   video/x-raw(memory:DMABuf)
-   format: { (string)RGBA, (string)I420 }
+   format: { (string)DMA_DRM }
    width: [ 1, 2147483647 ]
    height: [ 1, 2147483647 ]
    framerate: [ 0/1, 2147483647/1 ]
@@ -56,8 +56,10 @@ Element has no clocking capabilities.
 Element has no URI handling capabilities.

 Pads:
-  SRC: 'src'
   SINK: 'sink'
+    Pad Template: 'sink'
+  SRC: 'src'
+    Pad Template: 'src'

 Element Properties:
 batch-size        : Number of frames batched together for a single inference. If the batch-size is 0, then it will be set by default to be optimal for the device. Not all models support batching. Use model optimizer to ensure that the model has batching support.
@@ -66,12 +68,15 @@ Element Properties:
 cpu-throughput-streams: Deprecated. Use ie-config=CPU_THROUGHPUT_STREAMS=<number-streams> instead
+custom-postproc-lib : Path to the .so file defining custom model output converter. The library must implement the Convert function: void Convert(GstTensorMeta *outputTensors, const GstStructure *network, const GstStructure *params, GstAnalyticsRelationMeta *relationMeta);
+                    flags: readable, writable
+                    String. Default: null
+custom-preproc-lib : Path to the .so file defining custom input image pre-processing
+                    flags: readable, writable
+                    String. Default: null
 device            : Target device for inference. Please see OpenVINO™ Toolkit documentation for list of supported devices.
                     flags: readable, writable
                     String. Default: "CPU"
-device-extensions : Comma separated list of KEY=VALUE pairs specifying the Inference Engine extension for a device
-                    flags: readable, writable
-                    String. Default: ""
 gpu-throughput-streams: Deprecated. Use ie-config=GPU_THROUGHPUT_STREAMS=<number-streams> instead
@@ ... @@
                     (0): full-frame - Perform inference for full frame
                     (1): roi-list - Perform inference for roi list
 labels            : Array of object classes. It could be set as the following example: labels=<label1,label2,label3>
                     flags: readable, writable
-                    String. Default: ""
+                    String. Default: null
 labels-file       : Path to .txt file containing object classes (one per line)
                     flags: readable, writable
                     String. Default: null
 model             : Path to inference model network file
                     flags: readable, writable
-                    String. Default: ""
-model-instance-id : Identifier for sharing resources between inference elements of the same type. Elements with the instance-id will share model and other properties. If not specified, a unique identifier will be generated.
-                    flags: readable, writable
-                    String. Default: ""
-scheduling-policy : Scheduling policy across streams sharing same model instance: throughput (select first incoming frame), latency (select frames with earliest presentation time).
+                    String. Default: null
+model-instance-id : Identifier for sharing a loaded model instance between elements of the same type. Elements with the same model-instance-id will share all model and inference engine related properties
                     flags: readable, writable
-                    String. Default: "throughput"
+                    String. Default: null
 model-proc        : Path to JSON file with description of input/output layers pre-processing/post-processing
                     flags: readable, writable
-                    String. Default: ""
+                    String. Default: null
 name              : The name of the object
-                    flags: readable, writable, 0x2000
-                    String. Default: "gvainferencebin0"
+                    flags: readable, writable
+                    String. Default: "gvainference0"
 nireq             : Number of inference requests
                     flags: readable, writable
                     Unsigned Integer. Range: 0 - 1024 Default: 0
 no-block          : (Experimental) Option to help maintain frames per second of incoming stream. Skips inference on an incoming frame if all inference requests are currently processing outstanding frames
-                    flags: readable, writable
+                    flags: readable, writable, deprecated
                     Boolean. Default: false
 object-class      : Filter for Region of Interest class label on this element input
                     flags: readable, writable
-                    String. Default: ""
+                    String. Default: null
+ov-extension-lib  : Path to the .so file defining custom OpenVINO operations.
+                    flags: readable, writable
+                    String. Default: null
 parent            : The parent of the object
-                    flags: readable, writable, 0x2000
+                    flags: readable, writable
                     Object of type "GstObject"
-pre-process-backend : Select a pre-processing method (color conversion, resize and crop), one of 'ie', 'opencv', 'va', 'va-surface-sharing'. If not set, it will be selected automatically: 'va'for VAMemory and DMABuf, 'ie'for SYSTEM memory.
+pre-process-backend : Select a pre-processing method (color conversion, resize and crop), one of 'ie', 'opencv', 'va', 'va-surface-sharing', 'vaapi', 'vaapi-surface-sharing'. If not set, it will be selected automatically: 'va' for VAMemory and DMABuf, 'ie' for SYSTEM memory.
                     flags: readable, writable
                     String. Default: ""
-qos               : Handle Quality-of-Service events
-                    flags: readable, writable
-                    Boolean. Default: false
 pre-process-config : Comma separated list of KEY=VALUE parameters for image processing pipeline configuration
                     flags: readable, writable
                     String. Default: ""
+qos               : Handle Quality-of-Service events
+                    flags: readable, writable
+                    Boolean. Default: false
 reshape           : If true, model input layer will be reshaped to resolution of input frames (no resize operation before inference). Note: this feature has limitations, not all network supports reshaping.
@@ ... @@
 scale-method      : Scale method to use in pre-preprocessing before inference. Only default and scale-method=fast (VAAPI based) supported in this element
                     flags: readable, writable
-                    String. Default: null Write only
+                    String. Default: null
+scheduling-policy : Scheduling policy across streams sharing same model instance: throughput (select first incoming frame), latency (select frames with earliest presentation time out of the streams sharing same model-instance-id; recommended batch-size less than or equal to the number of streams)
````
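The revised `scheduling-policy` wording ties latency scheduling to streams sharing the same `model-instance-id`. As an illustrative sketch (input files and model path are placeholders), two streams sharing one detector instance, with a batch size equal to the number of streams as recommended:

```sh
gst-launch-1.0 \
  filesrc location=cam0.mp4 ! decodebin ! gvadetect model=model.xml \
    model-instance-id=det0 scheduling-policy=latency batch-size=2 ! fakesink \
  filesrc location=cam1.mp4 ! decodebin ! gvadetect model=model.xml \
    model-instance-id=det0 scheduling-policy=latency batch-size=2 ! fakesink
```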