# How to Configure DLStreamer Video Pipeline
## Video Pipeline Configuration in UI Camera Calibration Page (in Kubernetes Deployment)
When Intel® SceneScape is deployed in a Kubernetes environment, you can configure DLStreamer video pipelines directly through the camera calibration web interface. This provides a user-friendly way to generate and customize GStreamer pipelines for your cameras without manually editing configuration files.
#### Core Pipeline Fields
- **Camera (Video Source)**: Specifies the video source command. Supported formats:
  - File sources: `file://video.ts` (relative to video folder).
- **Camera Chain**: Defines the sequence or combination of AI models to chain together in the pipeline using their short identifiers (e.g., "retail"). Models can be chained serially (one after another). For details on chaining syntax, available models, and usage examples, see the [Model Chaining](#model-chaining) section below.
- **Camera Pipeline**: The generated or custom GStreamer pipeline string.
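
To give a sense of what ends up in the **Camera Pipeline** field, a generated pipeline for a file source with a single detection model might look roughly like the sketch below. The model path and element parameters are illustrative placeholders, not the exact generator output:

```
filesrc location=/videos/video.ts ! decodebin ! videoconvert ! \
  gvadetect model=/models/person-detection-retail.xml device=CPU ! \
  gvametaconvert ! gvametapublish ! fakesink
```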
#### Model Chaining
Model chaining allows you to combine multiple AI models in a single pipeline to create more sophisticated video analytics workflows. For example, you can chain a person detection model with a re-identification model to first detect people in the video and then generate unique identifiers for tracking.
##### Prerequisites
By default, only a limited set of models is downloaded during Helm chart installation, which limits the possibilities of model chaining. To enable the full set of models:
1. Set `initModels.modelType=all` in `kubernetes/scenescape-chart/values.yaml`.
2. Configure desired model precisions (e.g., `initModels.modelPrecisions=FP16`) in `kubernetes/scenescape-chart/values.yaml`.
3. (Re)deploy SceneScape to download the additional models.
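
Steps 1 and 2 translate into a `values.yaml` fragment along these lines (the key nesting is assumed from the dotted key paths above):

```
initModels:
  modelType: all          # download the full model set instead of the default subset
  modelPrecisions: FP16   # model precision(s) to download
```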
##### Chaining Syntax
- **Serial chaining**: Use the `+` operator to chain models sequentially (e.g., `retail+reid`).
- **Device specification**: Optionally specify the inference device using `=` (e.g., `retail=GPU`). See the [DLStreamer documentation](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dl-streamer/dev_guide/gpu_device_selection.html) for the GPU device selection convention.
- **Default device**: If no device is specified, CPU is used as the default.
> **Note**: On systems with Intel GPU (either integrated or discrete), it is highly recommended to run both the decoding and the inference on GPU, so that other Intel® SceneScape services can fully benefit from available CPU cores. GPU inference typically provides better performance for complex models.

**Example**: `retail=GPU+reid=GPU` runs person detection on GPU, then feeds the results to person re-identification also running on GPU.
##### Available Models
Use the following short names to refer to each model in the chain:

| Category | Full Model Name | Short Name | Description |
| -------- | --------------- | ---------- | ----------- |
| **Text Analysis** | horizontal-text-detection-0001 | textdetect | Text detection |
| | text-recognition-0012 | textrec | Text recognition |
| | text-recognition-resnet-fc | textresnet | ResNet-based text recognition |

##### Common Chaining Patterns
**Person Analytics Workflows:**
79
+
80
+
```
81
+
# Basic person detection with re-identification
82
+
retail+reid
83
+
84
+
# Person detection with attributes analysis
85
+
retail+personattr
86
+
87
+
# Person detection with age/gender classification
88
+
retail=GPU+agegender=GPU
89
+
90
+
```
91
+
92
+
**Vehicle Analytics Workflows:**

```
# Vehicle detection with re-identification
veh0200=GPU+reid=GPU

# Vehicle detection with attributes
veh0200+vehattr

# Vehicle detection with license plate detection
veh0200+platedetect
```

**Multi-Class Detection:**

```
# Detect people, vehicles, and bikes
pvb2000=GPU

# Multi-class detection with re-identification
pvb2000=GPU+reid=GPU
```

#### Advanced Configuration
> **Note**: The `AUTO` setting for the decode device does not guarantee the optimal configuration in every case. In some cases, better performance can be achieved by setting the decode device manually.

> **Note**: The Model Config field references configuration files that define AI model parameters and processing settings. The default configuration file `model_config.json` is auto-generated for the models downloaded by the SceneScape model installer. See [Model Configuration File Format](model-configuration-file-format.md) for more details on the file format and when/how it should be updated.
#### Camera Intrinsics and Distortion
### Limitations
- Only serial chaining of detectors with classification or re-identification models is supported in the **Camera Chain** field, where the ROI from the detection model serves as input to the classification or re-identification model in the chain. Serial chaining of two or more detectors is not supported (e.g., vehicle detector → license plate detector → OCR). Parallel inference on multiple models is not yet supported.
- Distortion correction is temporarily disabled due to a bug in DLStreamer-Pipeline-Server.
- Explicit frame rate and resolution configuration is not available yet.
- Network instability and camera disconnects are not handled gracefully for network-based streams (RTSP/HTTP/HTTPS) and may cause the pipeline to fail.
- Cross-stream batching is not supported since in Intel® SceneScape Kubernetes deployment each camera pipeline is running in a separate Pod.
- Direct selection of a specific GPU as decode device on systems with multiple GPUs is not supported. As a workaround, use specific GStreamer elements in the **Camera Pipeline** field according to [DLStreamer documentation](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dl-streamer/dev_guide/gpu_device_selection.html).
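
As a hypothetical sketch of such a workaround (the decoder element name and device index depend on your GPU driver setup; see the linked documentation for the exact naming convention), decode and inference could be pinned to a second GPU like this:

```
filesrc location=/videos/video.ts ! tsdemux ! h264parse ! varenderD129h264dec ! \
  gvadetect model=/models/person-detection-retail.xml device=GPU.1 ! \
  gvametaconvert ! gvametapublish ! fakesink
```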