`manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/release-notes.md` (1 addition & 1 deletion)

@@ -15,7 +15,7 @@ Intel® Core™ i7-14700 based systems with Edge Microvisor Toolkit and Wind
 **Documentation:**
 
-Documentation is **completed**. [README.md](../../README.md) is updated with installation steps and reference documents.
+Documentation is **completed**. [README.md](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2026.0.0/manufacturing-ai-suite/hmi-augmented-worker/README.md) is updated with installation steps and reference documents.
`metro-ai-suite/live-video-analysis/live-video-captioning/docs/user-guide/get-started.md` (19 additions & 4 deletions)

@@ -3,6 +3,7 @@
 The Live Video Captioning sample application demonstrates real-time video captioning using Intel® DLStreamer and OpenVINO™. It processes RTSP video stream, applies video analytics pipelines for efficient decoding and inference, and leverages a Vision-Language Model(VLM) to generate live captions for the video content. In addition to captioning, the application provides performance metrics such as throughput and latency, enabling developers to evaluate and optimize end-to-end system performance for real-time scenarios.
 
 By following this guide, you will learn how to:
+
 - **Set up the sample application**: Use Docker Compose to quickly deploy the application in your environment.
 - **Run the application**: Execute the application to see real-time captioning from your video stream.
 - **Modify application parameters**: Customize settings like inference models and VLM parameters to adapt the application to your specific requirements.
@@ -12,14 +13,15 @@ By following this guide, you will learn how to:
 - Verify that your system meets the minimum requirements. See [System Requirements](./get-started/system-requirements.md) for details.
-- RTSP stream source (live camera or test feed) or simulated RTSP stream source using local video files.
-- OpenVINO-compatible VLM in `ov_models/`. User may use the [script](../../download_models.sh) provided to prepare the model.
+- RTSP stream source (live camera or test feed). To create a simulated RTSP test feed stream using existing video files, see the [Streamer readme](https://github.com/open-edge-platform/scenescape/tree/release-2026.0/tools/streamer).
+- OpenVINO-compatible VLM in `ov_models/`. For convenience, use the [download models script](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2026.0.0/metro-ai-suite/live-video-analysis/live-video-captioning/download_models.sh) provided to prepare the model.
 - OpenVINO-compatible Object Detection Models in `ov_detection_models/`. This is only required
 when object detection in the pipeline is enabled. Please refer to the [Object Detection Pipeline configuration](./object-detection-pipeline.md) guide for information on how to enable it.
@@ -28,19 +30,24 @@ when object detection in the pipeline is enabled. Please refer to the [Object De
 >**Note:** Adjust the repo link appropriately incase of forked repo.
 
 2. **Navigate to the Directory**:
+
    ```bash
    cd edge-ai-suites/metro-ai-suite/live-video-analysis/live-video-captioning
    ```
 
 3. **Configure Image Registry and Tag**:
+
    ```bash
    export REGISTRY="intel/"
    export TAG="1.0.0"
    ```
-   Skip this step if you prefer to build the sample applciation from source. For detailed instructions, refer to the [Build from Source](./get-started/build-from-source.md) guide for details.
+
+   Skip this step if you prefer to build the sample application from source. For detailed instructions, refer to the [Build from Source](./get-started/build-from-source.md) guide for details.
 
 4. **Configure Environment**:
-   Create a `.env` file in the repository root:
+
+   Create an `.env` file in the repository root:
+
    ```bash
    WHIP_SERVER_IP=mediamtx
    WHIP_SERVER_PORT=8889
@@ -55,13 +62,16 @@ when object detection in the pipeline is enabled. Please refer to the [Object De
    ALERT_MODE=False
    ENABLE_DETECTION_PIPELINE=False
    ```
+
    Notes:
    - `HOST_IP` must be reachable by the browser client for WebRTC signaling.
    - `PIPELINE_SERVER_URL` defaults to `http://dlstreamer-pipeline-server:8080`.
    - `WEBRTC_BITRATE` controls the video bitrate in kbps for WebRTC streaming (default: 2048).
 
 5. **Download/Export Models**:
+
    Run the following scripts to download and convert VLM models.
@@ -80,7 +90,9 @@ when object detection in the pipeline is enabled. Please refer to the [Object De
    ```
 
 6. **Start the Application**:
+
    Start the application using Docker Compose tool:
+
    ```bash
    docker compose up
    ```
@@ -98,12 +110,15 @@ when object detection in the pipeline is enabled. Please refer to the [Object De
 >**Note:** If running in a proxy network, ensure that your RTSP stream URLs or IPs are added to the `no_proxy` environment variable to allow direct connections to the stream source without going through the proxy.
 
 8. **Stop the Services**:
+
    Stop the sample application services using below:
+
    ```bash
    docker compose down
    ```
 
 ## Advanced Setup Options
+
 For alternative ways to setup the application, see:
 
 - [Build from Source](./get-started/build-from-source.md)
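The `no_proxy` note in the hunk above can be applied with a one-liner; a minimal sketch, where the camera IP `192.168.1.50` is a hypothetical placeholder for your actual RTSP source:

```shell
# Append a hypothetical RTSP camera IP to no_proxy so connections to the
# stream source bypass the proxy. Replace 192.168.1.50 with your source host.
RTSP_HOST="192.168.1.50"
# ${no_proxy:+${no_proxy},} keeps any existing entries and adds a comma
# separator only when no_proxy is already non-empty.
export no_proxy="${no_proxy:+${no_proxy},}${RTSP_HOST}"
echo "no_proxy=${no_proxy}"
```

Run this in the same shell before `docker compose up` so the variable is inherited by the compose services.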
`metro-ai-suite/metro-vision-ai-app-recipe/loitering-detection/docs/user-guide/how-to-guides/use-gpu-for-inference.md` (4 additions & 1 deletion)

@@ -8,6 +8,7 @@ For containerized applications built using the DL Streamer Pipeline Server, firs
 provide GPU device(s) access to the container user.
 
 ### Provide GPU access to the container
+
 This can be done by making the following changes to the docker compose file.
 
 ```yaml
@@ -22,10 +23,12 @@ services:
         # you can add specific devices in case you don't want to provide access to all like below.
         - "/dev:/dev"
 ```
+
 The changes above adds the container user to the `render` group and provides access to the GPU
 devices.
 
 ### Hardware specific encoder/decoders
+
 Unlike the changes done for the container above, the following requires a modification to the
 media pipeline itself.
@@ -52,7 +55,7 @@ DL Streamer document for selecting the GPU render device of your choice for VA c
 > **Note:** This sample application already provides a default `compose-without-scenescape.yml`
 > file that includes the necessary GPU access to the containers.
 
-The pipeline `object_tracking_gpu` in [pipeline-server-config](../../../src/dlstreamer-pipeline-server/config.json)
+The pipeline `object_tracking_gpu` in [pipeline-server-config](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2026.0.0/metro-ai-suite/metro-vision-ai-app-recipe/loitering-detection/src/dlstreamer-pipeline-server/config.json)
 contains GPU specific elements and uses GPU backend for inferencing. We can start the pipeline
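Starting a pipeline such as `object_tracking_gpu` typically goes through the DL Streamer Pipeline Server REST API. The sketch below is an assumption based on that API's usual shape, not something shown in this diff: the host/port, the `user_defined_pipelines` group, and the RTSP URI are all placeholders to adjust for your deployment.

```shell
# Hypothetical Pipeline Server endpoint; adjust host/port to your setup.
PIPELINE_SERVER_URL="http://localhost:8080"
REQUEST_URL="${PIPELINE_SERVER_URL}/pipelines/user_defined_pipelines/object_tracking_gpu"

# POST a source payload to request a new pipeline instance.
# The rtsp:// URI is a placeholder for your camera or simulated stream.
curl -s -X POST "$REQUEST_URL" \
  -H 'Content-Type: application/json' \
  -d '{"source": {"uri": "rtsp://192.168.1.50:8554/stream", "type": "uri"}}' \
  || true  # tolerate failure when no server is reachable (e.g. dry run)

echo "requested: ${REQUEST_URL}"
```

On success the server normally returns an instance ID that can be used to query status or stop the pipeline later.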
`metro-ai-suite/metro-vision-ai-app-recipe/smart-parking/docs/user-guide/how-to-guides/use-gpu-for-inference.md` (5 additions & 1 deletion)

@@ -1,12 +1,14 @@
 # Use GPU for Inference
 
 ## Pre-requisites
+
 In order to benefit from hardware acceleration, pipelines can be constructed in a manner that
 different stages such as decoding, inference etc., can make use of these devices.
 For containerized applications built using the DL Streamer Pipeline Server, first we need to
 provide GPU device(s) access to the container user.
 
 ### Provide GPU access to the container
+
 This can be done by making the following changes to the docker compose file.
 
 ```yaml
@@ -26,6 +28,7 @@ The changes above adds the container user to the `render` group and provides acc
 GPU devices.
 
 ### Hardware specific encoder/decoders
+
 Unlike the changes done for the container above, the following requires a modification to the
 media pipeline itself.
@@ -39,6 +42,7 @@ pipeline by adding `video/x-raw(memory: VAMemory)` for Intel GPUs (integrated an
 Read the DL Streamer [GPU Device Selection](https://docs.openedgeplatform.intel.com/2026.0/edge-ai-libraries/dlstreamer/dev_guide/gpu_device_selection.html) document for more details.
 
 ### GPU specific element properties
+
 DL Streamer inference elements also provides property such as `device=GPU` and
 `pre-process-backend=va-surface-sharing`to infer and pre-process on GPU. Read the DL Streamer
 [Model Preparation](https://docs.openedgeplatform.intel.com/2026.0/edge-ai-libraries/dlstreamer/dev_guide/model_preparation.html#model-pre-and-post-processing) documentation for more information.
@@ -53,7 +57,7 @@ DL Streamer document for selecting the GPU render device of your choice for VA c
 > **Note:** This sample application already provides a default `compose-without-scenescape.yml`
 > file that includes the necessary GPU access to the containers.
 
-The pipeline `yolov11s_gpu` in [pipeline-server-config](../../../src/dlstreamer-pipeline-server/config.json)
+The pipeline `yolov11s_gpu` in [pipeline-server-config](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2026.0.0/metro-ai-suite/metro-vision-ai-app-recipe/smart-parking/src/dlstreamer-pipeline-server/config.json)
 contains GPU specific elements and uses GPU backend for inferencing. We can start the pipeline