
Commit 00edccb

Docs metro robotics review rel links 26 0 (#2410)
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
1 parent 2d1f4a4 commit 00edccb

18 files changed: +367 additions, −319 deletions

manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/release-notes.md

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@ Intel® Core™ i7-14700 based systems with Edge Microvisor Toolkit and Wind
 
 **Documentation:**
 
-Documentation is **completed**. [README.md](../../README.md) is updated with installation steps and reference documents.
+Documentation is **complete**. [README.md](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2026.0.0/manufacturing-ai-suite/hmi-augmented-worker/README.md) is updated with installation steps and reference documents.
 
 **Known Limitations and Issues:**
metro-ai-suite/live-video-analysis/live-video-captioning/docs/user-guide/get-started.md

Lines changed: 19 additions & 4 deletions

@@ -3,6 +3,7 @@
 The Live Video Captioning sample application demonstrates real-time video captioning using Intel® DLStreamer and OpenVINO™. It processes an RTSP video stream, applies video analytics pipelines for efficient decoding and inference, and leverages a Vision-Language Model (VLM) to generate live captions for the video content. In addition to captioning, the application provides performance metrics such as throughput and latency, enabling developers to evaluate and optimize end-to-end system performance for real-time scenarios.
 
 By following this guide, you will learn how to:
+
 - **Set up the sample application**: Use Docker Compose to quickly deploy the application in your environment.
 - **Run the application**: Execute the application to see real-time captioning from your video stream.
 - **Modify application parameters**: Customize settings like inference models and VLM parameters to adapt the application to your specific requirements.
@@ -12,14 +13,15 @@ By following this guide, you will learn how to:
 - Verify that your system meets the minimum requirements. See [System Requirements](./get-started/system-requirements.md) for details.
 - Install Docker: [Installation Guide](https://docs.docker.com/get-docker/).
 - Install Docker Compose: [Installation Guide](https://docs.docker.com/compose/install/).
-- RTSP stream source (live camera or test feed) or simulated RTSP stream source using local video files.
-- OpenVINO-compatible VLM in `ov_models/`. User may use the [script](../../download_models.sh) provided to prepare the model.
+- RTSP stream source (live camera or test feed). To create a simulated RTSP test feed stream using existing video files, see the [Streamer readme](https://github.com/open-edge-platform/scenescape/tree/release-2026.0/tools/streamer).
+- OpenVINO-compatible VLM in `ov_models/`. For convenience, use the [download models script](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2026.0.0/metro-ai-suite/live-video-analysis/live-video-captioning/download_models.sh) provided to prepare the model.
 - OpenVINO-compatible Object Detection Models in `ov_detection_models/`. This is only required
 when object detection in the pipeline is enabled. Please refer to the [Object Detection Pipeline configuration](./object-detection-pipeline.md) guide for information on how to enable it.
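
The prerequisites above point to the SceneScape streamer for simulating an RTSP test feed. As an alternative sketch, a mediamtx server plus an ffmpeg publisher can loop a local file over RTSP. Everything below (images, service names, paths) is an illustrative assumption, not part of this repository:

```yaml
# Hypothetical compose fragment: loop a local video file as an RTSP feed.
services:
  mediamtx:
    image: bluenviron/mediamtx:latest   # RTSP server
    ports:
      - "8554:8554"
  rtsp-feed:
    image: linuxserver/ffmpeg:latest    # entrypoint is ffmpeg; command = its args
    depends_on:
      - mediamtx
    volumes:
      - ./videos:/videos:ro
    # -re paces reading at native frame rate; -stream_loop -1 loops forever
    command: >
      -re -stream_loop -1 -i /videos/sample.mp4
      -c copy -f rtsp rtsp://mediamtx:8554/live
```

The simulated feed would then be reachable at `rtsp://<host>:8554/live`.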
 
 ## Run the application
 
 1. **Clone the repository**:
+
 ```bash
 # Clone the release branch
 git clone https://github.com/open-edge-platform/edge-ai-suites.git edge-ai-suites -b release-2026.0.0
@@ -28,19 +30,24 @@ when object detection in the pipeline is enabled. Please refer to the [Object De
 > **Note:** Adjust the repo link appropriately in case of a forked repo.
 
 2. **Navigate to the Directory**:
+
 ```bash
 cd edge-ai-suites/metro-ai-suite/live-video-analysis/live-video-captioning
 ```
 
 3. **Configure Image Registry and Tag**:
+
 ```bash
 export REGISTRY="intel/"
 export TAG="1.0.0"
 ```
-Skip this step if you prefer to build the sample applciation from source. For detailed instructions, refer to the [Build from Source](./get-started/build-from-source.md) guide for details.
+
+Skip this step if you prefer to build the sample application from source. For detailed instructions, refer to the [Build from Source](./get-started/build-from-source.md) guide.
 
 4. **Configure Environment**:
-Create a `.env` file in the repository root:
+
+Create an `.env` file in the repository root:
+
 ```bash
 WHIP_SERVER_IP=mediamtx
 WHIP_SERVER_PORT=8889
@@ -55,13 +62,16 @@ when object detection in the pipeline is enabled. Please refer to the [Object De
 ALERT_MODE=False
 ENABLE_DETECTION_PIPELINE=False
 ```
+
 Notes:
 - `HOST_IP` must be reachable by the browser client for WebRTC signaling.
 - `PIPELINE_SERVER_URL` defaults to `http://dlstreamer-pipeline-server:8080`.
 - `WEBRTC_BITRATE` controls the video bitrate in kbps for WebRTC streaming (default: 2048).
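
The `.env` contents shown in step 4 can also be generated in one step. A minimal sketch, assuming a Linux host where `hostname -I` reports the address the browser will use (adjust `HOST_IP` otherwise); it covers only the keys visible in this diff — the actual file may contain additional entries not shown here:

```shell
# Write the .env file described above into the current directory.
# HOST_IP falls back to the first address reported by `hostname -I`.
HOST_IP="${HOST_IP:-$(hostname -I 2>/dev/null | awk '{print $1}')}"
cat > .env <<EOF
WHIP_SERVER_IP=mediamtx
WHIP_SERVER_PORT=8889
HOST_IP=${HOST_IP}
PIPELINE_SERVER_URL=http://dlstreamer-pipeline-server:8080
WEBRTC_BITRATE=2048
ALERT_MODE=False
ENABLE_DETECTION_PIPELINE=False
EOF
```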
 
 5. **Download/Export Models**:
+
 Run the following script to download and convert VLM models.
+
 ```bash
 chmod +x download_models.sh
 ./download_models.sh [internvl2_1B|gemma3|internvl2_2B]
@@ -80,7 +90,9 @@ when object detection in the pipeline is enabled. Please refer to the [Object De
 ```
 
 6. **Start the Application**:
+
 Start the application using the Docker Compose tool:
+
 ```bash
 docker compose up
 ```
@@ -98,12 +110,15 @@ when object detection in the pipeline is enabled. Please refer to the [Object De
 > **Note:** If running in a proxy network, ensure that your RTSP stream URLs or IPs are added to the `no_proxy` environment variable to allow direct connections to the stream source without going through the proxy.
 
 8. **Stop the Services**:
+
 Stop the sample application services as shown below:
+
 ```bash
 docker compose down
 ```
 
 ## Advanced Setup Options
+
 For alternative ways to set up the application, see:
 
 - [Build from Source](./get-started/build-from-source.md)
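
The `no_proxy` note in the steps above can be applied like so (hedged sketch; the address is a placeholder for your actual stream source):

```shell
# Hypothetical values: append the RTSP source host to no_proxy so traffic
# to it bypasses the proxy. Replace 192.168.1.50 with your camera/feed IP.
RTSP_HOST=192.168.1.50
export no_proxy="${no_proxy:+${no_proxy},}${RTSP_HOST}"
echo "$no_proxy"
```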

metro-ai-suite/metro-vision-ai-app-recipe/loitering-detection/docs/user-guide/how-to-guides/use-gpu-for-inference.md

Lines changed: 4 additions & 1 deletion

@@ -8,6 +8,7 @@ For containerized applications built using the DL Streamer Pipeline Server, first
 provide GPU device(s) access to the container user.
 
 ### Provide GPU access to the container
+
 This can be done by making the following changes to the Docker Compose file.
 
 ```yaml
@@ -22,10 +23,12 @@ services:
     # you can add specific devices in case you don't want to provide access to all like below.
     - "/dev:/dev"
 ```
+
 The changes above add the container user to the `render` group and provide access to the GPU
 devices.
 
 ### Hardware specific encoder/decoders
+
 Unlike the changes done for the container above, the following requires a modification to the
 media pipeline itself.
 
@@ -52,7 +55,7 @@ DL Streamer document for selecting the GPU render device of your choice for VA c
 > **Note:** This sample application already provides a default `compose-without-scenescape.yml`
 > file that includes the necessary GPU access to the containers.
 
-The pipeline `object_tracking_gpu` in [pipeline-server-config](../../../src/dlstreamer-pipeline-server/config.json)
+The pipeline `object_tracking_gpu` in [pipeline-server-config](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2026.0.0/metro-ai-suite/metro-vision-ai-app-recipe/loitering-detection/src/dlstreamer-pipeline-server/config.json)
 contains GPU-specific elements and uses the GPU backend for inferencing. We can start the pipeline
 as follows:
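
The compose change above mounts all of `/dev`; a narrower sketch granting only the GPU render nodes looks like the following (the service name is an illustrative assumption, not necessarily what this recipe uses):

```yaml
services:
  dlstreamer-pipeline-server:   # illustrative service name
    group_add:
      - render                  # group owning /dev/dri render nodes on most distros
    devices:
      - "/dev/dri:/dev/dri"     # expose GPU nodes only, not all of /dev
```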

metro-ai-suite/metro-vision-ai-app-recipe/smart-parking/docs/user-guide/how-to-guides/use-gpu-for-inference.md

Lines changed: 5 additions & 1 deletion

@@ -1,12 +1,14 @@
 # Use GPU for Inference
 
 ## Pre-requisites
+
 In order to benefit from hardware acceleration, pipelines can be constructed in a manner that
 different stages such as decoding, inference etc., can make use of these devices.
 For containerized applications built using the DL Streamer Pipeline Server, first we need to
 provide GPU device(s) access to the container user.
 
 ### Provide GPU access to the container
+
 This can be done by making the following changes to the Docker Compose file.
 
 ```yaml
@@ -26,6 +28,7 @@ The changes above adds the container user to the `render` group and provides acc
 GPU devices.
 
 ### Hardware specific encoder/decoders
+
 Unlike the changes done for the container above, the following requires a modification to the
 media pipeline itself.
 
@@ -39,6 +42,7 @@ pipeline by adding `video/x-raw(memory: VAMemory)` for Intel GPUs (integrated an
 Read the DL Streamer [GPU Device Selection](https://docs.openedgeplatform.intel.com/2026.0/edge-ai-libraries/dlstreamer/dev_guide/gpu_device_selection.html) document for more details.
 
 ### GPU specific element properties
+
 DL Streamer inference elements also provide properties such as `device=GPU` and
 `pre-process-backend=va-surface-sharing` to infer and pre-process on GPU. Read the DL Streamer
 [Model Preparation](https://docs.openedgeplatform.intel.com/2026.0/edge-ai-libraries/dlstreamer/dev_guide/model_preparation.html#model-pre-and-post-processing) documentation for more information.
@@ -53,7 +57,7 @@ DL Streamer document for selecting the GPU render device of your choice for VA c
 > **Note:** This sample application already provides a default `compose-without-scenescape.yml`
 > file that includes the necessary GPU access to the containers.
 
-The pipeline `yolov11s_gpu` in [pipeline-server-config](../../../src/dlstreamer-pipeline-server/config.json)
+The pipeline `yolov11s_gpu` in [pipeline-server-config](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2026.0.0/metro-ai-suite/metro-vision-ai-app-recipe/smart-parking/src/dlstreamer-pipeline-server/config.json)
 contains GPU-specific elements and uses the GPU backend for inferencing. We can start the pipeline
 as follows:
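
Putting the elements above together, a GPU-backed DL Streamer pipeline typically reads like the following sketch. The element chain, stream URL, and model path are illustrative assumptions; the actual `yolov11s_gpu` definition lives in the linked `config.json` and may differ:

```
urisourcebin uri=rtsp://<camera>:554/stream ! decodebin3 !
video/x-raw(memory:VAMemory) !
gvadetect model=/models/yolov11s.xml device=GPU pre-process-backend=va-surface-sharing !
gvawatermark ! autovideosink
```

The `video/x-raw(memory:VAMemory)` caps keep decoded frames in GPU memory, so `gvadetect` with `pre-process-backend=va-surface-sharing` can pre-process and infer without copying frames back to the CPU.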
