Commit 636b8fa

Merge branch 'docs/rkatakol/npu_docs_update' of https://github.com/open-edge-platform/edge-ai-suites into docs/rkatakol/npu_docs_update

2 parents: c251b2e + eed9152

File tree: 4 files changed (+100, −0 lines)

metro-ai-suite/metro-vision-ai-app-recipe/loitering-detection/docs/user-guide/how-to-guides.md

Lines changed: 1 addition & 0 deletions
@@ -13,6 +13,7 @@ This section collects guides for the Loitering Detection sample application.
 ./how-to-guides/customize-application
 ./how-to-guides/use-gpu-for-inference
+./how-to-guides/use-npu-for-inference
 ./how-to-guides/view-telemetry-data
 ./how-to-guides/benchmark
Lines changed: 49 additions & 0 deletions
@@ -0,0 +1,49 @@
# Use NPU for Inference

## Prerequisites

To benefit from hardware acceleration, pipelines can be constructed so that different stages, such as decoding and inference, make use of these devices. For containerized applications built using the DL Streamer Pipeline Server, we first need to provide NPU device access to the container user.

### Provide NPU access to the container

This can be done by making the following changes to the Docker Compose file.

```yaml
services:
  dlstreamer-pipeline-server:
    group_add:
      # render group ID for Ubuntu 22.04 host OS
      - "110"
      # render group ID for Ubuntu 24.04 host OS
      - "992"
    devices:
      # mapping all of /dev grants access to every device;
      # you can list specific device nodes instead if you prefer.
      - "/dev:/dev"
```
The changes above add the container user to the `render` group and provide access to the NPU devices.
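The render group ID varies between distributions and releases; if the values above do not match your host, you can look it up (a quick check, assuming a standard `/etc/group` setup):

```sh
# Print the numeric GID of the host's render group.
# Use this value in the compose file's group_add list
# instead of hard-coding 110 or 992.
getent group render | cut -d: -f3
```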
### Hardware-specific encoders/decoders

Unlike the container changes above, the following requires a modification to the media pipeline itself.

GStreamer offers a variety of hardware-specific encoder and decoder elements, such as the Intel VA-API elements, which you can benefit from by adding them to your media pipeline. Examples of such elements are `vah264dec`, `vah264enc`, `vajpegdec`, etc.
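As an illustration only (the file path is a placeholder, and running it requires a host with supported Intel graphics and the GStreamer VA plugin installed), an H.264 file could be decoded with the VA hardware decoder instead of a software decoder like this:

```sh
gst-launch-1.0 filesrc location=/path/to/video.mp4 ! qtdemux ! h264parse \
  ! vah264dec ! videoconvert ! fakesink
```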
## Tutorial on how to use NPU-specific pipelines

> **Note:** This sample application already provides a default `compose-without-scenescape.yml`
> file that includes the necessary NPU access for the containers.

The pipeline `object_tracking_npu` in [pipeline-server-config](../../../src/dlstreamer-pipeline-server/config.json) contains NPU-specific elements and uses the NPU backend for inferencing. We can start the pipeline as follows:

```sh
./sample_start.sh npu
```

Go to Grafana as explained in [Get Started](../get-started.md) to view the dashboard.
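The NPU variant of a pipeline typically differs from the CPU/GPU variants mainly in the inference element's device selection. A hypothetical, abbreviated fragment of such a pipeline string (see `config.json` for the actual definition):

```json
"pipeline": "... ! gvadetect model=... device=NPU ! ..."
```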

metro-ai-suite/metro-vision-ai-app-recipe/smart-parking/docs/user-guide/how-to-guides.md

Lines changed: 1 addition & 0 deletions
@@ -15,6 +15,7 @@ This section collects guides for the Smart Parking sample application.
 ./how-to-guides/customize-application
 ./how-to-guides/generate-offline-package
 ./how-to-guides/use-gpu-for-inference
+./how-to-guides/use-npu-for-inference
 ./how-to-guides/view-telemetry-data
 ./how-to-guides/benchmark
Lines changed: 49 additions & 0 deletions
@@ -0,0 +1,49 @@
# Use NPU for Inference

## Prerequisites

To benefit from hardware acceleration, pipelines can be constructed so that different stages, such as decoding and inference, make use of these devices. For containerized applications built using the DL Streamer Pipeline Server, we first need to provide NPU device access to the container user.

### Provide NPU access to the container

This can be done by making the following changes to the Docker Compose file.

```yaml
services:
  dlstreamer-pipeline-server:
    group_add:
      # render group ID for Ubuntu 22.04 host OS
      - "110"
      # render group ID for Ubuntu 24.04 host OS
      - "992"
    devices:
      # mapping all of /dev grants access to every device;
      # you can list specific device nodes instead if you prefer.
      - "/dev:/dev"
```
The changes above add the container user to the `render` group and provide access to the NPU devices.
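If you prefer not to expose all of `/dev`, you can map only the nodes the pipeline needs. A sketch, assuming the Intel NPU is exposed as `/dev/accel/accel0` (the node name can differ depending on kernel and driver version, so verify it on your host first):

```yaml
services:
  dlstreamer-pipeline-server:
    devices:
      # map only the NPU device node instead of all of /dev
      - "/dev/accel/accel0:/dev/accel/accel0"
```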
### Hardware-specific encoders/decoders

Unlike the container changes above, the following requires a modification to the media pipeline itself.

GStreamer offers a variety of hardware-specific encoder and decoder elements, such as the Intel VA-API elements, which you can benefit from by adding them to your media pipeline. Examples of such elements are `vah264dec`, `vah264enc`, `vajpegdec`, etc.
## Tutorial on how to use NPU-specific pipelines

> **Note:** This sample application already provides a default `compose-without-scenescape.yml`
> file that includes the necessary NPU access for the containers.

The pipeline `yolov11s_npu` in [pipeline-server-config](../../../src/dlstreamer-pipeline-server/config.json) contains NPU-specific elements and uses the NPU backend for inferencing. We can start the pipeline as follows:

```sh
./sample_start.sh npu
```

Go to Grafana as explained in [Get Started](../get-started.md) to view the dashboard.
