035a179
add user guide for AI app
rpandya28 Feb 17, 2026
224 changes: 224 additions & 0 deletions docs/tutorial/user_guide_dlstreamer_docker_based_AI_app.md
# Example Workload: Docker-Based Chat Q&A, Face Recognition & DL Streamer Face Detection

*An example guide for running Docker-based AI workloads (Ollama chat, face recognition, DL Streamer face detection) on a Linux image created with OS Image Composer.*

This tutorial assumes you have already built a base OS image using **OS Image Composer** and want to validate it or extend it with containerized edge-AI workloads. For full details of the AI applications themselves (models, pipelines, etc.), refer to the corresponding guides in the `edge-ai-libraries` repository; this document focuses on how to deploy and run them on your composed image, including typical proxy and Docker configuration steps.
---

## 1. Prerequisites
- Add the DL Streamer and Docker-based packages in the OS Image Composer build configuration using its [Multiple Package Repository Support](../architecture/os-image-composer-multi-repo-support.md) feature

After the image is built and the system is booted, verify the runtime environment:

- Confirm that Docker Engine is installed and running.
- Confirm that Intel DL Streamer is installed by checking that the directory `/opt/intel/dlstreamer` exists.
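
A minimal sketch of these checks (the paths and service names are the defaults assumed by this guide; adjust if your image differs):

```shell
# Check Docker Engine: binary present and daemon active
command -v docker >/dev/null 2>&1 && docker --version || echo "Docker not installed"
systemctl is-active docker 2>/dev/null || echo "Docker daemon not active"

# Check DL Streamer: installation directory present
[ -d /opt/intel/dlstreamer ] && echo "DL Streamer: found" || echo "DL Streamer: missing"
```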

---

## 2. Proxy Configuration (Generic Templates)

> Replace placeholders with your organization values:
>
> - `<HTTP_PROXY_URL>` — e.g., `http://proxy.example.com:8080`
> - `<HTTPS_PROXY_URL>` — e.g., `http://proxy.example.com:8443`
> - `<NO_PROXY_LIST>` — e.g., `localhost,127.0.0.1,::1,*.example.com,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16`

### 2.1 Configure Docker Engine (systemd)
Comment thread
rpandya28 marked this conversation as resolved.

```bash
sudo mkdir -p /etc/systemd/system/docker.service.d

sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=<HTTP_PROXY_URL>"
Environment="HTTPS_PROXY=<HTTPS_PROXY_URL>"
Environment="NO_PROXY=<NO_PROXY_LIST>"
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
```
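
To confirm the daemon picked up the settings, you can inspect them from both sides (the `docker info` template fields below exist in recent Docker releases; treat this as a sketch):

```shell
# Environment systemd injects into the Docker service
systemctl show --property=Environment docker 2>/dev/null || echo "systemd unavailable"

# Docker's own view of the active proxies
docker info --format '{{.HTTPProxy}} {{.HTTPSProxy}} {{.NoProxy}}' 2>/dev/null || echo "docker unavailable"
```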

### 2.2 Configure Docker CLI (`~/.docker/config.json`)
Comment thread
rpandya28 marked this conversation as resolved.

```bash
mkdir -p ~/.docker

tee ~/.docker/config.json <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "<HTTP_PROXY_URL>",
      "httpsProxy": "<HTTPS_PROXY_URL>",
      "noProxy": "<NO_PROXY_LIST>"
    }
  }
}
EOF
```
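
Docker does not expand the `<...>` placeholders, so substitute real values before saving. One way to sanity-check the result is to run it through a JSON parser; the sketch below uses a throwaway copy with example values (assumes `python3` is available):

```shell
# Write an example config with concrete values, then verify it parses as JSON
mkdir -p /tmp/docker-proxy-check
cat > /tmp/docker-proxy-check/config.json <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:8080",
      "httpsProxy": "http://proxy.example.com:8443",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
EOF
python3 -m json.tool /tmp/docker-proxy-check/config.json >/dev/null && echo "config.json: valid JSON"
```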

---

## 3. Chat Q&A with Ollama (Docker)

### 3.1 Start the container

```bash
sudo docker run -d --name ollama \
  --mount source=ollama-data,target=/root/.ollama \
  --memory="4g" --cpus="1" \
  -e HTTP_PROXY="<HTTP_PROXY_URL>" \
  -e HTTPS_PROXY="<HTTPS_PROXY_URL>" \
  -e NO_PROXY="<NO_PROXY_LIST>" \
  ollama/ollama  # tip: pin a specific tag or digest for reproducibility
```

### 3.2 Pull a lightweight model

```bash
sudo docker exec -it ollama ollama pull llama3.2:1b
```

### 3.3 Start interactive chat

```bash
sudo docker exec -it ollama ollama run llama3.2:1b
```

> Tip: For one-shot queries, pass the prompt as an argument: `sudo docker exec -it ollama ollama run llama3.2:1b "Hello"`
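
Ollama also exposes a REST API inside the container on port 11434. The container above was started without publishing that port, so reaching it from the host would additionally require `-p 11434:11434` on `docker run`; the payload format below follows Ollama's `/api/generate` endpoint:

```shell
# Build and validate the request payload locally (assumes python3 is available)
payload='{"model": "llama3.2:1b", "prompt": "Hello", "stream": false}'
echo "$payload" | python3 -m json.tool >/dev/null && echo "payload ok"

# Send it to the API (only works if the port was published at container start):
# curl -s http://localhost:11434/api/generate -d "$payload"
```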

---

## 4. Basic Face Recognition (Docker)

### 4.1 Run the container and enter shell

```bash
sudo docker run -it aaftio/face_recognition /bin/bash  # third-party image; consider pinning a tag or digest
```

### 4.2 Prepare folders & sample images (inside container)

```bash
mkdir -p /images/known /images/unknown

# Known faces
wget -P /images/known https://raw.githubusercontent.com/ageitgey/face_recognition/master/examples/biden.jpg
wget -P /images/known https://raw.githubusercontent.com/ageitgey/face_recognition/master/examples/obama.jpg

# Unknown images
wget -P /images/unknown https://raw.githubusercontent.com/ageitgey/face_recognition/master/examples/two_people.jpg
wget -P /images/unknown https://raw.githubusercontent.com/ageitgey/face_recognition/master/examples/alex-lacamoire.png

# Note: For production or security-sensitive environments, verify downloaded files.
# Example (replace the <EXPECTED_SHA256_*> placeholders with known-good hashes):
# echo "<EXPECTED_SHA256_BIDEN> /images/known/biden.jpg" | sha256sum -c -
# echo "<EXPECTED_SHA256_OBAMA> /images/known/obama.jpg" | sha256sum -c -
# echo "<EXPECTED_SHA256_TWO_PEOPLE> /images/unknown/two_people.jpg" | sha256sum -c -
# echo "<EXPECTED_SHA256_ALEX_LACAMOIRE> /images/unknown/alex-lacamoire.png" | sha256sum -c -
```

### 4.3 Match faces (inside container)

```bash
face_recognition /images/known /images/unknown/alex-lacamoire.png
face_recognition /images/known /images/unknown/two_people.jpg
```

---

## 5. DL Streamer - Face Detection Pipeline

Run face detection on a video file using **Open Model Zoo**’s *Face Detection ADAS-0001* model.

### 5.1 Environment (DL Streamer)

```bash
export GST_PLUGIN_PATH=/opt/intel/dlstreamer/lib:/opt/intel/dlstreamer/gstreamer/lib/gstreamer-1.0:/opt/intel/dlstreamer/streamer/lib/

export LD_LIBRARY_PATH=/opt/intel/dlstreamer/gstreamer/lib:/opt/intel/dlstreamer/lib:/opt/intel/dlstreamer/lib/gstreamer-1.0:/usr/lib:/usr/local/lib/gstreamer-1.0:/usr/local/lib

export PATH=/opt/intel/dlstreamer/gstreamer/bin:/opt/intel/dlstreamer/bin:$PATH

export MODELS_PATH=/home/${USER}/intel/models
```

Verify plugins:
```bash
gst-inspect-1.0 | grep -E "gvadetect|gvawatermark|gvatrack|gvaclassify"
```

### 5.2 Download the model (OMZ tools)

> If you don’t have OMZ tools: `python3 -m pip install openvino-dev` (use a venv if your distro enforces PEP 668).

```bash
omz_downloader --name face-detection-adas-0001 --output_dir "$MODELS_PATH"
```

The IR will then be available at:
```
$MODELS_PATH/intel/face-detection-adas-0001/FP32/face-detection-adas-0001.xml
```
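
Per the troubleshooting note later in this guide, the `.xml` and `.bin` IR files must sit together in the same precision folder; a quick check (the directory layout assumed here follows the `omz_downloader` defaults):

```shell
# Confirm both IR files landed in the same precision folder
MODELS_PATH=${MODELS_PATH:-$HOME/intel/models}
MODEL_DIR="$MODELS_PATH/intel/face-detection-adas-0001/FP32"
for ext in xml bin; do
  if [ -f "$MODEL_DIR/face-detection-adas-0001.$ext" ]; then
    echo "found: face-detection-adas-0001.$ext"
  else
    echo "missing: face-detection-adas-0001.$ext (re-run omz_downloader?)"
  fi
done
```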

### 5.3 Run the pipeline (save to WebM)

```bash
gst-launch-1.0 filesrc location=/path/to/face-demographics-walking.mp4 ! \
  decodebin ! videoconvert ! \
  gvadetect model=$MODELS_PATH/intel/face-detection-adas-0001/FP32/face-detection-adas-0001.xml device=CPU ! \
  gvawatermark ! videoconvert ! \
  vp8enc ! webmmux ! \
  filesink location=face_detected_output.webm
```

### 5.4 Alternative: Display on screen

```bash
gst-launch-1.0 filesrc location=/path/to/face-demographics-walking.mp4 ! \
  decodebin ! videoconvert ! \
  gvadetect model=$MODELS_PATH/intel/face-detection-adas-0001/FP32/face-detection-adas-0001.xml device=CPU ! \
  gvawatermark ! videoconvert ! \
  autovideosink
```
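
For repeated runs, the file-output pipeline can be wrapped in a small guard script so a bad path fails loudly instead of mid-pipeline; `VIDEO` and `MODELS_PATH` here are placeholders you would set for your system:

```shell
# Run the file-output pipeline only when the inputs actually exist
VIDEO=${VIDEO:-/path/to/face-demographics-walking.mp4}
MODEL="${MODELS_PATH:-$HOME/intel/models}/intel/face-detection-adas-0001/FP32/face-detection-adas-0001.xml"

if [ -f "$VIDEO" ] && [ -f "$MODEL" ]; then
  gst-launch-1.0 filesrc location="$VIDEO" ! decodebin ! videoconvert ! \
    gvadetect model="$MODEL" device=CPU ! gvawatermark ! videoconvert ! \
    vp8enc ! webmmux ! filesink location=face_detected_output.webm
else
  echo "Set VIDEO and MODELS_PATH to valid paths first" >&2
fi
```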

---

## 6. Notes & Troubleshooting

- **Proxy cert errors (Docker pulls)**: import your corporate root CA into the OS trust store (for example, using `sudo apt-get install -y ca-certificates` and `sudo update-ca-certificates`) and into `/etc/docker/certs.d/<registry>/ca.crt`, then run `sudo systemctl restart docker`.
- **No GVA plugins?** Ensure DL Streamer is installed and `GST_PLUGIN_PATH` is exported.
- **Headless systems**: prefer the file-output pipeline (WebM/MP4) instead of `autovideosink`.
- **Model path errors**: ensure `.xml` and `.bin` are co-located in the same `FP32`/`FP16` folder.

## Additional DL Streamer Applications & Examples

For more DL Streamer (DLS) pipelines, advanced video analytics, multi-model graphs, and edge AI applications, refer to the official Open Edge Platform AI Libraries:

**https://github.com/open-edge-platform/edge-ai-libraries**

This repository contains:
- Ready-to-run DL Streamer pipelines
- Comprehensive model-proc files
- Multi-stage pipelines (detect → track → classify → action recognition)
- Optimized GStreamer graphs for edge deployments
- Reusable components for real-time video analytics
- Integrations with OpenVINO, VA-API, and hardware accelerators

Use these examples to extend your application beyond basic face detection into:
- Person/vehicle tracking
- Object classification
- Action recognition
- Multi-camera pipelines
- Custom edge AI applications


---

## 7. License

This guide contains example commands and scripts provided for convenience. Review third-party container image licenses before redistribution.





