Add AI app user guide #420
# Example Workload: Docker-Based Chat Q&A, Face Recognition & DL Streamer Face Detection
*An example guide for running Docker-based AI workloads (Ollama chat, face recognition, DL Streamer face detection) on a Linux image created with OS Image Composer.*
This tutorial assumes you have already built a base OS image using **OS Image Composer** and want to validate it or extend it with containerized edge-AI workloads. For full details of the AI applications themselves (models, pipelines, etc.), refer to the corresponding guides in the `edge-ai-libraries` repository; this document focuses on how to deploy and run them on your composed image, including typical proxy and Docker configuration steps.

---
## 1. Prerequisites

- Add the DL Streamer and Docker-based packages to the OS Image Composer build configuration using its [Multiple Package Repository Support](../architecture/os-image-composer-multi-repo-support.md) feature.
After the image is built and the system has booted, verify the runtime environment: confirm that Docker Engine is installed and running, and that Intel DL Streamer is installed by checking that the directory `/opt/intel/dlstreamer` exists.
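These checks can be scripted. A minimal sketch, assuming the default DL Streamer install location named above; it only reports status and never fails outright:

```bash
# Quick runtime check; reports whether Docker and DL Streamer are present.
status=""
if command -v docker >/dev/null 2>&1; then
  status="docker:ok"
else
  status="docker:missing"
fi
if [ -d /opt/intel/dlstreamer ]; then
  status="$status dlstreamer:ok"
else
  status="$status dlstreamer:missing"
fi
echo "$status"
```

If Docker is installed but not running, `sudo systemctl start docker` brings it up.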
---
## 2. Proxy Configuration (Generic Templates)
> Replace placeholders with your organization's values:
>
> - `<HTTP_PROXY_URL>` — e.g., `http://proxy.example.com:8080`
> - `<HTTPS_PROXY_URL>` — e.g., `http://proxy.example.com:8443`
> - `<NO_PROXY_LIST>` — e.g., `localhost,127.0.0.1,::1,*.example.com,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,/var/run/docker.sock`
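Tools that run directly in the shell (package managers, model downloaders) honor the same variables; for a one-off session you can simply export them. A sketch using the hypothetical example values from the placeholders above:

```bash
# Hypothetical example values; substitute your organization's settings.
export HTTP_PROXY="http://proxy.example.com:8080"
export HTTPS_PROXY="http://proxy.example.com:8443"
export NO_PROXY="localhost,127.0.0.1,::1"
echo "HTTP_PROXY=$HTTP_PROXY"
```

Add the same `export` lines to `~/.bashrc` if you want them to persist across sessions.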
### 2.1 Configure Docker Engine (systemd)
```bash
sudo mkdir -p /etc/systemd/system/docker.service.d

sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=<HTTP_PROXY_URL>"
Environment="HTTPS_PROXY=<HTTPS_PROXY_URL>"
Environment="NO_PROXY=<NO_PROXY_LIST>"
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
```
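To confirm the drop-in is in place before restarting, you can list the `Environment=` lines systemd will load. A small sketch assuming the path used above:

```bash
# Show the proxy variables systemd will pass to the Docker daemon.
conf=/etc/systemd/system/docker.service.d/http-proxy.conf
if [ -f "$conf" ]; then
  grep '^Environment=' "$conf"
else
  echo "missing: $conf (create it with the step above)"
fi
```

After the restart, `sudo systemctl show --property=Environment docker` should echo the same three variables back.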
### 2.2 Configure Docker CLI (`~/.docker/config.json`)
```bash
mkdir -p ~/.docker

tee ~/.docker/config.json <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "<HTTP_PROXY_URL>",
      "httpsProxy": "<HTTPS_PROXY_URL>",
      "noProxy": "<NO_PROXY_LIST>"
    }
  }
}
EOF
```
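A malformed `config.json` makes the Docker CLI error out on every command, so a quick syntax check is worthwhile. This sketch assumes `python3` is available on the image:

```bash
# Validate the JSON syntax of the Docker CLI config.
if python3 -m json.tool ~/.docker/config.json >/dev/null 2>&1; then
  verdict="valid"
else
  verdict="invalid or missing"
fi
echo "config.json: $verdict"
```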
---
## 3. Chat Q&A with Ollama (Docker)
### 3.1 Start the container
```bash
sudo docker run -d --name ollama \
  --mount source=ollama-data,target=/root/.ollama \
  --memory="4g" --cpus="1" \
  -e HTTP_PROXY="<HTTP_PROXY_URL>" \
  -e HTTPS_PROXY="<HTTPS_PROXY_URL>" \
  -e NO_PROXY="<NO_PROXY_LIST>" \
  ollama/ollama:0.3.14  # example validated version; update tag as needed
```
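With the container started, a quick smoke test can confirm it is up and list its installed models. A sketch that degrades gracefully when Docker or the container is unavailable (`sudo -n` avoids an interactive password prompt):

```bash
# Smoke-test the Ollama container started above.
if command -v docker >/dev/null 2>&1 \
   && sudo -n docker ps --format '{{.Names}}' 2>/dev/null | grep -q '^ollama$'; then
  sudo docker exec ollama ollama list
  result="running"
else
  result="not running"
fi
echo "ollama container: $result"
```

Once the container reports as running, pull and chat with a model, e.g. `sudo docker exec -it ollama ollama run llama3.2` (the model name is an example; any model supported by Ollama works).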