Conversation
Pull request overview
This PR adds a comprehensive user guide for Docker-based AI applications including Ollama chat Q&A, face recognition, and Intel DL Streamer face detection. However, this content appears to be out of scope for the OS Image Composer repository, which is specifically focused on building custom Linux images from pre-built packages.
Changes:
- Adds new tutorial documentation for running Docker-based AI applications
- Includes proxy configuration templates for Docker environments
- Provides examples for Ollama LLM chat, face recognition containers, and DL Streamer pipelines
@@ -0,0 +1,209 @@
# Docker‑Based Chat Q&A, Face Recognition & DL Streamer Face Detection
The "How Has This Been Tested?" section in the PR description is empty but marked as required. Documentation changes should be tested by verifying that the commands work correctly and that the documentation renders properly.
- Ready‑to‑run DL Streamer pipelines
- Comprehensive model‑proc files
- Multi-stage pipelines (detect → track → classify → action recognition)
- Optimized GStreamer graphs for edge deployments
- Reusable components for real‑time video analytics
Multiple instances throughout the document use Unicode non-breaking hyphens (‑) instead of standard ASCII hyphens. This includes "Ready‑to‑run" (line 190), "Real‑time" (line 194), and others. Replace all Unicode hyphens with standard ASCII hyphens (-) for better compatibility, searchability, and consistency with standard markdown practices.
@copilot open a new pull request to apply changes based on this feedback
## 1. Prerequisites

- OS with Docker Engine installed and running
I would embed a hyperlink to the official Docker installation page here.
We are adding the required packages in the YAML file; I have added this as part of the prerequisites.
- OS with Docker Engine installed and running
- (Optional) Corporate proxy details if you are behind a proxy
- For DL Streamer section: Intel® DL Streamer installed under `/opt/intel/dlstreamer/`
Link to install guide for this step?
We are not installing it externally; we are adding the packages in the YAML file and building the images.
Sorry, my comment was on the prerequisites: you're stating that DL Streamer should be installed under /opt/intel/dlstreamer. Could you link to documentation for the user to install this? It may not be intuitive.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@rpandya28 there are several comments to resolve on this PR before we can merge; can you please take a look and disposition them?
wiwaszko-intel
left a comment
Just one small issue with the name (OS Image Composer is the approved name). Consider the additional suggestions for links.
Other than that, LGTM.
- Add DL Streamer and Docker based packages as part of os image composer build process using its multi repo feature
- OS with Docker Engine installed via and running
- (Optional) Corporate proxy details if you are behind a proxy
- For DL Streamer section: Intel® DL Streamer installed under `/opt/intel/dlstreamer/`
Suggested change:
- For DL Streamer section: Intel® DL Streamer installed under `/opt/intel/dlstreamer/`
+ For DL Streamer section: Intel® DL Streamer installed under `/opt/intel/dlstreamer/`. For information on installing DL Streamer, see its [documentation](https://docs.openedgeplatform.intel.com/2026.0/edge-ai-libraries/dlstreamer/get_started/install/install_guide_index.html)
wiwaszko-intel
left a comment
See the suggestions and my other comment.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Wiktor Iwaszko <wiktorx.iwaszko@intel.com>

---

## 5. DL Streamer – Face Detection Pipeline
The heading contains an en dash ("–") and trailing whitespace. For better consistency and terminal/markdown compatibility, prefer a normal hyphen "-" and remove the trailing space.
Suggested change:
- ## 5. DL Streamer – Face Detection Pipeline
+ ## 5. DL Streamer - Face Detection Pipeline
  -e HTTP_PROXY="<HTTP_PROXY_URL>" \
  -e HTTPS_PROXY="<HTTPS_PROXY_URL>" \
  -e NO_PROXY="<NO_PROXY_LIST>" \
  ollama/ollama
The Docker images in the examples are referenced without an explicit tag or digest (e.g., "ollama/ollama"), which can lead to non-reproducible behavior and increases supply-chain risk if the "latest" tag changes. Prefer pinning to a specific version tag (or digest) and/or add a brief note explaining which version was validated.
Suggested change:
- ollama/ollama
+ ollama/ollama:0.3.14 # Example validated version; update tag as needed
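Extending the reviewer's point, a hedged sketch of a fully pinned invocation. The tag `0.3.14` is illustrative only, not a validated version; substitute whichever version you actually test, or pin by digest for an even stronger guarantee. The sketch prints the command rather than executing it, since Docker may not be present on the machine reading this.

```shell
# Illustrative sketch: pin the image so "latest" drift cannot change behavior.
# "0.3.14" is a placeholder tag, not a validated version.
OLLAMA_IMAGE="ollama/ollama:0.3.14"

# Build the command as a string so it can be reviewed/logged before running,
# and so this sketch also works on machines without Docker installed.
run_cmd="docker run -d \
  -e HTTP_PROXY=<HTTP_PROXY_URL> \
  -e HTTPS_PROXY=<HTTPS_PROXY_URL> \
  -e NO_PROXY=<NO_PROXY_LIST> \
  $OLLAMA_IMAGE"

echo "$run_cmd"
```

To actually start the container, run the printed command (prefixed with `sudo` if your user is not in the `docker` group).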
### 4.1 Run the container and enter shell

```bash
sudo docker run -it aaftio/face_recognition /bin/bash
```
Similarly, the face recognition example uses an unpinned third-party image ("aaftio/face_recognition") with no tag/digest. Pinning the image (and optionally linking to the upstream repo or noting the expected image provenance) would make the tutorial more reproducible and reduce the risk of pulling an unexpected image.
Suggested change:
- sudo docker run -it aaftio/face_recognition /bin/bash
+ sudo docker run -it aaftio/face_recognition:latest /bin/bash
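Note that `:latest` is still a moving tag; if reproducibility is the goal, only a digest pin guarantees you pull exactly the content you validated. A hedged sketch follows; the digest is a placeholder, not a real value, so the sketch prints the command instead of running it. Resolve the actual digest with `docker images --digests` after a validated pull.

```shell
# Placeholder digest -- replace with the digest of the image you validated,
# obtained e.g. via: docker images --digests aaftio/face_recognition
IMAGE="aaftio/face_recognition@sha256:<DIGEST-OF-VALIDATED-IMAGE>"

# Print rather than execute, since the digest above is not real.
echo sudo docker run -it "$IMAGE" /bin/bash
```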
# Example Workload: Docker‑Based Chat Q&A, Face Recognition & DL Streamer Face Detection

*An example guide for running Docker-based AI workloads (Ollama chat, face recognition, DL Streamer face detection) on a Linux image created with OS Image Composer.*
The tutorial filename uses underscores and mixed casing ("user_guide_dlstreamer_docker_based_AI_app.md"), while other files under docs/tutorial use kebab-case (e.g., "usage-guide.md", "configure-secure-boot.md"). Consider renaming this file to kebab-case for consistency and easier linking/discoverability (e.g., "user-guide-dlstreamer-docker-based-ai-app.md").
```bash
export PATH=/opt/intel/dlstreamer/gstreamer/bin:/opt/intel/dlstreamer/bin:$PATH

export MODELS_PATH=/home/${USER}/intel/models
```

Verify plugins:
```bash
gst-inspect-1.0 | grep -E "gvadetect|gvawatermark|gvatrack|gvaclassify"
```

### 5.2 Download the model (OMZ tools)

> If you don’t have OMZ tools: `python3 -m pip install openvino-dev` (use a venv if your distro enforces PEP 668).

```bash
omz_downloader --name face-detection-adas-0001
```

The IR will be available at (example):

```
~/intel/face-detection-adas-0001/FP32/face-detection-adas-0001.xml
```
The model directory/path guidance is inconsistent: you export MODELS_PATH as "/home/${USER}/intel/models", but the next section says OMZ outputs to "~/intel/face-detection-adas-0001/...". Consider either (1) setting omz_downloader --output_dir "$MODELS_PATH" and updating the example paths accordingly, or (2) updating MODELS_PATH to match the documented output location.
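A sketch of the first option the reviewer proposes: downloading into the directory that `MODELS_PATH` already points at, so the download step and the pipeline references agree on one location. The `intel/<model>/<precision>/` layout comment reflects `omz_downloader`'s usual output structure; verify it against your OMZ version.

```shell
# Keep the download location and MODELS_PATH consistent.
# ${HOME} is the same as /home/${USER} in typical setups.
export MODELS_PATH="${HOME}/intel/models"
mkdir -p "$MODELS_PATH"

# Guarded so the sketch degrades gracefully where OMZ tools are absent.
if command -v omz_downloader >/dev/null 2>&1; then
  omz_downloader --name face-detection-adas-0001 --output_dir "$MODELS_PATH"
fi

# Expected IR location under this layout (Intel models land under intel/):
MODEL_XML="$MODELS_PATH/intel/face-detection-adas-0001/FP32/face-detection-adas-0001.xml"
echo "$MODEL_XML"
```

With this in place, later pipeline snippets can reference `$MODEL_XML` instead of a hard-coded `~/intel/...` path.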
Merge Checklist
All boxes should be checked before merging the PR
Description
Any Newly Introduced Dependencies
How Has This Been Tested?