Replies: 1 comment
That is a great way of seeing different Open Edge Platform ingredients working together. Thanks for putting this together, @vgygce19
This discussion covers the key concepts demonstrated in the Simplifying AI Video Search & Summarization Setup with Docker Compose video. The demo walks through the full end-to-end workflow of deploying an AI-powered video summarization service on an Intel® Core™ Ultra (Arrow Lake) edge platform using the Edge AI Libraries and an Edge Microvisor Toolkit (EMT) image.
🎥 Video Demo: https://youtu.be/yikAk5XU96s
The demo presents a complete, edge-ready GenAI video summarization pipeline. The application is a turnkey pattern for running video-centric AI workloads at the edge and addresses common challenges in edge GenAI deployments:
✅ Challenge 1: Running VLMs and LLMs efficiently on edge hardware
➡️ Solution: INT8 quantization + OpenVINO + Optimum Intel accelerate inference while minimizing resource usage across CPU, GPU, and NPU.
✅ Challenge 2: Deploying complex multimodal GenAI pipelines
➡️ Solution: Microservices + Docker Compose simplify packaging, scaling, and operations for video summarization, object detection, and embeddings.
✅ Challenge 3: Handling large video files and enabling fast summarization
➡️ Solution: Chunk-based video processing with configurable frame intervals and ingestion settings ensures efficient analysis and summary generation.
✅ Challenge 4: Enabling non-technical users to access AI capabilities
➡️ Solution: Clean, simple, web-based UI for video uploads, pipeline configuration, and summary visualization.
✅ Challenge 5: Managing model execution reliably on edge nodes
➡️ Solution: EMT image + controlled container environment ensures repeatable deployments with environment variables for registry, credentials, and model configs.
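The chunk-based processing behind Challenge 3 can be sketched in a few lines. This is an illustrative sketch, not the pipeline's actual code: the function name, the chunk size, and the return shape are assumptions, with only `FRAME_INTERVAL=15` taken from the configuration shown below.

```python
def sample_chunks(total_frames, chunk_size=450, frame_interval=15):
    """Split a video into fixed-size chunks and sample every
    `frame_interval`-th frame within each chunk.

    Returns a list of (chunk_start, sampled_frame_indices) pairs.
    Hypothetical helper; chunk_size=450 is an assumed default.
    """
    chunks = []
    for start in range(0, total_frames, chunk_size):
        end = min(start + chunk_size, total_frames)
        sampled = list(range(start, end, frame_interval))
        chunks.append((start, sampled))
    return chunks

# A 30 s clip at 30 fps (900 frames) yields two chunks; sampling
# every 15th frame keeps 2 frames per second for the VLM to analyze.
chunks = sample_chunks(900)
print(len(chunks))       # 2
print(chunks[0][1][:3])  # [0, 15, 30]
```

Because each chunk is summarized independently, long videos can be processed incrementally instead of loading the whole file into the model at once.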
Key Environment Variables Used
Registry & Credentials:
`REGISTRY_URL`, `TAG`, `MINIO_ROOT_USER`, `MINIO_ROOT_PASSWORD`, `POSTGRES_USER`, `POSTGRES_PASSWORD`, `RABBITMQ_USER`, `RABBITMQ_PASSWORD`
Models:
Pipeline Config:
`VS_WATCHER_DIR`, `FRAME_INTERVAL=15`, `OV_CONFIG='{"PERFORMANCE_HINT": "THROUGHPUT"}'`
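Pulled together, these variables would typically live in an `.env` file next to the compose file. The values below are placeholders for illustration only, not the demo's actual settings:

```shell
# Registry & credentials (placeholder values -- replace before use)
REGISTRY_URL=registry.example.com/
TAG=latest
MINIO_ROOT_USER=minio
MINIO_ROOT_PASSWORD=changeme
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme
RABBITMQ_USER=rabbitmq
RABBITMQ_PASSWORD=changeme

# Pipeline configuration (FRAME_INTERVAL and OV_CONFIG as in the demo)
VS_WATCHER_DIR=/tmp/videos
FRAME_INTERVAL=15
OV_CONFIG='{"PERFORMANCE_HINT": "THROUGHPUT"}'
```

The `OV_CONFIG` value passes an OpenVINO performance hint through to the inference runtime, trading single-request latency for overall throughput.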
Key Workflow Steps
1. Preparing the Environment
2. Installing Dependencies
The setup script installs the required dependencies.
3. Pipeline Deployment
4. Video Processing and Summarization
5. Chunk-Based Analysis
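At a high level, the pipeline deployment in step 3 wires the microservices together with Docker Compose. The sketch below is a heavily reduced, hypothetical excerpt: the service names, image paths, and the exact set of services are assumptions, not the project's actual compose file; only the environment variable names come from the list above.

```yaml
# Hypothetical, reduced docker-compose.yml sketch
services:
  video-summarizer:
    image: ${REGISTRY_URL}video-summarizer:${TAG}
    environment:
      - FRAME_INTERVAL=${FRAME_INTERVAL}
      - OV_CONFIG=${OV_CONFIG}
    volumes:
      - ${VS_WATCHER_DIR}:/videos
    depends_on: [minio, postgres, rabbitmq]
  minio:
    image: minio/minio
    environment:
      - MINIO_ROOT_USER=${MINIO_ROOT_USER}
      - MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
  rabbitmq:
    image: rabbitmq
```

With the `.env` values in place, `docker compose up -d` brings the stack up; Compose substitutes the variables at deploy time, which is what keeps the registry, credentials, and model configuration out of the compose file itself.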
Main Takeaways
Simple workflow: Upload → Configure → Summarize in seconds
👉 Learn more about the Simplifying AI Video Search & Summarization Setup with Docker Compose and Edge Microvisor Toolkit (EMT)
👉 Visit the Open Edge Platform Playlist for more demos