@@ -1,3 +1,14 @@
+<!--hide_directive
+<div class="component_card_widget">
+<a class="icon_github" href="https://github.com/open-edge-platform/edge-ai-suites/tree/main/metro-ai-suite/deterministic-threat-detection">
+GitHub project
+</a>
+<a class="icon_document" href="https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/deterministic-threat-detection/README.md">
+Readme
+</a>
+</div>
+hide_directive-->
+
# Deterministic Threat Detection

Welcome to the documentation for the Deterministic Threat Detection project. This guide provides all the information you need to understand, set up, and run this Time-Sensitive Networking (TSN) sample application.
@@ -1,6 +1,18 @@
+<!--hide_directive
+<div class="component_card_widget">
+<a class="icon_github" href="https://github.com/open-edge-platform/edge-ai-suites/tree/main/metro-ai-suite/live-video-analysis/live-video-alert-agent">
+GitHub project
+</a>
+<a class="icon_document" href="https://github.com/open-edge-platform/edge-ai-suites/tree/main/metro-ai-suite/live-video-analysis/live-video-alert-agent/README.md">
+Readme
+</a>
+</div>
+hide_directive-->
+
# Live Video Alert Agent

-Deploy AI-powered video alerting using OpenVINO Vision Language Models. You process RTSP streams, generate real-time alerts from natural language prompts, and monitor them on a unified dashboard.
+Deploy AI-powered video alerting using OpenVINO Vision Language Models to process RTSP streams,
+generate real-time alerts from natural language prompts, and monitor them on a unified dashboard.
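As a sketch of what a natural-language alert prompt looks like in practice — the variable name below is an assumption for illustration only; the docs state that alerts come from prompts but not how a prompt is supplied:

```shell
# Hypothetical variable name (VLM_PROMPT is not documented here); the prompt
# text shows the yes/no question style that drives alert generation.
VLM_PROMPT="Is there a person entering the restricted area? Answer Yes or No."
echo "$VLM_PROMPT"
```

The yes/no phrasing matters because the alert UI maps "Yes"/"No" responses to alert and normal styling.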

## Use Cases

@@ -42,10 +42,10 @@ ALERT_MODE=True # Enable Alert Mode

When Alert Mode is enabled:

-| Response | Visual Style |
-|----------|--------------|
-| **Yes** | Red/Alert highlighting indicating a positive detection |
-| **No** | Green/Normal highlighting indicating no detection |
+| Response | Visual Style                                           |
+| -------- | ------------------------------------------------------ |
+| **Yes**  | Red/Alert highlighting indicating a positive detection |
+| **No**   | Green/Normal highlighting indicating no detection |
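Concretely, Alert Mode is a single environment variable; a minimal sketch of the relevant `.env` entry (only `ALERT_MODE` comes from these docs — echoing it back is purely for illustration):

```shell
# ALERT_MODE is the documented switch; everything else about the .env file
# (other variables, quoting conventions) is assumed.
ALERT_MODE=True
echo "ALERT_MODE=$ALERT_MODE"
```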

## Custom Prompts

@@ -64,10 +64,12 @@ Example prompts for different scenarios:

1. Verify the `ALERT_MODE` environment variable is set correctly in your `.env` file
2. Ensure Docker Compose picks up the environment variable:

```bash
docker compose down
docker compose up
```

3. Check the application title - it should display "Live Video Captioning and Alerts"

### Alert Styling Not Appearing
@@ -1,8 +1,19 @@
+<!--hide_directive
+<div class="component_card_widget">
+<a class="icon_github" href="https://github.com/open-edge-platform/edge-ai-suites/tree/main/metro-ai-suite/live-video-analysis/live-video-captioning">
+GitHub project
+</a>
+<a class="icon_document" href="https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/live-video-analysis/live-video-captioning/README.md">
+Readme
+</a>
+</div>
+hide_directive-->
+
# Live Video Captioning

**Live Video Captioning** deploys AI-powered captioning for live video streams with Deep Learning Streamer (DL Streamer) and OpenVINO™ Vision Language Models. You can process RTSP streams, generate real-time captions, and monitor performance metrics on a dashboard.

-The key features are as follows:
+The key features are:

**Multi-Model Support**: Switch between VLMs (InternVL2, Gemma-3, etc.) with automatic model discovery from `ov_models/`.
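"Automatic model discovery" can be pictured roughly as follows; only the `ov_models/` directory name comes from the text above, and the per-model subdirectory layout is an assumption:

```shell
# Simulate the models directory the app is said to scan; the application
# would list these subdirectories as selectable VLMs.
mkdir -p /tmp/demo/ov_models/InternVL2 /tmp/demo/ov_models/Gemma-3
ls /tmp/demo/ov_models
```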

@@ -38,6 +49,5 @@ api-reference
known-issues
release-notes

-Source Code <https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/live-video-analysis/live-video-captioning/docs/user-guide>
:::
hide_directive-->
@@ -11,11 +11,9 @@
</div>
hide_directive-->


**Live Video Search** is a Metro AI Suite sample that adapts the VSS pipeline for semantic search on live Frigate streams. The application ingests live camera streams, indexes video segments with embeddings and timestamped camera metadata, and enables you to select cameras, time ranges, and free-text queries. You can retrieve ranked, playable clips with confidence scores and view live system metrics.


-## What It Enables
+## Key Features

- **Live semantic search** over active camera streams.
- **Time‑range filtering** from either the UI or query parsing (for example, “person seen in last 5 minutes”).
@@ -37,7 +35,7 @@ Live Video Search combines two existing stacks:
- VSS UI for semantic queries and clip playback.
- See VSS docs: [Video Search and Summarization Docs](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/sample-applications/video-search-and-summarization/docs/user-guide/index.md)

-## When to Use
+## Use Cases

- **Operations teams** who need to locate recent events across multiple cameras quickly.
- **Edge deployments** where bandwidth or latency constraints prevent cloud‑first analytics.
@@ -39,8 +39,8 @@ for Intel edge devices.

The system uses the **Metro Edge Architecture** based on three key principles:

-- **Perception**: Deep Learning Streamer (DLStreamer) [processes 3/4 camera feeds](./docs/user-guide/how-it-works/perception-layer.md).
-- **Control**: SceneScape Controller [aggregates metadata](./docs/user-guide/how-it-works.md#analytics-pipeline-downstream).
+- **Perception**: Deep Learning Streamer (DL Streamer) [processes 3/4 camera feeds](./docs/user-guide/how-it-works/perception-layer.md).
+- **Control**: Intel® SceneScape Controller [aggregates metadata](./docs/user-guide/how-it-works.md#analytics-pipeline-downstream).
- **Analytics**: Node-RED [transforms events into traffic insights](./docs/user-guide/how-it-works/analytics-pipeline.md#node-red-transformation)
(Traffic Volume, Flow Efficiency, Tariffing).

@@ -91,7 +91,7 @@ docker compose ps

## Accessing the Application

-- **Web UI (Scenescape Configuration):** `https://localhost`
+- **Web UI (Intel® SceneScape Configuration):** `https://localhost`
- **Grafana (Dashboard):** `http://localhost:3000` (Default Login: `admin`/`admin`)

### User Interface
@@ -42,9 +42,9 @@ The application can be configured to work with live cameras.
1. Video loops or RTSP is fed into DL Streamer.
2. Trained AI models detect vehicles and license plates.
3. Metadata is published to MQTT.
-4. SceneScape maps detections to scene regions to get exact location of objects on the scene.
+4. Intel® SceneScape maps detections to scene regions to get the exact location of objects in the scene.
5. Exit events are generated when vehicles leave the region.
-6. Node-RED processes only finalized exit events by subscribing to SceneScape topics.
+6. Node-RED processes only finalized exit events by subscribing to Intel® SceneScape topics.
7. Data is written to InfluxDB so that downstream systems can access consistent information.
8. Grafana visualizes real-time and historical data, enabling access to metrics
and vehicle details.
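The MQTT hop in steps 3-6 can be sketched as follows. The JSON field names below are invented for illustration — the real SceneScape payload schema is not shown in this guide:

```shell
# A stand-in exit-event payload (field names are assumptions, not the real schema).
payload='{"object_id":"vehicle_17","event":"exit","region":"lane_2"}'
# Node-RED-style filtering: act only on finalized exit events (step 6).
echo "$payload" | grep -o '"event":"exit"'
```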
@@ -1,3 +1,14 @@
+<!--hide_directive
+<div class="component_card_widget">
+<a class="icon_github" href="https://github.com/open-edge-platform/edge-ai-suites/tree/main/metro-ai-suite/metro-vision-ai-app-recipe/smart-tolling">
+GitHub project
+</a>
+<a class="icon_document" href="https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/metro-vision-ai-app-recipe/smart-tolling/README.md">
+Readme
+</a>
+</div>
+hide_directive-->
+
# Smart Tolling Application

The **Metro Smart Tolling Application** is a high-precision Edge AI solution
@@ -44,8 +55,8 @@ for Intel edge devices.

The system uses the **Metro Edge Architecture** based on three key layers:

-- **Perception**: Deep Learning Streamer (DLStreamer) [processes 3/4 camera feeds](./how-it-works/perception-layer.md).
-- **Control**: SceneScape Controller [aggregates metadata](./how-it-works/analytics-pipeline.md).
+- **Perception**: Deep Learning Streamer (DL Streamer) [processes 3/4 camera feeds](./how-it-works/perception-layer.md).
+- **Control**: Intel® SceneScape Controller [aggregates metadata](./how-it-works/analytics-pipeline.md).
- **Analytics**: Node-RED [transforms events into traffic insights](./how-it-works/analytics-pipeline.md#node-red-transformation)
(Traffic Volume, Flow Efficiency, Tariffing).

16 changes: 8 additions & 8 deletions metro-ai-suite/smart-nvr/docs/user-guide/get-started.md
@@ -17,12 +17,12 @@ from your video data.
Smart NVR operates in a distributed architecture requiring multiple services across 3-4
devices for optimal performance:

-| Device | Service | Purpose |
-|--------|---------|---------|
-| Device 1 | VSS Search | Video search functionality |
-| Device 2 | VSS Summary | Video summarization |
-| Device 3 | VLM Microservice | AI-powered event descriptions (optional) |
-| Device 3/4 | Smart NVR App | Main application interface |
+| Device     | Service          | Purpose                                  |
+| ---------- | ---------------- | ---------------------------------------- |
+| Device 1   | VSS Search       | Video search functionality               |
+| Device 2   | VSS Summary      | Video summarization                      |
+| Device 3   | VLM Microservice | AI-powered event descriptions (optional) |
+| Device 3/4 | Smart NVR App    | Main application interface               |

### Software Dependencies

@@ -144,8 +144,8 @@ Re-run the application after [configuring](#step-2-configure-environment) the re
> - This feature is experimental and may be unstable due to underlying Frigate GenAI implementation.
> - Requires VLM microservice to be running.
> - Disabled by default for system stability.
-> - SmartNVR uses either Frigate or Scenescape for GenAI capabilities.
-> GenAI in both cannot be enabled at the same time. If Scenescape is enabled,
+> - SmartNVR uses either Frigate or Intel® SceneScape for GenAI capabilities.
+> GenAI in both cannot be enabled at the same time. If Intel® SceneScape is enabled,
> its capabilities are prioritized over Frigate, with Frigate used in "dumb" mode.
> - If `NVR_SCENESCAPE=true`, then `NVR_GENAI` must be set to `false`. Otherwise, an error is thrown.
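The mutual-exclusion constraint in the last bullet can be checked up front before starting the stack. The variable names come from the note above; the check itself is an illustrative sketch, not the application's actual validation code:

```shell
# Fail fast when both GenAI paths are enabled at once.
NVR_SCENESCAPE=true
NVR_GENAI=false
if [ "$NVR_SCENESCAPE" = "true" ] && [ "$NVR_GENAI" = "true" ]; then
  echo "error: NVR_GENAI must be false when NVR_SCENESCAPE=true" >&2
  exit 1
fi
echo "config ok"
```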

@@ -97,7 +97,7 @@ docker logs nvr-event-router -f

![SceneScape Enabled Interface](./_assets/Scenescape_enabled.png)

-When Intel® SceneScape is enabled (`NVR_SCENESCAPE=true`) and scenescape source is selected:
+When Intel® SceneScape is enabled (`NVR_SCENESCAPE=true`) and **"scenescape"** source is selected:

- Source dropdown shows both **"frigate"** and **"scenescape"** options
- **Count** field becomes visible and editable
8 changes: 4 additions & 4 deletions metro-ai-suite/smart-route-planning-agent/README.md
@@ -4,10 +4,10 @@ The Smart Route Planning Agent analyzes the route between a given source and des

## Get Started

-- [Get Started](./docs/user-guide/get-started.md): Step-by-step guide to get started with the agent.
-- [System Requirements](./docs/user-guide/get-started/system-requirements.md): Hardware and software requirements for running the agent.
-- [Build from Source](./docs/user-guide/get-started/build-from-source.md): Instructions for building the agent from source code.
-- [Environment Variables](./docs/user-guide/get-started/environment-variables.md): Configure the microservice through environment variables.
+- [Get Started](./docs/user-guide/get-started.md): Step-by-step guide to get started with the agent.
+- [System Requirements](./docs/user-guide/get-started/system-requirements.md): Hardware and software requirements for running the agent.
+- [Build from Source](./docs/user-guide/get-started/build-from-source.md): Instructions for building the agent from source code.
+- [Environment Variables](./docs/user-guide/get-started/environment-variables.md): Configure the microservice through environment variables.

## Learn More

@@ -1,3 +1,14 @@
+<!--hide_directive
+<div class="component_card_widget">
+<a class="icon_github" href="https://github.com/open-edge-platform/edge-ai-suites/tree/main/metro-ai-suite/smart-route-planning-agent">
+GitHub project
+</a>
+<a class="icon_document" href="https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/smart-route-planning-agent/README.md">
+Readme
+</a>
+</div>
+hide_directive-->
+
# Smart Route Planning Agent

The Smart Route Planning Agent is an AI-powered route optimization agent that uses multi-agent
@@ -17,7 +28,7 @@ to gather live analysis reports for informed routing decisions.

## How It Works

-The agent receives source and destination inputs, finds the shortest route from available
+The agent receives source and destination inputs, finds the shortest route from the available
routes, queries traffic intersection agents for live reports, and determines the optimal route.

![System Architecture Diagram](./_assets/ITS_architecture.png)
13 changes: 8 additions & 5 deletions metro-ai-suite/smart-traffic-intersection-agent/README.md
@@ -1,14 +1,17 @@
-## Smart Traffic Intersection Agent
+# Smart Traffic Intersection Agent

The Smart Traffic Intersection Agent application analyzes traffic scenarios at an intersection: it gives driving suggestions, sends alerts, and provides an interface through which other agents can plug in and retrieve information about that intersection. These deployments happen only at the edge, at each traffic intersection.

## Get Started
-- [Get Started](./docs/user-guide/get-started.md): Step-by-step guide to get started with the agent.
-- [System Requirements](./docs/user-guide/get-started/system-requirements.md): Hardware and software requirements for running the agent.
+
+- [Get Started](./docs/user-guide/get-started.md): Step-by-step guide to get started with the agent.
+- [System Requirements](./docs/user-guide/get-started/system-requirements.md): Hardware and software requirements for running the agent.

## How It Works
-- [Overview](./docs/user-guide/index.md): A high-level introduction to the agent.
-- [Build from Source](./docs/user-guide/get-started/build-from-source.md): Instructions for building the agent from source code.
+
+- [Overview](./docs/user-guide/index.md): A high-level introduction to the agent.
+- [Build from Source](./docs/user-guide/get-started/build-from-source.md): Instructions for building the agent from source code.

## Learn More

- [Release Notes](./docs/user-guide/release-notes.md): Information on the latest updates, improvements, and bug fixes.
@@ -1,3 +1,14 @@
+<!--hide_directive
+<div class="component_card_widget">
+<a class="icon_github" href="https://github.com/open-edge-platform/edge-ai-suites/tree/main/metro-ai-suite/smart-traffic-intersection-agent">
+GitHub project
+</a>
+<a class="icon_document" href="https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/smart-traffic-intersection-agent/README.md">
+Readme
+</a>
+</div>
+hide_directive-->
+
# Smart Traffic Intersection Agent

The Smart Traffic Intersection Agent is a comprehensive traffic analysis service that provides
@@ -25,9 +36,9 @@ The Smart Traffic Intersection stack includes the following containerized servic

- **MQTT Broker** (Eclipse Mosquitto) - Message broker for traffic data
- **DL Streamer Pipeline Server** - Video analytics and AI inference
-- **SceneScape Database** - Configuration and metadata storage
-- **SceneScape Web Server** - Management interface
-- **SceneScape Controller** - Orchestration service
+- **Intel® SceneScape Database** - Configuration and metadata storage
+- **Intel® SceneScape Web Server** - Management interface
+- **Intel® SceneScape Controller** - Orchestration service
- **VLM OpenVINO Serving** - Vision Language Model inference
- **Traffic Intelligence** - Real-time traffic analysis with dual interface (API and UI)
