# How It Works

This section provides a high-level view of how the application integrates with a
typical system architecture.

## Diagram Description

### Inputs

Video recordings are used to simulate a live feed from cameras deployed at a toll.
The application can be configured to work with live cameras.

- **Video Files** - Recordings from tolling cameras that capture video
  simultaneously from front, rear, and side profiles.
- **Scene Database** - Pre-configured intersection scene with a satellite view of
  the tolling area, calibrated cameras, and regions of interest.

### Core (Processing)

- [**Video Analytics**](./perception-layer.md) - The Deep Learning Streamer
  Pipeline Server (DL Streamer Pipeline Server) uses a pre-trained object
  detection model to [generate object detection metadata](#zero-copy-video-pipeline),
  with a local NTP server providing synchronized timestamps. This metadata is
  published to the MQTT broker.
- **Sensor Fusion** - The Scene Controller Microservice fuses the metadata from
  video analytics using scene data obtained through the Scene Management API.
  It uses the fused tracks and the configured analytics (regions of interest)
  to generate events that are published to the MQTT broker.
- [**Aggregate Scene Analytics**](#node-red-transformation) - Region-of-interest
  analytics are read from the MQTT broker and
  [stored in an InfluxDB bucket](#storage-influxdb), which enables time-series
  analysis through Flux queries.

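As a rough sketch of how a downstream service might consume this metadata, the snippet below parses a detection message pulled off the MQTT broker. The JSON schema and field names here are illustrative assumptions, not the exact format published by the DL Streamer Pipeline Server.

```python
import json


def parse_detections(payload: str, min_confidence: float = 0.5):
    """Extract (label, confidence) pairs from a DL Streamer-style
    metadata message. The schema is an assumption; adjust the keys
    to match the messages your pipeline actually publishes."""
    message = json.loads(payload)
    results = []
    for obj in message.get("objects", []):
        det = obj.get("detection", {})
        conf = det.get("confidence", 0.0)
        if conf >= min_confidence:
            results.append((det.get("label"), conf))
    return results


# Example message resembling what the pipeline might publish via MQTT.
sample = json.dumps({
    "timestamp": 1700000000,
    "objects": [
        {"detection": {"label": "vehicle", "confidence": 0.92}},
        {"detection": {"label": "license_plate", "confidence": 0.31}},
    ],
})
print(parse_detections(sample))  # → [('vehicle', 0.92)]
```

The low-confidence plate detection is filtered out, mirroring the thresholding a real consumer would apply before fusion.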
### Live Feed Output

- Fused object tracks are available on the MQTT broker and visualized through
  the Scene Management UI.
- [Aggregated toll analytics](#analytics-pipeline-downstream) are visualized
  through a Grafana dashboard.

### Workflow

1. Looped video recordings or an RTSP stream are fed into DL Streamer.
2. Trained AI models detect vehicles and license plates.
3. Metadata is published to MQTT.
4. SceneScape maps detections to scene regions to determine the exact location
   of objects in the scene.
5. Exit events are generated when vehicles leave a region.
6. Node-RED subscribes to SceneScape topics and processes only finalized exit events.
7. Data is written to InfluxDB so that every consumer accesses consistent information.
8. Grafana visualizes real-time and historical data, enabling access to metrics
   and vehicle details.
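Steps 6 and 7 can be sketched in code: a minimal illustration of turning a finalized exit event into an InfluxDB line-protocol record. The event fields (`event_type`, `region`, `dwell_seconds`) and the measurement name are assumptions for illustration, not the actual SceneScape event schema.

```python
from typing import Optional


def to_line_protocol(event: dict) -> Optional[str]:
    """Convert a finalized region-exit event into an InfluxDB
    line-protocol record; returns None for non-exit events."""
    if event.get("event_type") != "exit":
        return None
    tags = f"region={event['region']},category={event['category']}"
    fields = f"dwell_seconds={event['dwell_seconds']}"
    return f"vehicle_exit,{tags} {fields} {event['timestamp_ns']}"


event = {
    "event_type": "exit",
    "region": "lane_2",
    "category": "truck",
    "dwell_seconds": 4.2,
    "timestamp_ns": 1700000000000000000,
}
print(to_line_protocol(event))
# → vehicle_exit,region=lane_2,category=truck dwell_seconds=4.2 1700000000000000000
```

Non-exit events are dropped, matching the "only finalized exit events" rule above.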

## Optimizations

The system achieves high-throughput processing on edge hardware through specific
optimizations defined in `config.json`. The [`docker-compose.yml`](./_assets/docker-compose.yml)
file defines all the services, and the pipelines are configured in the
`config.json` file.

### Zero-Copy Video Pipeline

Unlike standard OpenCV pipelines that copy frames to CPU RAM, this solution uses
the **VA surface sharing** pre-processing backend.

- **Mechanism:** Decoded video frames remain in GPU memory (`video/x-raw(memory:VAMemory)`).
- **Benefit:** Zero-copy inference eliminates PCIe bandwidth bottlenecks,
  reducing end-to-end latency by ~40%.
- **Config Evidence:** `pre-process-backend=va-surface-sharing` is used in all
  `gvadetect` elements.
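For illustration, a DL Streamer pipeline that keeps decoded frames in GPU memory might be launched as sketched below. The input file, model path, and surrounding elements are placeholders; the load-bearing parts are the `VAMemory` caps filter and the `pre-process-backend` property on `gvadetect`.

```shell
gst-launch-1.0 filesrc location=toll_cam_front.mp4 ! decodebin3 ! vapostproc ! \
  "video/x-raw(memory:VAMemory)" ! \
  gvadetect model=vehicle-detection.xml device=GPU \
            pre-process-backend=va-surface-sharing ! \
  gvametaconvert ! gvametapublish method=mqtt ! fakesink
```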

### Dynamic ROI Inference (Hierarchical Execution)

To maximize efficiency, heavy neural networks (like Axle Counting) do not run on
the full 4K frame.

- **Logic:** The "Vehicle Type" model runs first to find the bounding box.
- **Optimization:** The Axle model is configured with `inference-region=roi-list`,
  forcing it to execute *only* within the coordinates of the detected vehicle.
- **Impact:** Reduces pixel processing load by >80% for sparse traffic scenes.
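The impact figure can be sanity-checked with simple arithmetic; a sketch assuming a single detected vehicle occupying roughly a 1000x700-pixel bounding box in a 4K frame:

```python
frame_px = 3840 * 2160   # full 4K frame
roi_px = 1000 * 700      # assumed bounding box of one detected vehicle
reduction = 1 - roi_px / frame_px
print(f"pixels skipped: {reduction:.0%}")  # → pixels skipped: 92%
```

Even a generously sized single vehicle leaves more than 80% of the frame untouched by the axle model, consistent with the claim above for sparse scenes.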

### Hybrid Workload Distribution

The pipeline intelligently maps models to available accelerators to prevent
resource contention:

- **GPU (Flex Series):** Handles heavy convolution tasks (Vehicle Detection, LPR,
  Axle Counting).
- **CPU (Xeon):** Handles lighter classification tasks (Vehicle Color) and
  post-processing adapters (`gvapython`).
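As a sketch only, this split is expressed in the pipeline definition through each inference element's `device` property (the model and module names below are placeholders):

```shell
... ! gvadetect model=vehicle-detection.xml device=GPU \
        pre-process-backend=va-surface-sharing ! \
      gvaclassify model=vehicle-color.xml device=CPU ! \
      gvapython module=postprocess.py ! ...
```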

## Analytics Pipeline (Downstream)

Raw metadata is valuable, but actionable insights come from the Analytics Pipeline.

### Node-RED Transformation

- **Input:** The **MQTT IN Node** subscribes to `scenescape/event/region/+/+/objects`.
- **Logic:** The **Function node** aggregates counts per region and calculates
  **Dwell Time** (congestion).
- **Output:** The **InfluxDB OUT Node** writes normalized data points to InfluxDB.
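In Node-RED the Function node is written in JavaScript; as an illustration only, its aggregation logic can be sketched in Python along these lines (the event fields are assumptions):

```python
from collections import defaultdict


def aggregate(events):
    """Count exits and average dwell time per region."""
    counts = defaultdict(int)
    dwell_totals = defaultdict(float)
    for e in events:
        region = e["region"]
        counts[region] += 1
        dwell_totals[region] += e["dwell_seconds"]
    return {r: {"count": counts[r],
                "avg_dwell": dwell_totals[r] / counts[r]}
            for r in counts}


events = [
    {"region": "lane_1", "dwell_seconds": 3.0},
    {"region": "lane_1", "dwell_seconds": 5.0},
    {"region": "lane_2", "dwell_seconds": 2.0},
]
print(aggregate(events))
# → {'lane_1': {'count': 2, 'avg_dwell': 4.0}, 'lane_2': {'count': 1, 'avg_dwell': 2.0}}
```

Each output record corresponds to one normalized data point written to InfluxDB.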

### Storage (InfluxDB)

InfluxDB acts as a single source of truth. All critical and shared data is
stored in one location, ensuring that every user and system accesses the same
accurate, consistent information.

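As an illustration, the stored points can then be analyzed with a Flux query such as the following sketch; the bucket and measurement names are assumptions:

```flux
from(bucket: "toll-analytics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "vehicle_exit")
  |> group(columns: ["region"])
  |> count()
```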

### Visualization (Grafana)

The system ships with a pre-configured dashboard (`anthem-intersection.json` schema)
focusing on Traffic Volume, Flow Efficiency, and Safety Alerts.

## Learn More

- [System Requirements](./get-started/system-requirements.md)
- [Get Started](./get-started.md)
- [API Reference](./api-reference.md)
- [Support and Troubleshooting](./troubleshooting.md)

<!--hide_directive
:::{toctree}
:hidden:

./perception-layer

:::
hide_directive-->