In this document we present an Intel® software reference implementation (hereinafter abbreviated as SW RI) of the Metro AI Suite Sensor Fusion for Traffic Management, which integrates sensor fusion of camera and mmWave radar (a.k.a. ISF "C+R" or AIO "C+R"). The detailed steps for running this SW RI on the NEPRA base platform are also described.
The internal project code name is "Garnet Park".
As shown in Fig. 1, the E2E pipeline of this SW RI includes the following major blocks (workloads):
- Dataset loading and data format conversion
- Radar signal processing
- Video analytics
- Data fusion
- Visualization
All the above workloads of this SW RI can run on a single Intel SoC processor, which provides all the required heterogeneous computing capabilities. To maximize performance on Intel processors, we optimized this SW RI using Intel software toolkits in addition to open-source software libraries.
Figure 1. E2E SW pipelines of 2 use cases of sensor fusion C+R (Camera+Radar): (1) use case #1: 1C+1R; (2) use case #2: 4C+4R.

- Intel® Distribution of OpenVINO™ Toolkit
  - Version: 2024.6
- RADDet dataset
- Platform
  - Intel® Celeron® Processor 7305E (1C+1R/2C+1R use case)
  - Intel® Core™ Ultra 7 Processor 165H (4C+4R use case)
  - 13th Gen Intel® Core™ i7-13700 (16C+4R use case)
- AI Inference Service:
  - Media Processing (Camera)
  - Radar Processing (mmWave Radar)
  - Sensor Fusion
- Demo Application
AI Inference Service is based on the HVA pipeline framework. In this SW RI, it includes the functions of DL inference, radar signal processing, and data fusion.
AI Inference Service exposes both RESTful API and gRPC API to clients, so that a pipeline defined and requested by a client can be run within this service.
- RESTful API: listens on port 50051
- gRPC API: listens on port 50052

The ports can be changed in the service config file:

vim $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config

...
[HTTP]
address=0.0.0.0
RESTfulPort=50051
gRPCPort=50052

Currently we support four display types: media, radar, media_radar, media_fusion.
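As a quick sanity check, the two configured ports can be read straight out of the config file. The sketch below writes a sample [HTTP] fragment to a temp file so it is self-contained; substitute the real AiInference.config path on your system.

```shell
# Sketch: read the configured ports from an AiInference.config-style file.
# A sample fragment is written to a temp file here for illustration;
# substitute $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config.
cfg_file=$(mktemp)
printf '[HTTP]\naddress=0.0.0.0\nRESTfulPort=50051\ngRPCPort=50052\n' > "$cfg_file"
rest_port=$(sed -n 's/^RESTfulPort=//p' "$cfg_file")
grpc_port=$(sed -n 's/^gRPCPort=//p' "$cfg_file")
echo "RESTful: $rest_port, gRPC: $grpc_port"
rm -f "$cfg_file"
```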
- Perform a fresh installation of Ubuntu* Desktop 24.04 on the target system.
- Configure your proxy:

  export http_proxy=<Your-Proxy>
  export https_proxy=<Your-Proxy>
| Setting | Step |
|---|---|
| Enable the Hidden BIOS Setting in Seavo Platform | Press "Right Shift+F7", then change Debug Setup Menu from [Enabled] to [Disabled] |
| Disable VT-d in BIOS | Intel Advanced Menu → System Agent (SA) Configuration → VT-d setup menu → VT-d Note: If VT-d can’t be disabled, please disable Intel Advanced Menu → CPU Configuration → X2APIC |
| Disable SAGV in BIOS | Intel Advanced Menu → [System Agent (SA) Configuration] → Memory configuration → SAGV |
| Enable NPU Device | Intel Advanced Menu → CPU Configuration → Active SOC-North Efficient-cores Intel Advanced Menu → System Agent (SA) Configuration → NPU Device |
| TDP Configuration | SOC TDP configuration is very important for performance. Suggestion: TDP = 45W. For extreme heavy workload, TDP = 64W ---TDP = 45W settings: Intel Advanced → Power & Performance → CPU - Power Management Control → Config TDP Configurations → Power Limit 1 <45000> ---TDP = 64W settings: Intel Advanced → Power & Performance → CPU - Power Management Control → Config TDP Configurations → Configurable TDP Boot Mode [Level2] |
| Setting | Step |
|---|---|
| Enable ResizeBar in BIOS | Intel Advanced Menu -> System Agent (SA) Configuration -> PCI Express Configuration -> PCIE Resizable BAR Support |
- Install driver related libs

  Update the kernel and install the GPU and NPU (MTL only) drivers:

  bash install_driver_related_libs.sh

  Note that this step may restart the machine several times. Rerun the script after each restart until you see the output:

  All driver libs installed successfully.

- Install project related libs

  Install Boost, Spdlog, Thrift, MKL, OpenVINO, gRPC, Level Zero, oneVPL, etc.:

  bash install_project_related_libs.sh

- Set $PROJ_DIR

  cd Metro_AI_Suite_Sensor_Fusion_for_Traffic_Management_metro/sensor_fusion_service
  export PROJ_DIR=$PWD

- Prepare global radar configs in folder /opt/datasets:

  sudo ln -s $PROJ_DIR/ai_inference/deployment/datasets /opt/datasets

- Prepare models in folder /opt/models:

  sudo ln -s $PROJ_DIR/ai_inference/deployment/models /opt/models

- Prepare offline radar results for 4C4R/16C4R:

  sudo cp $PROJ_DIR/ai_inference/deployment/datasets/radarResults.csv /opt

- Build the project:

  bash -x build.sh
For how to get the RADDet dataset, please refer to the "How To Get RADDET Dataset" section.
Upon success, bin files will be extracted and saved to $DATASET_ROOT/bin_files_{VERSION}.
NOTE: the latest converted dataset version should be v1.0.
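A quick way to confirm the conversion produced the expected folder is to count the extracted bin files. The sketch below fabricates a tiny bin_files_v1.0 layout in a temp dir purely for illustration; point DATASET_ROOT at your real dataset root instead.

```shell
# Sketch: verify the converted bin files exist. A fake layout is created
# in a temp dir so this snippet is self-contained; substitute your real
# $DATASET_ROOT in practice.
DATASET_ROOT=$(mktemp -d)
mkdir -p "$DATASET_ROOT/bin_files_v1.0"
touch "$DATASET_ROOT/bin_files_v1.0/sample_frame.bin"   # stand-in file
bin_count=$(find "$DATASET_ROOT/bin_files_v1.0" -name '*.bin' | wc -l)
echo "bin files: $bin_count"
rm -rf "$DATASET_ROOT"
```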
In this section, we describe how to run Intel® Metro AI Suite Sensor Fusion for Traffic Management application.
The Intel® Metro AI Suite Sensor Fusion for Traffic Management application supports different pipelines, using topology JSON files to describe each pipeline's topology. The defined pipeline topologies can be found in sec 5.1 Resources Summary.
There are two steps required for running the sensor fusion application:
- Start AI Inference service, more details can be found at sec 5.2 Start Service
- Run the application entry program, more details can be found at sec 5.3 Run Entry Program
Besides, users can test each component (without display) by following the guides in sec 5.3.2 1C1R Unit Tests, sec 5.3.4 4C4R Unit Tests, sec 5.3.6 2C1R Unit Tests, and sec 5.3.8 16C4R Unit Tests.
- Local File Pipeline for Media pipeline
  - Json File: localMediaPipeline.json
    File location: ai_inference/test/configs/raddet/1C1R/localMediaPipeline.json
  - Pipeline Description:
    input -> decode -> detection -> tracking -> output
- Local File Pipeline for mmWave Radar pipeline
  - Json File: localRadarPipeline.json
    File location: ai_inference/test/configs/raddet/1C1R/localRadarPipeline.json
  - Pipeline Description:
    input -> preprocess -> radar_detection -> clustering -> tracking -> output
- Local File Pipeline for Camera + Radar (1C+1R) sensor fusion pipeline
  - Json File: localFusionPipeline.json
    File location: ai_inference/test/configs/raddet/1C1R/localFusionPipeline.json
  - Pipeline Description:

    input | -> decode -> detector -> tracker ->                           |
          | -> preprocess -> radar_detection -> clustering -> tracking -> | -> coordinate_transform -> fusion -> output
- Local File Pipeline for Camera + Radar (4C+4R) sensor fusion pipeline
  - Json File: localFusionPipeline.json
    File location: ai_inference/test/configs/raddet/4C4R/localFusionPipeline.json
  - Pipeline Description (four parallel 1C+1R branches; radar results are read from offline files):

    input | -> decode -> detector -> tracker -> |
          | -> radarOfflineResults ->           | -> coordinate_transform -> fusion -> |
    input | -> decode -> detector -> tracker -> |
          | -> radarOfflineResults ->           | -> coordinate_transform -> fusion -> | -> output
    input | -> decode -> detector -> tracker -> |
          | -> radarOfflineResults ->           | -> coordinate_transform -> fusion -> |
    input | -> decode -> detector -> tracker -> |
          | -> radarOfflineResults ->           | -> coordinate_transform -> fusion -> |
- Local File Pipeline for Camera + Radar (2C+1R) sensor fusion pipeline
  - Json File: localFusionPipeline.json
    File location: ai_inference/test/configs/raddet/2C1R/localFusionPipeline.json
  - Pipeline Description:

          | -> decode -> detector -> tracker ->                           |
    input | -> decode -> detector -> tracker ->                           | -> Camera2CFusion -> fusion -> | -> output
          | -> preprocess -> radar_detection -> clustering -> tracking -> |
- Local File Pipeline for Camera + Radar (16C+4R) sensor fusion pipeline
  - Json File: localFusionPipeline.json
    File location: ai_inference/test/configs/raddet/16C4R/localFusionPipeline.json
  - Pipeline Description (four parallel 4C+1R groups; radar results are read from offline files):

          | -> decode -> detector -> tracker -> |
          | -> decode -> detector -> tracker -> |
    input | -> decode -> detector -> tracker -> | -> Camera4CFusion -> fusion -> |
          | -> decode -> detector -> tracker -> |
          | -> radarOfflineResults ->           |

          | -> decode -> detector -> tracker -> |
          | -> decode -> detector -> tracker -> |
    input | -> decode -> detector -> tracker -> | -> Camera4CFusion -> fusion -> | -> output
          | -> decode -> detector -> tracker -> |
          | -> radarOfflineResults ->           |

          | -> decode -> detector -> tracker -> |
          | -> decode -> detector -> tracker -> |
    input | -> decode -> detector -> tracker -> | -> Camera4CFusion -> fusion -> |
          | -> decode -> detector -> tracker -> |
          | -> radarOfflineResults ->           |

          | -> decode -> detector -> tracker -> |
          | -> decode -> detector -> tracker -> |
    input | -> decode -> detector -> tracker -> | -> Camera4CFusion -> fusion -> |
          | -> decode -> detector -> tracker -> |
          | -> radarOfflineResults ->           |
Open a terminal, run the following commands:
cd $PROJ_DIR
sudo bash -x run_service_bare.sh
# Output logs:
[2023-06-26 14:34:42.970] [DualSinks] [info] MaxConcurrentWorkload sets to 1
[2023-06-26 14:34:42.970] [DualSinks] [info] MaxPipelineLifeTime sets to 300s
[2023-06-26 14:34:42.970] [DualSinks] [info] Pipeline Manager pool size sets to 1
[2023-06-26 14:34:42.970] [DualSinks] [trace] [HTTP]: uv loop inited
[2023-06-26 14:34:42.970] [DualSinks] [trace] [HTTP]: Init completed
[2023-06-26 14:34:42.971] [DualSinks] [trace] [HTTP]: http server at 0.0.0.0:50051
[2023-06-26 14:34:42.971] [DualSinks] [trace] [HTTP]: running starts
[2023-06-26 14:34:42.971] [DualSinks] [info] Server set to listen on 0.0.0.0:50052
[2023-06-26 14:34:42.972] [DualSinks] [info] Server starts 1 listener. Listening starts
[2023-06-26 14:34:42.972] [DualSinks] [trace] Connection handle with uid 0 created
[2023-06-26 14:34:42.972] [DualSinks] [trace] Add connection with uid 0 into the conn pool
NOTE-1: the workload (default: 4) can be configured in the file:

$PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config

...
[Pipeline]
maxConcurrentWorkload=4

NOTE-2: to stop the service, run the following command:

sudo pkill Hce

All executable files are located at: $PROJ_DIR/build/bin
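Editing maxConcurrentWorkload by hand works, but the switch can also be scripted with sed. The sketch below operates on a throwaway copy of the [Pipeline] fragment so it is self-contained; swap in the real AiInference.config path in practice.

```shell
# Sketch: flip maxConcurrentWorkload in an AiInference.config-style file.
# A temp copy is used here for illustration; substitute
# $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config.
cfg=$(mktemp)
printf '[Pipeline]\nmaxConcurrentWorkload=4\n' > "$cfg"
sed -i 's/^maxConcurrentWorkload=.*/maxConcurrentWorkload=1/' "$cfg"
workload=$(sed -n 's/^maxConcurrentWorkload=//p' "$cfg")
echo "maxConcurrentWorkload is now $workload"
rm -f "$cfg"
```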
Usage:
Usage: CRSensorFusionDisplay <host> <port> <json_file> <total_stream_num> <repeats> <data_path> <display_type> [<save_flag: 0 | 1>] [<pipeline_repeats>] [<fps_window: unsigned>] [<cross_stream_num>] [<warmup_flag: 0 | 1>] [<logo_flag: 0 | 1>]
--------------------------------------------------------------------------------
Environment requirement:
unset http_proxy;unset https_proxy;unset HTTP_PROXY;unset HTTPS_PROXY
- host: use 127.0.0.1 to call from localhost.
- port: configured as 50052; can be changed by modifying the file $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config before starting the service.
- json_file: AI pipeline topology file.
- total_stream_num: to control the number of input streams.
- repeats: to run tests multiple times, so that we can get more accurate performance numbers.
- data_path: multi-sensor binary files folder for input.
- display_type: currently supports media, radar, media_radar, media_fusion.
- save_flag: whether to save display results into a video.
- pipeline_repeats: number of pipeline repeats.
- fps_window: the number of recently processed frames used to calculate FPS. 0 means all processed frames are used.
- cross_stream_num: the number of streams that run in a single pipeline.
- warmup_flag: warm-up flag before pipeline start.
- logo_flag: whether to add the Intel logo in the display.
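The fps_window parameter deserves a quick illustration. Assuming FPS is computed from the time span covered by the last N frame-completion timestamps (our reading of the parameter; the timestamps below are fabricated), a windowed FPS can be sketched as:

```shell
# Sketch: windowed FPS over the last N frames. Timestamps (seconds) are
# made up for illustration; fps_window=0 would use all frames instead.
timestamps="0.0 0.1 0.2 0.4 0.6"
fps=$(echo $timestamps | awk -v N=3 '{
  n = NF < N ? NF : N;            # clamp window to available frames
  span = $NF - $(NF - n + 1);     # time covered by the last n frames
  printf "%.1f", (n - 1) / span   # n frames -> n-1 inter-frame intervals
}')
echo "windowed FPS: $fps"
```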
More specifically, open another terminal, run the following commands:
# multi-sensor inputs test-case
sudo -E ./build/bin/CRSensorFusionDisplay 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/libradar/localFusionPipeline_libradar.json 1 1 /path-to-dataset media_fusion

Note: run as root if you want to get GPU utilization profiling.
In this section, the unit tests of four major components will be described: media processing, radar processing, the fusion pipeline without display, and other tools for intermediate results.
Usage:
Usage: testGRPCLocalPipeline <host> <port> <json_file> <total_stream_num> <repeats> <data_path> <media_type> [<pipeline_repeats>] [<cross_stream_num>] [<warmup_flag: 0 | 1>]
--------------------------------------------------------------------------------
Environment requirement:
unset http_proxy;unset https_proxy;unset HTTP_PROXY;unset HTTPS_PROXY
- host: use 127.0.0.1 to call from localhost.
- port: configured as 50052; can be changed by modifying the file $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config before starting the service.
- json_file: AI pipeline topology file.
- total_stream_num: to control the number of input video streams.
- repeats: to run tests multiple times, so that we can get more accurate performance numbers.
- abs_data_path: input data; remember to use an absolute data path, or it may cause errors.
- media_type: currently supports image, video, multisensor.
- pipeline_repeats: number of pipeline repeats.
- cross_stream_num: the number of streams that run in a single pipeline.
Open another terminal, run the following commands:
# media test-case
./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/localMediaPipeline.json 1 1 /path-to-dataset multisensor

Open another terminal, run the following commands:
# radar test-case
./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/libradar/localRadarPipeline_libradar.json 1 1 /path-to-dataset multisensor

Open another terminal, run the following commands:
# fusion test-case
./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/libradar/localFusionPipeline_libradar.json 1 1 /path-to-dataset multisensor

# image decode test-case
./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/gpuLocalVPLDecodeImagePipeline.json 1 1000 $PROJ_DIR/_images/images image

# media pipeline with display
./build/bin/MediaDisplay 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/localMediaPipeline.json 1 1 /path-to-dataset multisensor

# radar pipeline with point-cloud (pcl) preprocessing
./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/libradar/localRadarPipeline_pcl_libradar.json 1 1 /path-to-dataset multisensor

# save radar pipeline tracking results
./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/libradar/localRadarPipeline_saveResult_libradar.json 1 1 /path-to-dataset multisensor

# save radar pipeline point-cloud results
./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/libradar/localRadarPipeline_savepcl_libradar.json 1 1 /path-to-dataset multisensor

# save radar pipeline clustering results
./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/libradar/localRadarPipeline_saveClustering_libradar.json 1 1 /path-to-dataset multisensor

## no need to run the service
export HVA_NODE_DIR=$PWD/build/lib
source /opt/intel/openvino_2024/setupvars.sh
source /opt/intel/oneapi/setvars.sh
./build/bin/testRadarPerformance ai_inference/test/configs/raddet/1C1R/libradar/localRadarPipeline_libradar.json /path-to-dataset 1

# radar point-cloud display
./build/bin/CRSensorFusionRadarDisplay 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/libradar/localRadarPipeline_savepcl_libradar.json 1 1 /path-to-dataset pcl

# radar clustering display
./build/bin/CRSensorFusionRadarDisplay 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/libradar/localRadarPipeline_saveClustering_libradar.json 1 1 /path-to-dataset clustering

# radar tracking display
./build/bin/CRSensorFusionRadarDisplay 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/libradar/localRadarPipeline_libradar.json 1 1 /path-to-dataset tracking

All executable files are located at: $PROJ_DIR/build/bin
Usage:
Usage: CRSensorFusion4C4RDisplay <host> <port> <json_file> <additional_json_file> <total_stream_num> <repeats> <data_path> <display_type> [<save_flag: 0 | 1>] [<pipeline_repeats>] [<cross_stream_num>] [<warmup_flag: 0 | 1>] [<logo_flag: 0 | 1>]
--------------------------------------------------------------------------------
Environment requirement:
unset http_proxy;unset https_proxy;unset HTTP_PROXY;unset HTTPS_PROXY
- host: use 127.0.0.1 to call from localhost.
- port: configured as 50052; can be changed by modifying the file $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config before starting the service.
- json_file: AI pipeline topology file.
- additional_json_file: additional AI pipeline topology file.
- total_stream_num: to control the number of input streams.
- repeats: to run tests multiple times, so that we can get more accurate performance numbers.
- data_path: multi-sensor binary files folder for input.
- display_type: currently supports media, radar, media_radar, media_fusion.
- save_flag: whether to save display results into a video.
- pipeline_repeats: number of pipeline repeats.
- cross_stream_num: the number of streams that run in a single pipeline.
- warmup_flag: warm-up flag before pipeline start.
- logo_flag: whether to add the Intel logo in the display.
More specifically, open another terminal, run the following commands:
# multi-sensor inputs test-case
sudo -E ./build/bin/CRSensorFusion4C4RDisplay 127.0.0.1 50052 ai_inference/test/configs/raddet/4C4R/localFusionPipeline.json ai_inference/test/configs/raddet/4C4R/localFusionPipeline_npu.json 4 1 /path-to-dataset media_fusion

Note: run as root if you want to get GPU utilization profiling.
To run 4C+4R with cross-stream support, for example, process 3 streams on GPU with 1 thread and the other 1 stream on NPU in another thread, run the following command:
# multi-sensor inputs test-case
sudo -E ./build/bin/CRSensorFusion4C4RDisplayCrossStream 127.0.0.1 50052 ai_inference/test/configs/raddet/4C4R/cross-stream/localFusionPipeline.json ai_inference/test/configs/raddet/4C4R/cross-stream/localFusionPipeline_npu.json 4 1 /path-to-dataset media_fusion save_flag 1 3

For the command above, if you encounter problems with OpenCV due to a remote connection, you can try the following command, which sets the save flag to 2, meaning that the video will be saved locally without needing to be shown on the screen:
# multi-sensor inputs test-case
sudo -E ./build/bin/CRSensorFusion4C4RDisplayCrossStream 127.0.0.1 50052 ai_inference/test/configs/raddet/4C4R/cross-stream/localFusionPipeline.json ai_inference/test/configs/raddet/4C4R/cross-stream/localFusionPipeline_npu.json 4 1 /path-to-dataset media_fusion 2 1 3

In this section, the unit tests of two major components will be described: the fusion pipeline without display, and media processing.
Usage:
Usage: testGRPC4C4RPipeline <host> <port> <json_file> <additional_json_file> <total_stream_num> <repeats> <data_path> [<pipeline_repeats>] [<cross_stream_num>] [<warmup_flag: 0 | 1>]
--------------------------------------------------------------------------------
Environment requirement:
unset http_proxy;unset https_proxy;unset HTTP_PROXY;unset HTTPS_PROXY
- host: use 127.0.0.1 to call from localhost.
- port: configured as 50052; can be changed by modifying the file $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config before starting the service.
- json_file: AI pipeline topology file.
- additional_json_file: additional AI pipeline topology file.
- total_stream_num: to control the number of input video streams.
- repeats: to run tests multiple times, so that we can get more accurate performance numbers.
- data_path: input data; remember to use an absolute data path, or it may cause errors.
- pipeline_repeats: number of pipeline repeats.
- cross_stream_num: the number of streams that run in a single pipeline.
- warmup_flag: warm-up flag before pipeline start.
Set offline radar CSV file path
First, set the offline radar CSV file path in both localFusionPipeline.json (file location: ai_inference/test/configs/raddet/4C4R/localFusionPipeline.json) and localFusionPipeline_npu.json (file location: ai_inference/test/configs/raddet/4C4R/localFusionPipeline_npu.json) with "Configure String": "RadarDataFilePath=(STRING)/opt/radarResults.csv", like below:
{
"Node Class Name": "RadarResultReadFileNode",
......
"Configure String": "......;RadarDataFilePath=(STRING)/opt/radarResults.csv"
},

The method for generating offline radar files is described in sec 5.3.2.7 Save radar pipeline tracking results. Alternatively, you can use pre-prepared data with the command below:
sudo cp $PROJ_DIR/ai_inference/deployment/datasets/radarResults.csv /opt

Open another terminal, run the following commands:
# fusion test-case
sudo -E ./build/bin/testGRPC4C4RPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/4C4R/localFusionPipeline.json ai_inference/test/configs/raddet/4C4R/localFusionPipeline_npu.json 4 1 /path-to-dataset

Open another terminal, run the following commands:
# fusion test-case
sudo -E ./build/bin/testGRPC4C4RPipelineCrossStream 127.0.0.1 50052 ai_inference/test/configs/raddet/4C4R/cross-stream/localFusionPipeline.json ai_inference/test/configs/raddet/4C4R/cross-stream/localFusionPipeline_npu.json 4 1 /path-to-dataset 1 3

Open another terminal, run the following commands:
# media test-case
sudo -E ./build/bin/testGRPC4C4RPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/4C4R/localMediaPipeline.json ai_inference/test/configs/raddet/4C4R/localMediaPipeline_npu.json 4 1 /path-to-dataset

# cpu detection test-case
sudo -E ./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/UTCPUDetection-yoloxs.json 1 1 /path-to-dataset multisensor

# gpu detection test-case
sudo -E ./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/UTGPUDetection-yoloxs.json 1 1 /path-to-dataset multisensor

# npu detection test-case
sudo -E ./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/UTNPUDetection-yoloxs.json 1 1 /path-to-dataset multisensor

All executable files are located at: $PROJ_DIR/build/bin
Usage:
Usage: CRSensorFusion2C1RDisplay <host> <port> <json_file> <total_stream_num> <repeats> <data_path> <display_type> [<save_flag: 0 | 1>] [<pipeline_repeats>] [<fps_window: unsigned>] [<cross_stream_num>] [<warmup_flag: 0 | 1>] [<logo_flag: 0 | 1>]
--------------------------------------------------------------------------------
Environment requirement:
unset http_proxy;unset https_proxy;unset HTTP_PROXY;unset HTTPS_PROXY

- host: use 127.0.0.1 to call from localhost.
- port: configured as 50052; can be changed by modifying the file $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config before starting the service.
- json_file: AI pipeline topology file.
- total_stream_num: to control the number of input streams.
- repeats: to run tests multiple times, so that we can get more accurate performance numbers.
- data_path: multi-sensor binary files folder for input.
- display_type: currently supports media, radar, media_radar, media_fusion.
- save_flag: whether to save display results into a video.
- pipeline_repeats: number of pipeline repeats.
- fps_window: the number of recently processed frames used to calculate FPS. 0 means all processed frames are used.
- cross_stream_num: the number of streams that run in a single pipeline.
- warmup_flag: warm-up flag before pipeline start.
- logo_flag: whether to add the Intel logo in the display.
More specifically, open another terminal, run the following commands:
# multi-sensor inputs test-case
sudo -E ./build/bin/CRSensorFusion2C1RDisplay 127.0.0.1 50052 ai_inference/test/configs/raddet/2C1R/localFusionPipeline_libradar.json 1 1 /path-to-dataset media_fusion

Note: run as root if you want to get GPU utilization profiling.
In this section, the unit tests of three major components will be described: media processing, radar processing, and the fusion pipeline without display.
Usage:
Usage: testGRPC2C1RPipeline <host> <port> <json_file> <total_stream_num> <repeats> <data_path> <media_type> [<pipeline_repeats>] [<cross_stream_num>] [<warmup_flag: 0 | 1>]
--------------------------------------------------------------------------------
Environment requirement:
unset http_proxy;unset https_proxy;unset HTTP_PROXY;unset HTTPS_PROXY
- host: use 127.0.0.1 to call from localhost.
- port: configured as 50052; can be changed by modifying the file $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config before starting the service.
- json_file: AI pipeline topology file.
- total_stream_num: to control the number of input video streams.
- repeats: to run tests multiple times, so that we can get more accurate performance numbers.
- abs_data_path: input data; remember to use an absolute data path, or it may cause errors.
- media_type: currently supports image, video, multisensor.
- pipeline_repeats: number of pipeline repeats.
- cross_stream_num: the number of streams that run in a single pipeline.
Open another terminal, run the following commands:
# media test-case
./build/bin/testGRPC2C1RPipeline 127.0.0.1 50052 ./ai_inference/test/configs/raddet/2C1R/localMediaPipeline.json 1 1 /path-to-dataset multisensor

Open another terminal, run the following commands:
# radar test-case
./build/bin/testGRPC2C1RPipeline 127.0.0.1 50052 ./ai_inference/test/configs/raddet/2C1R/localRadarPipeline_libradar.json 1 1 /path-to-dataset multisensor

Open another terminal, run the following commands:
# fusion test-case
./build/bin/testGRPC2C1RPipeline 127.0.0.1 50052 ./ai_inference/test/configs/raddet/2C1R/localFusionPipeline_libradar.json 1 1 /path-to-dataset multisensor

All executable files are located at: $PROJ_DIR/build/bin
Usage:
Usage: CRSensorFusion16C4RDisplay <host> <port> <json_file> <total_stream_num> <repeats> <data_path> <display_type> [<save_flag: 0 | 1>] [<pipeline_repeats>] [<cross_stream_num>] [<warmup_flag: 0 | 1>] [<logo_flag: 0 | 1>]
--------------------------------------------------------------------------------
Environment requirement:
unset http_proxy;unset https_proxy;unset HTTP_PROXY;unset HTTPS_PROXY
- host: use 127.0.0.1 to call from localhost.
- port: configured as 50052; can be changed by modifying the file $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config before starting the service.
- json_file: AI pipeline topology file.
- total_stream_num: to control the number of input streams.
- repeats: to run tests multiple times, so that we can get more accurate performance numbers.
- data_path: multi-sensor binary files folder for input.
- display_type: currently supports media, radar, media_radar, media_fusion.
- save_flag: whether to save display results into a video.
- pipeline_repeats: number of pipeline repeats.
- cross_stream_num: the number of streams that run in a single pipeline.
- warmup_flag: warm-up flag before pipeline start.
- logo_flag: whether to add the Intel logo in the display.
More specifically, open another terminal, run the following commands:
# multi-sensor inputs test-case
sudo -E ./build/bin/CRSensorFusion16C4RDisplay 127.0.0.1 50052 ./ai_inference/test/configs/raddet/16C4R/localFusionPipeline.json 4 1 /path-to-dataset media_fusion

Note: run as root if you want to get GPU utilization profiling.
In this section, the unit tests of two major components will be described: fusion pipeline without display and media processing.
Usage:
Usage: testGRPC16C4RPipeline <host> <port> <json_file> <total_stream_num> <repeats> <data_path> [<pipeline_repeats>] [<cross_stream_num>] [<warmup_flag: 0 | 1>]
--------------------------------------------------------------------------------
Environment requirement:
unset http_proxy;unset https_proxy;unset HTTP_PROXY;unset HTTPS_PROXY
- host: use 127.0.0.1 to call from localhost.
- port: configured as 50052; can be changed by modifying the file $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config before starting the service.
- json_file: AI pipeline topology file.
- total_stream_num: to control the number of input video streams.
- repeats: to run tests multiple times, so that we can get more accurate performance numbers.
- data_path: input data; remember to use an absolute data path, or it may cause errors.
- pipeline_repeats: number of pipeline repeats.
- cross_stream_num: the number of streams that run in a single pipeline.
- warmup_flag: warm-up flag before pipeline start.
Set offline radar CSV file path
First, set the offline radar CSV file path in localFusionPipeline.json (file location: ai_inference/test/configs/raddet/16C4R/localFusionPipeline.json) with "Configure String": "RadarDataFilePath=(STRING)/opt/radarResults.csv", like below:
{
"Node Class Name": "RadarResultReadFileNode",
......
"Configure String": "......;RadarDataFilePath=(STRING)/opt/radarResults.csv"
},

The method for generating offline radar files is described in sec 5.3.2.7 Save radar pipeline tracking results. Alternatively, you can use pre-prepared data with the command below:
sudo cp $PROJ_DIR/ai_inference/deployment/datasets/radarResults.csv /opt

Open another terminal, run the following commands:
# fusion test-case
sudo -E ./build/bin/testGRPC16C4RPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/16C4R/localFusionPipeline.json 4 1 /path-to-dataset

Open another terminal, run the following commands:
# media test-case
sudo -E ./build/bin/testGRPC16C4RPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/16C4R/localMediaPipeline.json 4 1 /path-to-dataset

# Run service with the following command:
sudo bash run_service_bare_log.sh
# Open another terminal, run the command below:
sudo -E ./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/libradar/localFusionPipeline_libradar.json 1 10 /path-to-dataset multisensor

FPS and average latency will be calculated.
# Run service with the following command:
sudo bash run_service_bare_log.sh
# Open another terminal, run the command below:
sudo -E ./build/bin/testGRPC4C4RPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/4C4R/localFusionPipeline.json ai_inference/test/configs/raddet/4C4R/localFusionPipeline_npu.json 4 10 /path-to-dataset

FPS and average latency will be calculated.
# Run service with the following command:
sudo bash run_service_bare_log.sh
# Open another terminal, run the command below:
sudo -E ./build/bin/testGRPC2C1RPipeline 127.0.0.1 50052 ./ai_inference/test/configs/raddet/2C1R/localFusionPipeline_libradar.json 1 10 /path-to-dataset multisensor

FPS and average latency will be calculated.
# Run service with the following command:
sudo bash run_service_bare_log.sh
# Open another terminal, run the command below:
sudo -E ./build/bin/testGRPC16C4RPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/16C4R/localFusionPipeline.json 4 10 /path-to-dataset

FPS and average latency will be calculated.
NOTE : change workload configuration to 1 in file:
$PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config
...
[Pipeline]
maxConcurrentWorkload=1

Run the service first, then open another terminal and run the command below:
# 1C1R without display
sudo -E ./build/bin/testGRPCLocalPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/1C1R/libradar/localFusionPipeline_libradar.json 1 100 /path-to-dataset multisensor 100

NOTE: change the workload configuration to 4 in file:
$PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config
...
[Pipeline]
maxConcurrentWorkload=4

Run the service first, then open another terminal and run the command below:
# 4C4R without display
sudo -E ./build/bin/testGRPC4C4RPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/4C4R/localFusionPipeline.json ai_inference/test/configs/raddet/4C4R/localFusionPipeline_npu.json 4 100 /path-to-dataset 100

NOTE: change the workload configuration to 1 in file: $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config
...
[Pipeline]
maxConcurrentWorkload=1

Run the service first, then open another terminal and run the command below:
# 2C1R without display
sudo -E ./build/bin/testGRPC2C1RPipeline 127.0.0.1 50052 ./ai_inference/test/configs/raddet/2C1R/localFusionPipeline_libradar.json 1 100 /path-to-dataset multisensor 100

NOTE: change the workload configuration to 4 in file: $PROJ_DIR/ai_inference/source/low_latency_server/AiInference.config
...
[Pipeline]
maxConcurrentWorkload=4

Run the service first, then open another terminal and run the command below:
# 16C4R without display
sudo -E ./build/bin/testGRPC16C4RPipeline 127.0.0.1 50052 ai_inference/test/configs/raddet/16C4R/localFusionPipeline.json 4 100 /path-to-dataset 100

Install Docker Engine and Docker Compose according to the guides on the official website.
Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker apt repository. Afterward, you can install and update Docker from the repository.
- Set up Docker's apt repository.
# Add Docker's official GPG key:
sudo -E apt-get update
sudo -E apt-get install ca-certificates curl
sudo -E install -m 0755 -d /etc/apt/keyrings
sudo -E curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo -E apt-get update

- Install the Docker packages.
To install the latest version, run:
sudo -E apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

- Verify that the installation is successful by running the hello-world image:

sudo docker run hello-world

This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.
- Add your user to the docker group:

sudo usermod -aG docker $USER
newgrp docker

- Then pull the base image:

docker pull ubuntu:22.04

Install the driver-related libs:

bash install_driver_related_libs.sh

Note that the above driver is the BKC (best known configuration) version, which delivers the best performance but imposes many restrictions when installing the driver and building the docker image. If the BKC is not needed and other versions of the driver are already installed on the machine, you can skip this step.
Note that the default username is openvino and the password is intel in the docker image.

bash build_docker.sh <IMAGE_TAG, default tfcc:latest> <DOCKERFILE, default Dockerfile_TFCC.dockerfile> <BASE, default ubuntu> <BASE_VERSION, default 22.04>
bash run_docker.sh <DOCKER_IMAGE, default tfcc:latest> <NPU_ON, default true>
cd $PROJ_DIR/docker
bash build_docker.sh tfcc:latest Dockerfile_TFCC.dockerfile
bash run_docker.sh tfcc:latest false
# After the run completes, the container ID will be printed; you can also view it via docker ps
docker exec -it <container id> /bin/bash
docker cp /path/to/dataset <container id>:/path/to/dataset
Note that the default username is openvino and the password is intel in the docker image.
Modify proxy, VIDEO_GROUP_ID and RENDER_GROUP_ID in tfcc.env.
# proxy settings
https_proxy=
http_proxy=
# base image settings
BASE=ubuntu
BASE_VERSION=22.04
# group IDs for various services
VIDEO_GROUP_ID=44
RENDER_GROUP_ID=110
# display settings
DISPLAY=$DISPLAY

You can get VIDEO_GROUP_ID and RENDER_GROUP_ID with the following commands:
# VIDEO_GROUP_ID
echo $(getent group video | awk -F: '{printf "%s\n", $3}')
# RENDER_GROUP_ID
echo $(getent group render | awk -F: '{printf "%s\n", $3}')

cd $PROJ_DIR/docker
docker compose up tfcc -d
docker compose exec tfcc /bin/bash

Find the container name or ID:
docker compose ps

Sample output:
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
docker-tfcc-1       tfcc:latest         "/bin/bash"         tfcc                4 minutes ago       Up 9 seconds

Copy the dataset:

docker cp /path/to/dataset docker-tfcc-1:/path/to/dataset

Enter the project directory /home/openvino/metro-2.0, then run bash -x build.sh to build the project. Then follow the guides in sec 5 Run Sensor Fusion Application to run the sensor fusion application.
Some of the code is referenced from the following projects:
- IGT GPU Tools (MIT License)
- Intel DL Streamer (MIT License)
- Open Model Zoo (Apache-2.0 License)