Explainable AI uses data quality measurements and saliency maps to understand the predictions and performance of computer vision models during inference. Data and model explainability provide insights into how predictions are made, helping refine models for efficiency and performance. This application utilizes the Intel OpenVINO™ toolkit, enabling seamless deployment of deep learning models across hardware platforms.
This kit uses the following technology stack:
Check out our AI Reference Kits repository for other kits.
New updates will be added here.
Table of Contents
Star the repository (optional, but recommended :))
Now, let's dive into the steps, starting with installing Python. We recommend using Ubuntu to set up and run this project, which requires Python 3.9 or higher and a few libraries. If you don't have Python installed on your machine, go to https://www.python.org/downloads/ and download the latest version for your operating system. Follow the prompts to install Python, making sure to check the option to add Python to your PATH environment variable.
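If you are unsure which version your system provides, a quick check (a minimal standard-library sketch, not part of the kit itself) can confirm the interpreter meets the 3.9 minimum before you proceed:

```python
import sys

def meets_requirement(version_info, minimum=(3, 9)):
    """Return True if the interpreter version satisfies the kit's minimum."""
    return tuple(version_info[:2]) >= minimum

# Run this with the same python3 you plan to use for the virtual environment.
print("Python", sys.version.split()[0],
      "- OK" if meets_requirement(sys.version_info) else "- too old, please upgrade")
```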
Install libraries and tools:
sudo apt install git git-lfs gcc python3-venv python3-dev
NOTE: If you are using Windows, you will also need to install the Microsoft Visual C++ Redistributable.
To clone the repository, run the following command:
git clone https://github.com/openvinotoolkit/openvino_build_deploy.git
The above will clone the repository into a directory named "openvino_build_deploy" in the current directory. Then, navigate into the directory using the following command:
cd openvino_build_deploy/ai_ref_kits/explainable_ai
Then pull the video sample:
git lfs -X= -I="Cars-FHD.mov" pull
To create a virtual environment, open your terminal or command prompt and navigate to the directory where you want to create the environment. Then, run the following command:
python3 -m venv venv
This will create a new virtual environment named "venv" in the current directory.
Activate the virtual environment using the following command:
source venv/bin/activate # For Unix-based operating systems such as Linux or macOS
NOTE: If you are using Windows, use the `venv\Scripts\activate` command instead.
This will activate the virtual environment and change your shell's prompt to indicate that you are now working within that environment.
To install the required packages, run the following commands:
python -m pip install --upgrade pip
pip install -r requirements.txt
NOTE: Datumaro includes C++ and Rust implementations to improve Python performance. Please ensure the Rust toolchain is installed on your system before running this sample.
You can run explainable_ai.ipynb to learn more about the inference process. This notebook contains detailed instructions to run the Explainable AI application, load and analyze a short section of data with data quality metrics, and generate saliency maps with an OpenVINO YOLOv8 model using Ultralytics. To generate the data quality metrics, we leverage the open-source toolkit Datumaro, specifically following the tutorial here. This edge AI reference kit focuses on a specific digital transportation use case and analyzes only a few data quality metrics; please visit the Datumaro tutorials for resources on advanced data exploration and on finding and remediating more types of data quality issues.
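The saliency maps in the notebook come from the OpenVINO/Ultralytics pipeline, but the occlusion idea behind them is easy to see in isolation. The sketch below is a pure-Python toy, not the notebook's implementation: the predictor (`toy_score`) is a hypothetical stand-in for a model. It hides one patch of a tiny "image" at a time and records how much the score drops; large drops mark the regions the predictor relies on.

```python
def toy_score(image):
    """Stand-in 'model': responds to bright pixels in the centre of the grid."""
    h, w = len(image), len(image[0])
    return sum(image[r][c]
               for r in range(h // 4, 3 * h // 4)
               for c in range(w // 4, 3 * w // 4))

def occlusion_saliency(image, patch=2):
    """Zero out one patch at a time; saliency = score drop caused by hiding it."""
    h, w = len(image), len(image[0])
    base = toy_score(image)
    saliency = [[0.0] * w for _ in range(h)]
    for r0 in range(0, h, patch):
        for c0 in range(0, w, patch):
            occluded = [row[:] for row in image]          # copy the image
            for r in range(r0, min(r0 + patch, h)):
                for c in range(c0, min(c0 + patch, w)):
                    occluded[r][c] = 0                    # hide this patch
            drop = base - toy_score(occluded)
            for r in range(r0, min(r0 + patch, h)):
                for c in range(c0, min(c0 + patch, w)):
                    saliency[r][c] = drop
    return saliency

# An 8x8 "image" with a bright square in the middle.
image = [[1 if 2 <= r <= 5 and 2 <= c <= 5 else 0 for c in range(8)]
         for r in range(8)]
sal = occlusion_saliency(image)
```

Here, hiding central patches lowers the score, so they receive high saliency, while background patches receive zero. Gradient-based methods used in practice are far cheaper per pixel, but they answer the same question: which inputs drive the prediction.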
Congratulations! You have successfully set up and run the Explainable AI kit.
Benchmarking provides insight into your model's real-world performance. Performance may vary based on use and configuration.
Benchmarking was performed on an Intel® Xeon® Platinum 8480+ (1 socket, 56 cores) running Ubuntu 22.04.2 LTS. The tests utilized the YOLOv8m model with OpenVINO 2023.0. For complete configuration, please check the Appendix section.
Use the following command to run the benchmark (the leading `!` is Jupyter notebook syntax; omit it if you run the command directly in a terminal):
!benchmark_app -m $int8_model_det_path -d $device -hint latency -t 30
Replace `$int8_model_det_path` with the path to your INT8 model and `$device` with the specific device you're using (CPU, GPU, etc.). This command performs inference on the model for 30 seconds. Run `benchmark_app --help` for additional command-line options.
Platform Configurations for Performance Benchmarks for the YOLOv8m Model

| Device Type | CPU | CPU | CPU | GPU | GPU | GPU |
|---|---|---|---|---|---|---|
| System Board | Intel Corporation D50DNP1SBB | AAEON UPN-ADLN01 V1.0 220950173 | Intel® Client Systems NUC12SNKi72 | Intel Corporation M50CYP2SBSTD | Intel® Client Systems NUC12SNKi72 | Intel® Client Systems NUC12SNKi72 |
| CPU | Intel® Xeon® Platinum 8480+ | Intel® Core™ i3-N305 @ 3.80 GHz | 12th Gen Intel® Core™ i7-12700H @ 2.30 GHz | Intel® Xeon® Gold 6348 CPU @ 2.60 GHz | 12th Gen Intel® Core™ i7-12700H @ 2.30 GHz | 12th Gen Intel® Core™ i7-12700H @ 2.30 GHz |
| Sockets / Physical Cores | 1 / 56 (112 threads) | 1 / 8 (8 threads) | 1 / 14 (20 threads) | 2 / 28 (56 threads) | 1 / 14 (20 threads) | 1 / 14 (20 threads) |
| Hyper-Threading / Turbo Setting | Enabled / On | Disabled | Enabled / On | Enabled / On | Enabled / On | Enabled / On |
| Memory | 512 GB DDR4 @ 4800 MHz | 16 GB DDR5 @ 4800 MHz | 64 GB DDR4 @ 3200 MHz | 256 GB DDR4 @ 3200 MHz | 64 GB DDR4 @ 3200 MHz | 64 GB DDR4 @ 3200 MHz |
| OS | Ubuntu 22.04.2 LTS | Ubuntu 22.04.2 LTS | Windows 11 Enterprise v22H2 | Ubuntu 22.04.2 LTS | Windows 11 Enterprise v22H2 | Windows 11 Enterprise v22H2 |
| Kernel | 5.15.0-72-generic | 5.15.0-1028-intel-iotg | 22621.1702 | 5.15.0-57-generic | 22621.1702 | 22621.1702 |
| Software | OpenVINO 2023.0 | OpenVINO 2023.0 | OpenVINO 2023.0 | OpenVINO 2023.0 | OpenVINO 2023.0 | OpenVINO 2023.0 |
| BIOS | Intel Corp. SE5C7411.86B.9525.D13.2302071333 | American Megatrends International, LLC. UNADAM10 | Intel Corp. SNADL357.0053.2022.1102.1218 | Intel Corp. SE5C620.86B.01.01.0007.2210270543 | Intel Corp. SNADL357.0053.2022.1102.1218 | Intel Corp. SNADL357.0053.2022.1102.1218 |
| BIOS Release Date | 02/07/2023 | 12/15/2022 | 11/02/2022 | 10/27/2022 | 11/02/2022 | 11/02/2022 |
| GPU | N/A | N/A | 1x Intel® Arc™ A770 16 GB, 512 EU | 1x Intel® Iris® Xe Graphics | 1x Intel® Data Center GPU Flex 170 | 1x Intel® Arc™ A770 16 GB, 512 EU |
| Workload: Model, input size (HxW), batch size | YOLOv8m, [640, 640], batch 1, FP16 / INT8 | YOLOv8m, [640, 640], batch 1, FP16 / INT8 | YOLOv8m, [640, 640], batch 1, FP16 / INT8 | YOLOv8m, [640, 640], batch 1, FP16 / INT8 | YOLOv8m, [640, 640], batch 1, FP16 / INT8 | YOLOv8m, [640, 640], batch 1, FP16 / INT8 |
| TDP | 350 W | 15 W | 45 W | 235 W | 45 W | 45 W |
| Benchmark Date | May 31, 2023 | May 29, 2023 | June 15, 2023 | May 29, 2023 | June 15, 2023 | May 29, 2023 |
| Benchmarked By | Intel Corporation | Intel Corporation | Intel Corporation | Intel Corporation | Intel Corporation | Intel Corporation |
- DarwinAI Case Study: See how others are implementing Explainable AI practices with Intel.
- Interview on Building Ethical AI with Explainable AI: Learn more about key topics around Explainable AI from Ria, our evangelist and creator of the Explainable AI kit.