# Autonomous Driving: AI Safety and Security

A structured, hands-on learning resource covering perception systems, functional safety, cybersecurity, and AI safety for autonomous vehicles — aligned with ISO 26262, ISO 21448, ISO/SAE 21434, and ISO/PAS 8800.
Author: Milin Patel · Hochschule Kempten — University of Applied Sciences
## Contents
- Overview
- Repository Structure
- Modules
- Templates
- Learning Paths
- Getting Started
- Standards and Regulations Covered
- Contributing
- Citation
- License
## Overview

Autonomous vehicles depend on the safe and secure interaction of perception, decision-making, and control systems. This repository provides 34 Jupyter notebooks and 3 professional templates organized into 7 progressive modules that cover:
- Perception — How autonomous vehicles sense their environment (cameras, LiDAR, radar, sensor fusion)
- Failure analysis — Real-world incidents (Uber, Tesla, Cruise), adversarial attacks, and edge cases
- Functional safety — Systematic hazard analysis using ISO 26262 (HARA, FMEA, ASIL, V&V)
- SOTIF — Performance limitations and triggering conditions under ISO 21448
- Cybersecurity — Threat modeling and attack surface analysis per ISO/SAE 21434
- AI safety — Uncertainty quantification, calibration, and trustworthiness under ISO/PAS 8800
- Integration — V2X communication, explainability (XAI), regulations (UNECE R155/R156, EU AI Act)
Each notebook is self-contained, executable in Google Colab with a single click, and includes explanations, working code, visualizations, and exercises.
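As a small taste of the sensor-fusion material in Module 01, the sketch below combines two independent range measurements by inverse-variance weighting, the textbook way to merge redundant sensor readings. The sensor pairing and noise values are illustrative, not taken from the notebooks.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent sensor estimates.

    Each estimate is weighted by 1/variance, so the less noisy sensor
    dominates; the fused variance is smaller than any single sensor's.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    x = np.asarray(estimates, dtype=float)
    fused = float(np.sum(w * x) / np.sum(w))
    fused_var = float(1.0 / np.sum(w))
    return fused, fused_var

# Illustrative range-to-object readings (metres): the second sensor is
# trusted more because its assumed measurement variance is lower.
dist, var = fuse(estimates=[25.3, 24.8], variances=[0.5, 0.1])
print(f"fused distance = {dist:.2f} m, variance = {var:.3f}")
```

The fused estimate always lands between the inputs, pulled toward the lower-variance sensor — the same principle that underlies the Kalman filter update step.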
## Repository Structure

```text
Autonomous-Driving_AI-Safety-and-Security/
│
├── README.md
├── LICENSE
├── CONTRIBUTING.md
├── requirements.txt
├── .gitignore
│
├── 01_Perception_Systems/
│   └── notebooks/
│       ├── 01_sae_automation_levels.ipynb
│       ├── 02_sensor_technologies.ipynb
│       ├── 03_object_detection.ipynb
│       ├── 04_sensor_fusion.ipynb
│       ├── 05_pedestrian_detection.ipynb
│       ├── 06_lidar_sensor_fundamentals.ipynb
│       └── 07_dataset_overview.ipynb
│
├── 02_Failure_Analysis/
│   └── notebooks/
│       ├── 01_av_failure_case_studies.ipynb
│       ├── 02_ood_detection.ipynb
│       ├── 03_corner_cases_edge_cases.ipynb
│       └── 04_adversarial_attacks.ipynb
│
├── 03_Functional_Safety/
│   └── notebooks/
│       ├── 01_iso_26262_fundamentals.ipynb
│       ├── 02_hara_methodology.ipynb
│       ├── 03_fmea_analysis.ipynb
│       └── 04_verification_validation.ipynb
│
├── 04_SOTIF/
│   └── notebooks/
│       ├── 01_sotif_fundamentals.ipynb
│       ├── 02_scenario_analysis.ipynb
│       ├── 03_ood_detection_sotif.ipynb
│       └── 04_simulation_sotif_validation.ipynb
│
├── 05_Cybersecurity/
│   └── notebooks/
│       ├── 01_automotive_cybersecurity.ipynb
│       ├── 02_tara_methodology.ipynb
│       └── 03_attack_surface_analysis.ipynb
│
├── 06_AI_Safety/
│   └── notebooks/
│       ├── 01_ai_safety_standards.ipynb
│       ├── 02_uncertainty_types.ipynb
│       ├── 03_mc_dropout_ensembles.ipynb
│       ├── 04_calibration_reliability.ipynb
│       └── 05_safety_validation_testing.ipynb
│
├── 07_Integration_Deployment/
│   └── notebooks/
│       ├── 01_v2x_communication.ipynb
│       ├── 02_explainability_xai.ipynb
│       ├── 03_standards_integration.ipynb
│       ├── 04_industry_deployment.ipynb
│       ├── 05_odd_runtime_monitoring.ipynb
│       ├── 06_standards_gaps.ipynb
│       └── 07_regulations_type_approval.ipynb
│
└── templates/
    ├── HARA_Template.md
    ├── TARA_Template.md
    └── SOTIF_Analysis_Template.md
```
## Modules

- **01 Perception Systems** — How autonomous vehicles perceive their surroundings through sensors and algorithms.
- **02 Failure Analysis** — Learning from real-world failures and understanding robustness challenges.
- **03 Functional Safety** — Systematic methods for identifying, classifying, and mitigating safety risks.
- **04 SOTIF** — Addressing performance limitations and insufficiencies of the intended functionality.
- **05 Cybersecurity** — Protecting autonomous vehicles against cyber threats across the development lifecycle.
- **06 AI Safety** — Ensuring trustworthiness, reliability, and safety of AI/ML components.
- **07 Integration and Deployment** — Bringing safety, security, and AI together, from standards compliance to real-world deployment.
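To give a flavour of the calibration topic in Module 06, here is a minimal NumPy sketch of expected calibration error (ECE): the gap, averaged over confidence bins, between how confident a classifier is and how often it is actually right. The equal-width binning scheme and the sample data are illustrative choices, not code from the notebooks.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted average
    of |mean confidence - accuracy| over the non-empty bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy data: two 95%-confident and two 55%-confident predictions,
# each group only half correct, i.e. the model is overconfident.
ece = expected_calibration_error([0.95, 0.95, 0.55, 0.55], [1, 0, 1, 0])
print(f"ECE = {ece:.2f}")  # a perfectly calibrated model would score 0.0
```

A well-calibrated perception stack matters for safety arguments: downstream planners can only budget for risk if "90% confident" really means right 90% of the time.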
## Templates

Ready-to-use analysis templates for professional safety and security work. Each template follows the structure defined in its reference standard, includes worked examples, and provides guidance for completing each section.
| Template | Standard | Purpose |
|---|---|---|
| HARA Template | ISO 26262 | Hazard Analysis and Risk Assessment — hazard identification, ASIL determination, safety goals |
| TARA Template | ISO/SAE 21434 | Threat Analysis and Risk Assessment — asset identification, threat scenarios, CAL determination |
| SOTIF Analysis Template | ISO 21448 | SOTIF evaluation — ODD definition, triggering conditions, scenario categorization (S1–S4) |
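As a companion to the HARA template, the snippet below sketches ASIL determination from severity (S), exposure (E), and controllability (C) ratings. It relies on the commonly used observation that the lookup table in ISO 26262-3 collapses to the sum S+E+C; the example hazard in the comment is hypothetical, not drawn from the template.

```python
def asil(s, e, c):
    """ASIL from ISO 26262-3 risk parameters S (1-3), E (1-4), C (1-3).

    The standard's S/E/C lookup table is equivalent to scoring the sum:
    10 -> ASIL D, 9 -> C, 8 -> B, 7 -> A, anything lower -> QM.
    """
    if not (1 <= s <= 3 and 1 <= e <= 4 and 1 <= c <= 3):
        raise ValueError("expected S in 1..3, E in 1..4, C in 1..3")
    return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(s + e + c, "QM")

# Hypothetical hazard: unintended full braking at highway speed,
# rated S3 (life-threatening), E4 (high exposure), C3 (hard to control).
print(asil(3, 4, 3))  # -> ASIL D
```

Encoding the table this way makes HARA worksheets easy to sanity-check in bulk, though the authoritative reference is always the table in the standard itself.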
## Learning Paths

Choose a path based on your goals:

**Complete path (all 34 notebooks):**
01 Perception → 02 Failure Analysis → 03 Functional Safety → 04 SOTIF → 05 Cybersecurity → 06 AI Safety → 07 Integration

**Perception and AI focus (16 notebooks):**
01 Perception → 02 Failure Analysis → 06 AI Safety

**Safety standards focus (18 notebooks):**
03 Functional Safety → 04 SOTIF → 05 Cybersecurity → 07 Integration

**Quick standards overview (4 notebooks):**
03/01 ISO 26262 Fundamentals → 04/01 SOTIF Fundamentals → 05/01 Automotive Cybersecurity → 06/01 AI Safety Standards
## Getting Started

Prerequisites:

- Python 3.8 or later
- Basic familiarity with Python, NumPy, and Matplotlib
- Background in engineering, computer science, or automotive systems (helpful but not required)

**Run in Google Colab:** Click any **Open in Colab** badge in the module tables above. No local setup is needed — each notebook installs its own dependencies automatically.

**Run locally:**
```shell
# 1. Clone the repository
git clone https://github.com/milinpatel07/Autonomous-Driving_AI-Safety-and-Security.git
cd Autonomous-Driving_AI-Safety-and-Security

# 2. Create and activate a virtual environment
python -m venv venv
source venv/bin/activate   # Linux / macOS
venv\Scripts\activate      # Windows

# 3. Install dependencies
pip install -r requirements.txt

# 4. Launch Jupyter
jupyter lab
```

**GPU support:** For local GPU acceleration, install PyTorch with CUDA following the official instructions.
## Standards and Regulations Covered
| Standard | Scope | Modules |
|---|---|---|
| ISO 26262:2018 | Functional safety of road vehicles | 03, 07 |
| ISO 21448:2022 | Safety of the intended functionality (SOTIF) | 04, 07 |
| ISO/SAE 21434:2021 | Cybersecurity engineering for road vehicles | 05, 07 |
| ISO/PAS 8800 | Safety and artificial intelligence | 06, 07 |
| SAE J3016 | Taxonomy of driving automation | 01 |
| ISO/IEC 24028 | Trustworthiness in AI — overview | 06 |
| Regulation | Scope | Module |
|---|---|---|
| UNECE R155 | Cybersecurity management system requirements | 07 |
| UNECE R156 | Software update management system requirements | 07 |
| EU AI Act | Risk-based regulation of artificial intelligence systems | 07 |
## Contributing

Contributions are welcome. Please read CONTRIBUTING.md for guidelines on how to contribute to this project, including how to report issues, suggest improvements, and submit pull requests.
## Citation

If you use this material in academic or professional work, please cite:
```bibtex
@misc{patel2025av_safety,
  author    = {Patel, Milin},
  title     = {Autonomous Driving: AI Safety and Security},
  year      = {2025},
  publisher = {GitHub},
  url       = {https://github.com/milinpatel07/Autonomous-Driving_AI-Safety-and-Security}
}
```

## License

This project is licensed under the MIT License.
Copyright (c) 2025 Milin Patel