Toolkit to assess and determine model provenance
-
Updated
May 2, 2026 - Python
Toolkit to assess and determine model provenance
Security research on AI/ML model vulnerabilities based on DEF CON 33 presentations. Demonstrates pickle RCE, TorchScript exploitation, ONNX injection, model poisoning, and integrated LLM attacks with PromptMap2.
Veil Armor is an enterprise-grade security framework for Large Language Models (LLMs) that provides multi-layered protection against prompt injections, jailbreaks, PII leakage, and sophisticated attack vectors.
Educational research demonstrating weight manipulation attacks in SafeTensors models. Proves format validation alone is insufficient for AI model security.
LLM Sentinel Red Teaming Platform is an enterprise-grade framework for automated security testing of Large Language Models, detecting vulnerabilities such as jailbreaks, prompt injection, and system prompt leakage across multiple providers, with structured attack orchestration, risk scoring, and security reporting to harden models before production
Static security scanner for LoRA adapters (.safetensors) — M1 static analyzer for weight-level anomalies.
Collection of Python security analysis tools for ML models and infrastructure. Includes FGSM harness, model inspection, poison monitoring, and deployment security validation.
GitHub Actions CI/CD pipeline for automated AI model security scanning with Palo Alto Networks Prisma AIRS
Cryptographic provenance verification and binary inspection for ML model artifacts (Safetensors, GGUF, PyTorch) in CI/CD pipelines. Companion toolkit to the Help Net Security column Weaponized Weights.
ML-infrastructure-aware anomaly detection system for protecting model weights against exfiltration, using a 3-layer cascaded architecture (Rules → ML → LLM).
AI supply chain security scanner: detects ML-specific risks (model weight poisoning, dataset contamination, gradient-based backdoors) that traditional scanners miss. The Snyk for AI. govML-governed.
Add a description, image, and links to the model-security topic page so that developers can more easily learn about it.
To associate your repository with the model-security topic, visit your repo's landing page and select "manage topics."