Labels
AI safety, CVE mapping, documentation, good first issue, questionnaire alignment, risk assessment, vulnerability analysis
Description:
We welcome contributions to help build a comprehensive mapping between AI adversarial threats, relevant Common Vulnerabilities and Exposures (CVEs), and GuardRail assessment questions. The goal of this unified effort is to make the GuardRail framework more useful in practice for identifying, analyzing, and mitigating AI-related risks.
Problem Overview:
- Many CVEs exist for AI/ML systems, but they are not currently linked to specific adversarial threats, making structured vulnerability assessment difficult.
- Similarly, GuardRail provides a structured set of questions for evaluating AI systems but lacks a direct mapping to known adversarial threats, limiting its effectiveness in threat modeling.
Contribution Scope:
- Curate a list of CVEs relevant to AI/ML systems.
- Identify and document known AI adversarial threats.
- Map CVEs to corresponding adversarial threats.
- Map adversarial threats to relevant GuardRail questions (an illustrative sketch of a possible mapping format follows this list).
- Propose new threat categories or suggest improvements to question phrasing where gaps are identified.
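To make the intended deliverable more concrete, here is a minimal sketch of how a single mapping entry could be structured. The class name, field names, IDs, and example values below are placeholder assumptions, not an existing GuardRail data format; the actual format (Markdown table, YAML, JSON, etc.) should be agreed in this issue.

```python
# Hypothetical sketch of a single threat/CVE/question mapping record.
# All names, fields, and example values are illustrative assumptions,
# not part of GuardRail today.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ThreatMapping:
    threat_id: str                                                  # contributor-assigned ID for the adversarial threat
    threat_name: str                                                # e.g. "Data poisoning" or "Model extraction"
    cve_ids: List[str] = field(default_factory=list)                # related CVEs, e.g. ["CVE-YYYY-NNNNN"]
    guardrail_questions: List[str] = field(default_factory=list)    # IDs or text of mapped GuardRail questions
    notes: str = ""                                                 # rationale for the mapping, gaps, suggested rewording

# Placeholder entry showing the intended structure (not a verified mapping):
example = ThreatMapping(
    threat_id="T-001",
    threat_name="Data poisoning",
    cve_ids=["CVE-YYYY-NNNNN"],
    guardrail_questions=["Q-12: How is the provenance of training data verified?"],
    notes="Illustrative only; replace with curated CVEs and real question IDs.",
)
print(example)
```

A flat, per-threat record like this keeps the threat-to-CVE and threat-to-question links reviewable in one place; a spreadsheet or Markdown table with the same columns would work equally well for documentation-focused contributions.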
Impact:
- Enhances traceability between threats, vulnerabilities, and assessment criteria.
- Strengthens the GuardRail framework for real-world adversarial scenarios.
- Supports robust AI system design, auditing, and compliance.
- Improves usability for security and privacy engineers.