Add Prompt Shield Scan - LLM prompt injection detection action #685

Open

markmishaev76 wants to merge 1 commit into sdras:main from markmishaev76:add-prompt-shield

Conversation

@markmishaev76
Description

Prompt Shield Scan is a GitHub Action that detects and prevents indirect prompt injection attacks in issues, PRs, and comments.

Why add this?

  • 📊 Addresses OWASP LLM01 - Prompt Injection is the #1 vulnerability in OWASP's Top 10 for LLM Applications
  • 🛡️ 4-layer defense - Trust filtering, data sanitization, pattern detection, and prompt fencing
  • CI/CD Integration - Automatically scans new issues/PRs in GitHub workflows
  • 📦 Published on the GitHub Marketplace

Quick Start

- uses: markmishaev76/Prompt-Shield@v1
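
A fuller workflow sketch showing how the one-line step above might be wired into a repository. The trigger events, job name, and permissions here are illustrative assumptions, not documented inputs of the action; check the action's README for its actual configuration:

```yaml
# Hypothetical workflow running Prompt Shield on new or edited
# issues, PRs, and comments. Triggers and permissions are assumed
# for illustration; only the `uses:` step comes from this PR.
name: Prompt Shield Scan

on:
  issues:
    types: [opened, edited]
  pull_request:
    types: [opened, edited]
  issue_comment:
    types: [created]

permissions:
  contents: read
  issues: read
  pull-requests: read

jobs:
  prompt-shield:
    runs-on: ubuntu-latest
    steps:
      - uses: markmishaev76/Prompt-Shield@v1
```

Scanning on `opened` and `edited` events (rather than only `opened`) would catch injection payloads added after the fact, though whether the action supports all of these triggers is an assumption here.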

This helps DevSecOps teams protect AI-powered applications from prompt injection attacks directly in their CI/CD pipeline.
