SupraWall

The rule of law for zero-human companies.

A deterministic security perimeter for AI agents. One line of code. Open source.


Quickstart · How it works · Frameworks · EU AI Act templates · Cloud · Docs

Paperclip gives your agents a company. SupraWall gives them a constitution.


Why this exists

AI agents now write code, spend money, query databases, and take real-world actions on your behalf — autonomously. The frameworks that orchestrate them (Paperclip, OpenClaw, LangChain, CrewAI, AutoGen, Claude Code) are excellent at making them productive. None of them are responsible for making them safe.

So agents do what unconstrained software has always done: leak credentials, run DROP TABLE users, exfiltrate PII, burn $40k overnight in OpenAI tokens, and fail every compliance audit you'll ever face under the EU AI Act.

SupraWall is a deterministic perimeter that wraps your agent — any agent — and intercepts every tool call before it executes. Not probabilistically. Not via another LLM. Not after the fact. At the boundary, in under 2ms, with a signed audit log.

It is not another guardrail model. It is the rule of law.

And with the EU AI Act enforcement deadline on August 2, 2026, we ship 8 pre-built sector templates covering every Annex III high-risk category — HR, healthcare, education, critical infrastructure, biometrics, law enforcement, migration, justice — plus a DORA template for financial services. Compliance by pip install, not by 200-page PDF.


Quickstart

Python

```shell
pip install suprawall-sdk
```

```python
from suprawall import secure_agent
from your_framework import build_agent  # LangChain, CrewAI, AutoGen, custom — doesn't matter

agent = secure_agent(build_agent(), policy="policies/langchain-safe.json")

# Dangerous tool call from a prompt-injected user message:
agent.invoke("DROP TABLE users")
# ⚡ SupraWall intercepted. BLOCKED. Audit log #A-00847 signed ✓
```

TypeScript / Node

```shell
npm install suprawall
```

```typescript
import { secureAgent } from "suprawall";
import { agent } from "./my-agent";

const safe = secureAgent(agent, { policy: "./policies/langchain-safe.json" });
await safe.invoke("DROP TABLE users"); // BLOCKED — pre-execution
```

That's it. No proxy to deploy. No sidecar. No model fine-tune. The wrapper sits between the agent's reasoning loop and the tool runtime, where deterministic rules belong.


How it works

Three layers, evaluated in order. Local policy always wins.

| # | Layer | Latency | What it does |
|---|-------|---------|--------------|
| 1 | Pre-Execution Interception | <1ms | Every tool call is routed through SupraWall before the runtime sees it. Hard-coded — no LLM in the loop. |
| 2 | Zero-Trust Policy Enforcement | <2ms | Budget caps, PII scrubbing, SQL/shell injection blocks, credential vault, allow/deny lists — enforced as code, not as suggestions. |
| 3 | Compliance Audit Trail | async | Every decision RSA-signed, timestamped, exportable. Maps to EU AI Act Art. 9, 13, 14 out of the box. |

The semantic AI layer (Layer 2.5, optional, cloud-only) catches context-dependent attacks that regex can't see — but local deterministic policy is always the first and final word.
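The pre-execution flow above can be sketched in plain Python. This is a minimal illustration of deterministic interception, not SupraWall's actual API: `ToolCall`, `intercept`, and `DENY_SUBSTRINGS` are invented names for the sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCall:
    tool: str       # e.g. "sql", "shell"
    argument: str   # the raw argument the agent produced

# Layer-2-style rules as plain data: no LLM in the loop (hypothetical shape).
DENY_SUBSTRINGS = {"sql": ["DROP TABLE", "TRUNCATE"], "shell": ["rm -rf"]}

def intercept(call: ToolCall) -> str:
    """Deterministic pre-execution check: same input, same verdict, every time."""
    for needle in DENY_SUBSTRINGS.get(call.tool, []):
        if needle in call.argument:
            return "BLOCKED"
    return "ALLOWED"

print(intercept(ToolCall("sql", "DROP TABLE users")))      # BLOCKED
print(intercept(ToolCall("sql", "SELECT * FROM users")))   # ALLOWED
```

Because the check is a pure function of the tool-call signature, the verdict is reproducible and the latency is bounded by a dictionary lookup plus substring scans.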


Works with any agent stack

SupraWall is framework-agnostic. It wraps the tool boundary, which every agent has.

| Framework | Status | Plugin |
|-----------|--------|--------|
| Paperclip | ✅ First-class | packages/paperclip-plugin |
| LangChain (Py + TS) | ✅ First-class | Built into core SDK |
| CrewAI | ✅ First-class | Built into core SDK |
| AutoGen | ✅ First-class | Built into core SDK |
| Claude Code / OpenClaw | ✅ Via MCP | suprawall-mcp-plugin |
| Vercel AI SDK | ✅ First-class | Built into core SDK |
| Custom / homegrown | ✅ | One-line secure_agent() wrapper |

Languages: Python, TypeScript, Go, C#. More via the MCP plugin.
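For a homegrown stack, the wrap point is whatever function dispatches tool calls. The sketch below shows the general idea only; `wrap_tool_boundary` and `my_dispatch` are invented for illustration and are not SupraWall internals.

```python
from typing import Any, Callable

def wrap_tool_boundary(dispatch: Callable[[str, str], Any],
                       is_allowed: Callable[[str, str], bool]) -> Callable[[str, str], Any]:
    """Return a dispatcher that consults policy before the real runtime runs."""
    def guarded(tool: str, arg: str) -> Any:
        if not is_allowed(tool, arg):
            return {"status": "BLOCKED", "tool": tool}
        return dispatch(tool, arg)
    return guarded

# A homegrown agent's tool dispatcher:
def my_dispatch(tool: str, arg: str) -> dict:
    return {"status": "OK", "tool": tool}

guarded = wrap_tool_boundary(my_dispatch, is_allowed=lambda t, a: "DROP" not in a)
print(guarded("sql", "SELECT 1"))          # dispatched normally
print(guarded("sql", "DROP TABLE users"))  # blocked before dispatch
```

Any agent that funnels tool calls through a single choke point can be wrapped this way, which is why the approach is framework-agnostic.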


What it stops

| Threat | How SupraWall stops it | EU AI Act |
|--------|------------------------|-----------|
| Credential theft | Vault injects secrets at runtime. Agents never see real keys. Logs scrubbed in 5+ encodings. | Art. 13 |
| Runaway costs | Hard per-agent budget caps, per-model token accounting, circuit breakers. | Art. 9 |
| Unauthorized actions | Deterministic ALLOW/DENY policies block tool calls before execution. | Art. 9 |
| PII exposure | Response scrubbing redacts SSN, CC, email, custom regex — across encodings. | Art. 13 |
| No audit trail | RSA-signed logs with risk scores. Exportable as compliance evidence. | Art. 13 |
| No human oversight | REQUIRE_APPROVAL pauses the agent and notifies a human before high-risk actions. | Art. 14 |
| Prompt-injection-driven actions | Local policy ignores agent intent — only the tool-call signature matters. | Art. 9 |
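A hard budget cap with a circuit breaker is easy to picture in code. This is a minimal sketch of the idea, with invented names (`BudgetGuard`, `BudgetExceeded`, `charge`) rather than SupraWall's actual accounting API:

```python
class BudgetExceeded(Exception):
    """Raised when a charge would push an agent past its hard cap."""

class BudgetGuard:
    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent = 0.0

    def charge(self, cost_usd: float) -> None:
        # Refuse the charge *before* it happens; never exceed the cap.
        if self.spent + cost_usd > self.cap_usd:
            raise BudgetExceeded(f"cap ${self.cap_usd} would be exceeded")
        self.spent += cost_usd

guard = BudgetGuard(cap_usd=10.0)
guard.charge(6.0)          # fine, running total $6
try:
    guard.charge(5.0)      # would push total to $11 — breaker trips
except BudgetExceeded as e:
    print("BLOCKED:", e)
```

The key property is that the breaker trips before the spend occurs, so a runaway loop stops at the cap rather than some time after it.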

Built-in policy templates

```shell
npx suprawall init  # interactive policy bootstrap
```

| Policy | Protects against |
|--------|------------------|
| langchain-safe.json | rm -rf, .env reads, unwhitelisted shell |
| pii-protection.json | SSN, CC, email exfiltration |
| eu-ai-act-audit.json | Human-in-the-loop for high-risk tools |
| budget-guardrail.json | Token + cost circuit breakers |
| paperclip-company.json | Company-scoped budgets, role-based tool access |

→ All starter policies · → Write your own
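To give a feel for writing your own policy, here is a sketch that assembles a custom policy document and saves it as JSON. The schema below (`name`, `rules`, `action`, `pattern`) is invented for illustration; consult the policy-authoring docs for the real format.

```python
import json

# Hypothetical policy document; the field names are illustrative only.
policy = {
    "name": "custom-db-safe",
    "rules": [
        {"action": "DENY", "tool": "sql", "pattern": "DROP TABLE"},
        {"action": "REQUIRE_APPROVAL", "tool": "payments", "pattern": "*"},
        {"action": "ALLOW", "tool": "sql", "pattern": "SELECT"},
    ],
}

with open("custom-db-safe.json", "w") as f:
    json.dump(policy, f, indent=2)

print("wrote", len(policy["rules"]), "rules")
```

Because the policy is plain data, it can be reviewed in a pull request and diffed over time, the same way any other piece of infrastructure-as-code is.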


EU AI Act compliance — shipped, not promised

Enforcement begins August 2, 2026. Every Annex III high-risk sector needs a documented, enforceable risk management system (Art. 9), a tamper-proof audit trail (Art. 13), and human oversight (Art. 14). Most teams are going to scramble. You don't have to.

SupraWall ships 8 pre-built sector templates covering every Annex III high-risk category — plus a Banking & Finance template mapped to DORA. Each one is a real enforcement config with DENY rules, REQUIRE_APPROVAL gates, mandatory logging, and a conformity-assessment path built in.

| Sector | Annex III | Risk level | Conformity | What it blocks out of the box |
|--------|-----------|------------|------------|-------------------------------|
| Biometrics | Category 1 | Critical | Third-party | Real-time ID in public spaces, emotion recognition without approval |
| Critical Infrastructure | Category 2 | Critical | Self | Physical-action tools without human confirm, unsafe disconnection |
| Education | Category 3 | High | Self | Autonomous admission rejections, scoring without explainability |
| HR & Employment | Category 4 | High | Self | Autonomous hire/fire, salary changes, performance reviews without sign-off |
| Healthcare | Category 5 | Critical | Third-party | Diagnosis without human review, PHI exfiltration, unlogged patient actions |
| Law Enforcement | Category 6 | Critical | Third-party | Predictive policing outputs without review, autonomous evidence decisions |
| Migration & Border | Category 7 | High | Self | Automated visa denials, risk-scoring without human |
| Justice & Democracy | Category 8 | High | Self | Autonomous judicial outputs, election-related agent actions |
| Banking & Finance (DORA) | — | High | Self | Autonomous trading, unlogged client-facing decisions |

Apply a sector template in one line:

```python
from suprawall import secure_agent

agent = secure_agent(build_agent(), template="hr-employment")
```

Every template includes the baseline controls every Annex III system needs (risk management log, data-quality gate, human oversight hook, post-market monitoring, incident reporting) and layers sector-specific overrides on top. Every policy decision is RSA-signed and exportable as compliance evidence — the kind your auditor will actually accept.
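The tamper-evidence property of a signed audit log is straightforward to sketch. For brevity this example signs with HMAC-SHA256 rather than RSA, and `sign_decision` is an invented name, not SupraWall's signing API; the point is only that any later edit to a decision record invalidates its signature.

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # in practice a private signing key, not a shared secret

def sign_decision(record: dict) -> str:
    """Sign a canonical (sorted-key) JSON serialization of the decision."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

record = {"id": "A-00847", "verdict": "BLOCKED", "tool": "sql"}
sig = sign_decision(record)

# Any later tampering changes the signature:
tampered = dict(record, verdict="ALLOWED")
assert sign_decision(tampered) != sig
print("signature verifies:", sign_decision(record) == sig)
```

An auditor can re-verify every entry offline, which is what makes the log usable as compliance evidence rather than just a text file.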

Why this matters right now: August 2, 2026 is months away. The penalty for non-compliance is up to €35M or 7% of global turnover. If your auditor asks "what stopped the agent from terminating an employee autonomously?" — you hand them signed log entry #A-00847. Most teams will be hand-waving. You'll be compliant by pip install.
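The "what stopped the agent" question comes down to a gate like the one sketched below. The names (`gated_execute`, the tool names, the approver callback) are hypothetical; the real REQUIRE_APPROVAL mechanism pauses the agent and notifies a human, but the control flow is the same idea.

```python
from typing import Callable

HIGH_RISK_TOOLS = {"terminate_employee", "change_salary"}  # illustrative list

def gated_execute(tool_call: dict, approver: Callable[[dict], bool]) -> str:
    """Hold high-risk tool calls until a human handler returns a verdict."""
    if tool_call["tool"] in HIGH_RISK_TOOLS and not approver(tool_call):
        return "BLOCKED: human approval denied"
    return f"EXECUTED: {tool_call['tool']}"

# A human (or a ticketing integration) plays the approver role:
print(gated_execute({"tool": "terminate_employee"}, approver=lambda c: False))
print(gated_execute({"tool": "send_email"}, approver=lambda c: False))
```

The denial itself becomes an audit record, which is exactly the artifact Art. 14 oversight asks you to produce.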

→ Full EU AI Act compliance guide · → Sector templates source


Self-host or cloud

SupraWall is fully open source under Apache 2.0 — clone it, run it, ship it.

| Feature | Open Source (Self-Hosted) | Cloud |
|---------|---------------------------|-------|
| Layer 1 deterministic engine | ✅ Free forever | ✅ |
| All built-in policies | ✅ Free forever | ✅ |
| RSA-signed audit log | ✅ Free forever | ✅ |
| Layer 2.5 semantic AI detection | — | ✅ |
| Hosted dashboard + multi-tenant | — | ✅ |
| Compliance report generation | — | ✅ |
| SLA + support | — | ✅ |

```shell
# Self-host the dashboard
docker compose up
```

→ Deploy on cloud · → AWS Marketplace


Why "deterministic" matters

The dominant security pattern for agents today is another LLM judging the first LLM (guardrail models, classifier filters, etc.). That approach works most of the time, fails silently the rest, costs tokens on every call, and produces decisions that can't be audited or reproduced.

SupraWall takes the opposite stance: rules belong in code, not in prompts. A deterministic policy either matches or it doesn't. The decision is reproducible, the latency is constant, the audit trail is real, and there's no prompt you can write to talk it out of doing its job.

If you want probabilistic content moderation, use a guardrail model. If you want to stop your agent from wiring funds to the wrong account, use a deterministic perimeter.


Star history

Star History Chart

If SupraWall saved you from an incident, please ⭐ the repo — it's how this kind of infrastructure finds the people who need it.


Contributing

We're a small team (Wiser Automation) and we want SupraWall to be a community-owned standard, not a single-vendor tool.

Active issues good for first-time contributors are tagged good first issue.


Links

Website · Docs · Cloud · Blog · X / @The_real_Peghin · License: Apache 2.0

Built by Wiser Automation · Made for the zero-human company era.
