**Submission:** google/adk-python#4543
**Status:** ✅ Implemented — ADK GovernanceAdapter shipped (`packages/agentmesh-integrations/adk-agentmesh/`). PolicyEvaluator aligned with ADK protocol. Issue #4543 open on google/adk-python.
**Type:** Feature request (BasePlugin implementation)
**Date Submitted:** March 2, 2026
Proposal to contribute a GovernancePlugin for Google's Agent Development Kit (ADK) that provides policy-based access control, threat detection, and audit trails. The plugin leverages ADK's existing BasePlugin hook architecture — no framework changes needed.
ADK's plugin architecture (BasePlugin) has all the right hooks for governance enforcement — before_tool_callback, before_agent_callback, on_user_message_callback — but there's no built-in governance plugin. The existing plugins cover analytics (BigQuery), logging, context filtering, and retry, but nothing for policy-based access control, threat detection, or audit trails.
Enterprise teams building multi-agent systems need to enforce who can call what tools, detect dangerous prompts before they reach agents, and maintain compliance-grade audit logs.
A BasePlugin implementation providing four capabilities:

**1. Policy enforcement**
- Allowlist/blocklist tools per policy
- Block on content patterns (credentials, PII)
- Enforce rate limits per agent/tool combination

**2. Threat detection**
- Scan user messages for data exfiltration signals
- Detect privilege escalation attempts
- Identify prompt injection patterns
- Block system destruction signals
- All scanning happens before messages reach the agent

**3. Trust verification**
- Verify trust scores before allowing agent delegation
- Enforce trust thresholds in multi-agent systems
- DID-based identity verification (optional)

**4. Audit trail**
- Append-only log of all governance decisions
- SHA-256 hash chain for tamper evidence
- JSON Lines format for log aggregation compatibility
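The policy-enforcement and threat-detection capabilities above can be sketched as plain functions. This is an illustrative, framework-free sketch — the names `check_tool_call` and `scan_message` are hypothetical, not the plugin's actual API; only the `GovernancePolicy` fields mirror the proposal's example:

```python
import re
from dataclasses import dataclass, field

@dataclass
class GovernancePolicy:
    # Fields mirror the proposal's usage example; defaults are assumptions.
    allowed_tools: list = field(default_factory=list)
    blocked_tools: list = field(default_factory=list)
    blocked_patterns: list = field(default_factory=list)

def check_tool_call(policy: GovernancePolicy, tool_name: str) -> bool:
    """Allowlist/blocklist check: blocklist wins, then allowlist (if set)."""
    if tool_name in policy.blocked_tools:
        return False
    if policy.allowed_tools and tool_name not in policy.allowed_tools:
        return False
    return True

def scan_message(policy: GovernancePolicy, text: str) -> list:
    """Return the blocked patterns that match (credential/injection signals)."""
    return [p for p in policy.blocked_patterns if re.search(p, text)]

policy = GovernancePolicy(
    allowed_tools=["search_docs"],
    blocked_tools=["shell_exec"],
    blocked_patterns=[r"(?i)(api[_-]?key|password)\s*[:=]"],
)
assert check_tool_call(policy, "search_docs") is True   # on allowlist
assert check_tool_call(policy, "shell_exec") is False   # explicitly blocked
assert scan_message(policy, "my api_key = abc123")      # pattern hit
```

Because scanning runs before the message reaches the agent, a non-empty result from a check like `scan_message` would block the request outright.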
```python
from google.adk.runners import Runner

from governance_plugin import GovernancePlugin, GovernancePolicy

policy = GovernancePolicy(
    name="production",
    allowed_tools=["search_docs", "query_db", "create_ticket"],
    blocked_tools=["shell_exec", "delete_records"],
    blocked_patterns=[r"(?i)(api[_-]?key|password)\s*[:=]"],
    max_calls_per_request=25,
    require_human_approval=["create_ticket"],
)

runner = Runner(
    agent=root_agent,  # root_agent defined elsewhere
    plugins=[GovernancePlugin(policy=policy)],
)
```

| Decision | Approach | Rationale |
|---|---|---|
| Policy source | YAML/JSON config files | Policies change without deploys |
| Composition | Most-restrictive-wins merging | Org → Team → Agent layering |
| Fail mode | Closed (deny on error) | Safety-first for production |
| Audit format | JSON Lines | Compatible with log aggregation |
| Threat detection | Regex pattern matching | Deterministic, auditable, no LLM dependency |
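The "most-restrictive-wins" composition row can be made concrete with a small sketch. This is a hypothetical illustration of Org → Team → Agent layering over dict-based policy fragments (the real plugin would merge `GovernancePolicy` objects from YAML/JSON), intersecting allowlists, unioning blocklists, and taking the minimum of numeric caps:

```python
def merge_policies(*layers: dict) -> dict:
    """Merge policy layers so later layers can only tighten, never loosen."""
    merged = {
        "allowed_tools": None,          # None = no allowlist restriction yet
        "blocked_tools": set(),
        "max_calls_per_request": None,  # None = no cap yet
    }
    for layer in layers:
        if "allowed_tools" in layer:
            tools = set(layer["allowed_tools"])
            merged["allowed_tools"] = (
                tools if merged["allowed_tools"] is None
                else merged["allowed_tools"] & tools  # intersection: most restrictive
            )
        merged["blocked_tools"] |= set(layer.get("blocked_tools", []))  # union
        if "max_calls_per_request" in layer:
            cap, cur = layer["max_calls_per_request"], merged["max_calls_per_request"]
            merged["max_calls_per_request"] = cap if cur is None else min(cur, cap)
    return merged

org = {"allowed_tools": ["search_docs", "query_db", "create_ticket"],
       "max_calls_per_request": 100}
team = {"blocked_tools": ["delete_records"], "max_calls_per_request": 25}
agent = {"allowed_tools": ["search_docs", "query_db"]}

merged = merge_policies(org, team, agent)
assert merged["allowed_tools"] == {"search_docs", "query_db"}
assert merged["max_calls_per_request"] == 25
```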
| Existing | Limitation |
|---|---|
| `safety-plugins` | Focuses on Google Model Armor (cloud-dependent content safety) |
| `policy-as-code` | Focuses on infrastructure policy checking (Terraform/OPA) |
| **This proposal** | Runtime tool-level governance — controlling what agents can do within execution, independent of cloud services |
This pattern has been validated across multiple frameworks:
| Framework | Package | Tests |
|---|---|---|
| PydanticAI | pydantic-ai-governance | 57 |
| CrewAI | crewai-agentmesh | — |
| Microsoft Agent Framework | MAF middleware adapter | 18 |
| Mastra | @agentmesh/mastra | 19 |
| Agent OS (core) | agent-os | 1,327 |
The GovernancePlugin covers 10/10 OWASP Agentic Top 10 risks through ADK's native hooks:
- `before_tool_callback` → ASI-01 (Hijacking), ASI-02 (Excessive Capabilities), ASI-06 (Confused Deputy)
- `on_user_message_callback` → ASI-01 (Hijacking), ASI-05 (Insecure Output)
- `before_agent_callback` → ASI-03 (Insecure Communication), ASI-07 (Identity Spoofing)
- `after_tool_callback` → ASI-09 (Missing Audit Trails)
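The audit-trail mechanism behind `after_tool_callback` (SHA-256 hash chain, append-only, JSON Lines) can be sketched in a few lines. The function names here are hypothetical, not the plugin's API — each record embeds the hash of the previous record, so modifying any entry breaks every hash downstream:

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed placeholder hash for the first record

def append_record(log: list, decision: dict) -> list:
    """Append a governance decision, chained to the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {"decision": decision, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})  # one dict = one JSON Lines row
    return log

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited or reordered record fails."""
    prev = GENESIS
    for rec in log:
        body = {"decision": rec["decision"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = []
append_record(log, {"tool": "shell_exec", "allowed": False})
append_record(log, {"tool": "search_docs", "allowed": True})
assert verify_chain(log)

log[0]["decision"]["allowed"] = True  # tamper with a past decision
assert not verify_chain(log)
```

Serializing each record with `json.dumps(..., sort_keys=True)` keeps the hashing deterministic, and writing one record per line gives the JSON Lines format the audit-format table calls for.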