---
name: agentmesh-governance
description: >-
  AI agent governance, trust scoring, and policy enforcement powered by AgentMesh.
  Activate when: (1) the user wants to enforce token limits, tool restrictions, or
  content policies on agent actions, (2) checking an agent's trust score before
  delegation or collaboration, (3) verifying agent identity with Ed25519
  cryptographic DIDs, (4) auditing agent actions with tamper-evident hash chain
  logs, (5) the user asks about agent safety, governance, compliance, or trust.
  Enterprise-grade: 1,600+ tests, merged into Dify (65K★), LlamaIndex (47K★),
  Microsoft Agent-Lightning (15K★).
version: 1.1.0
metadata:
---
Zero-trust governance layer for OpenClaw agents. Enforce policies, verify identities, score trust, and maintain tamper-evident audit logs — all from your agent's command line.
Install the AgentMesh governance CLI:
```bash
pip install agentmesh-governance
```

If `agentmesh-governance` is not yet on PyPI, install directly from source:

```bash
pip install "agentmesh @ git+https://github.com/imran-siddique/agent-mesh.git"
```
All scripts are in `scripts/`. They wrap the governance engine and output JSON results.
Evaluate an action against a governance policy before execution:
```bash
scripts/check-policy.sh --action "web_search" --tokens 1500 --policy policy.yaml
```

Returns JSON with `allowed: true/false`, any violations, and recommendations.
Use this before executing any tool call to enforce limits.
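Conceptually, this gate is a simple pre-flight evaluation. The sketch below is an illustrative re-implementation in plain Python — not the AgentMesh engine's actual code — using the policy field names shown in this skill (`max_tokens`, `allowed_tools`, `blocked_tools`); the real script's JSON output may differ in shape:

```python
def check_policy(policy: dict, action: str, tokens: int) -> dict:
    """Evaluate a proposed tool call against a policy; return a verdict.

    Illustrative only: mirrors the documented fields, not the engine's code.
    """
    violations = []
    # Enforce the per-action token cap.
    if tokens > policy.get("max_tokens", float("inf")):
        violations.append(f"token count {tokens} exceeds max_tokens")
    # Blocklist wins over allowlist.
    if action in policy.get("blocked_tools", []):
        violations.append(f"tool '{action}' is blocked")
    elif policy.get("allowed_tools") and action not in policy["allowed_tools"]:
        violations.append(f"tool '{action}' is not on the allowlist")
    return {"allowed": not violations, "violations": violations}

policy = {
    "max_tokens": 4096,
    "allowed_tools": ["web_search", "file_read", "summarize"],
    "blocked_tools": ["shell_exec", "file_delete"],
}
print(check_policy(policy, "web_search", 1500))  # allowed
print(check_policy(policy, "shell_exec", 100))   # blocked tool
```

The gate pattern is the important part: evaluate first, execute only on `allowed: true`.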
Check an agent's current trust score (0.0 – 1.0):
```bash
scripts/trust-score.sh --agent "research-agent"
```

Returns the composite trust score with a breakdown across 5 dimensions: policy compliance, resource efficiency, output quality, security posture, and collaboration health.
Verify an agent's Ed25519 cryptographic identity before trusting its output:
```bash
scripts/verify-identity.sh --did "did:mesh:abc123" --message "hello" --signature "base64sig"
```

Returns `verified: true/false`. Use when receiving data from another agent.
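Under the hood this is standard Ed25519 signature verification. A self-contained sketch using the `cryptography` package — simplified in that it generates a keypair locally, whereas the real script resolves the signer's public key from its `did:mesh` DID:

```python
import base64

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Simulate the other agent: generate a keypair and sign a message.
# (In practice, the verifier only holds the public key, resolved via the DID.)
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature_b64 = base64.b64encode(private_key.sign(b"hello")).decode()

def verify(message: bytes, signature_b64: str) -> bool:
    """Return True iff the signature is valid for this public key."""
    try:
        public_key.verify(base64.b64decode(signature_b64), message)
        return True
    except InvalidSignature:
        return False

print(verify(b"hello", signature_b64))     # valid signature
print(verify(b"tampered", signature_b64))  # message altered -> invalid
```

Note that Ed25519 verification fails closed: any change to the message or signature raises `InvalidSignature`.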
Update trust scores after collaborating with another agent:
```bash
scripts/record-interaction.sh --agent "writer-agent" --outcome success
scripts/record-interaction.sh --agent "writer-agent" --outcome failure --severity 0.1
```

Success adds +0.01 to the trust score. Failure subtracts the severity value. Agents dropping below the minimum threshold (default 0.5) are auto-blocked.
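The update rule described above is simple additive scoring. A minimal sketch using the constants stated here (+0.01 on success, −severity on failure, 0.5 auto-block threshold); clamping the score to [0.0, 1.0] is an assumption on my part:

```python
MIN_TRUST = 0.5        # default auto-block threshold
SUCCESS_DELTA = 0.01   # added on each successful interaction

def record_interaction(score: float, outcome: str, severity: float = 0.0) -> float:
    """Apply the stated trust update rule; clamp to [0.0, 1.0] (assumed)."""
    if outcome == "success":
        score += SUCCESS_DELTA
    elif outcome == "failure":
        score -= severity
    return max(0.0, min(1.0, score))

score = 0.55
score = record_interaction(score, "failure", severity=0.1)  # ~0.45, below threshold
print(score, "auto-blocked" if score < MIN_TRUST else "ok")
```

A single severe failure can thus outweigh many successes, which matches the asymmetry above: trust is earned slowly and lost quickly.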
View tamper-evident audit trail with hash chain verification:
```bash
scripts/audit-log.sh --last 20
scripts/audit-log.sh --agent "research-agent" --verify
```

The `--verify` flag checks hash chain integrity; any tampering is detected.
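Tamper evidence comes from chaining: each entry's hash covers both its payload and the previous entry's hash, so editing any record invalidates every hash after it. A minimal sketch of the idea (SHA-256, simplified flat chain; the skill describes the real engine as using Merkle chains):

```python
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash a record together with the previous entry's hash."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"record": record, "hash": entry_hash(prev, record)})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edit to past records breaks the chain."""
    prev = "genesis"
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["record"]):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"agent": "research-agent", "action": "web_search"})
append(log, {"agent": "research-agent", "action": "summarize"})
print(verify_chain(log))                   # True: chain intact
log[0]["record"]["action"] = "shell_exec"  # tamper with history
print(verify_chain(log))                   # False: tampering detected
```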
Create a new Ed25519 cryptographic identity (DID) for your agent:
```bash
scripts/generate-identity.sh --name "my-agent" --capabilities "search,summarize,write"
```

Returns your agent's DID, public key, and capability manifest.
Create a policy.yaml to define governance rules:
```yaml
name: production-policy
max_tokens: 4096
max_tool_calls: 10
allowed_tools:
  - web_search
  - file_read
  - summarize
blocked_tools:
  - shell_exec
  - file_delete
blocked_patterns:
  - "rm -rf"
  - "DROP TABLE"
  - "BEGIN CERTIFICATE"
confidence_threshold: 0.7
require_human_approval: false
```

- Before tool execution: Run `check-policy.sh` to enforce limits
- Before trusting another agent's output: Run `verify-identity.sh`
- After collaboration: Run `record-interaction.sh` to update trust
- Before delegation: Check `trust-score.sh`; don't delegate to agents below 0.5
- For compliance: Run `audit-log.sh --verify` to prove execution integrity
- On setup: Run `generate-identity.sh` to create your agent's DID
| Policy | Description |
|---|---|
| Token limits | Cap per-action and per-session token usage |
| Tool allowlists | Only explicitly permitted tools can execute |
| Tool blocklists | Dangerous tools are blocked regardless |
| Content patterns | Block regex patterns (secrets, destructive commands, PII) |
| Trust thresholds | Minimum trust score required for delegation |
| Human approval | Gate critical actions behind human confirmation |
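The content-pattern policy above amounts to a regex scan over proposed output or tool arguments. A sketch using the patterns from the example policy — treating each `blocked_patterns` entry as a regex is my assumption about how the engine interprets them:

```python
import re

# Patterns from the example policy.yaml; assumed to be interpreted as regexes.
BLOCKED_PATTERNS = [r"rm -rf", r"DROP TABLE", r"BEGIN CERTIFICATE"]

def scan(text: str) -> list:
    """Return the list of blocked patterns that match the given text."""
    return [p for p in BLOCKED_PATTERNS if re.search(p, text)]

print(scan("please summarize this article"))        # [] -> clean
print(scan("run `rm -rf /tmp/cache` to clean up"))  # ['rm -rf'] -> blocked
```

Scanning both directions (incoming instructions and outgoing output) catches destructive commands and leaked secrets such as PEM certificate blocks.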
This skill bridges the OpenClaw agent runtime with the AgentMesh governance engine:
```
OpenClaw Agent → SKILL.md scripts → AgentMesh Engine
                                      ├── GovernancePolicy (enforcement)
                                      ├── RewardService (5-dimension scoring)
                                      ├── AgentIdentity (Ed25519 DIDs)
                                      └── AuditLog (tamper-evident Merkle chains)
```
Part of the Agent Governance Toolkit: AgentMesh · Agent OS · Agent SRE