AgentGraph — open-source trust verification for AI agents #7476
Replies: 3 comments 2 replies
Nice work on the security scanning + W3C DID approach. The automated scan → trust score pipeline is a good developer UX.

Question on the identity model: W3C DIDs give you cryptographic verification, but where does the trust accumulate? If an agent's DID is verified on AgentGraph but it interacts with services that don't use AgentGraph, the trust score isn't portable. We've been tackling the same problem from a different angle with SATP (Solana Agent Trust Protocol): putting attestations on-chain so trust scores are verifiable by any system, not just the issuing platform. The trade-off is blockchain overhead vs. portability. The interesting design space is probably a two-layer model.

Have you thought about anchoring DID attestations to a public ledger so other platforms can verify agent trust without querying your API? Also curious about the scan methodology: how do you handle agents that are API-only, with no repo to scan? That's a large percentage of production agents.
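One way to make a trust score portable without querying the issuing API is content-addressed anchoring: publish a digest of the attestation to a public ledger, and let any verifier re-derive that digest locally. A minimal sketch in Python (the DID, the attestation fields, and the ledger write are hypothetical illustrations, not AgentGraph's or SATP's actual format; a real design would also verify the issuer's signature):

```python
import hashlib
import json

def anchor_digest(attestation: dict) -> str:
    """Canonicalize the attestation and return the digest that would
    be written to the public ledger (hypothetical anchoring step)."""
    canonical = json.dumps(attestation, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify_against_anchor(attestation: dict, on_chain_digest: str) -> bool:
    """Any platform re-derives the digest locally and compares it to the
    anchored value -- no call to the issuing platform's API required."""
    return anchor_digest(attestation) == on_chain_digest

# Hypothetical attestation payload:
attestation = {
    "did": "did:web:agentgraph.co:agents:example",
    "trust_score": 87,
    "scanned_at": "2025-01-15T00:00:00Z",
}
digest = anchor_digest(attestation)  # this value goes on the ledger

assert verify_against_anchor(attestation, digest)        # untampered: passes
tampered = {**attestation, "trust_score": 99}
assert not verify_against_anchor(tampered, digest)       # tampered: fails
```

The canonical-JSON step matters: without deterministic serialization, two honest parties can derive different digests for the same logical attestation.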
Nice work on the automated scan-to-trust-score pipeline. The W3C DID approach for portable identity is the right foundation. Two areas where our work might complement this:

We also have an MCP server, so agents can invoke the harness directly. Would be interested to explore attestation-to-trust-score integration if you're open to it.
@msaleme Thanks for the thoughtful response! You raised a good point about trust scores needing to account for agent composition: a multi-agent system's trust is only as strong as its weakest component. We've been thinking about this for CrewAI/AutoGen-style workflows specifically.

Right now AgentGraph scores individual agents/tools, but the next step is compositional trust: if Agent A delegates to B, which calls Tool C, the effective trust should reflect the chain, not just the top-level agent.

Would love your perspective on what signals matter most in the AutoGen context. Is it more about verifying the agent code, or the tools it has access to?

You can try it now at agentgraph.co: import any GitHub repo and get a scan + trust score in about 2 minutes.
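The weakest-link idea above can be sketched in a few lines. This is one possible aggregation rule, not AgentGraph's implemented scoring; a product rule would instead model risks that compound independently along the chain:

```python
def effective_trust(chain_scores: list[float]) -> float:
    """Weakest-link rule: a delegation chain is only as trustworthy
    as its least-trusted member."""
    if not chain_scores:
        raise ValueError("empty delegation chain")
    return min(chain_scores)

# Agent A (0.92) delegates to Agent B (0.80), which calls Tool C (0.55).
# The chain's effective trust is capped by Tool C:
assert effective_trust([0.92, 0.80, 0.55]) == 0.55
```

The min rule is conservative and easy to explain in a UI; its drawback is that it ignores how many weak links a chain contains, which a multiplicative score would capture.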
Hi AutoGen community! Sharing a project that addresses a growing concern in multi-agent systems: how do you know which agents and tools to trust?
AgentGraph is open-source trust infrastructure for AI agents. Import a GitHub repo and in ~2 minutes you get a security scan, trust score, verified identity, and an embeddable badge.
What you get
- Automated security scan of the repo
- Trust score
- Verified identity (W3C DID)
- Embeddable badge
Why this matters for AutoGen
Multi-agent conversations involve tools calling other tools. When an AutoGen agent uses an MCP server or external tool, there is currently no standard way to verify that tool's identity or security posture. You are trusting a display name and hoping for the best.
AgentGraph provides that verification layer — verified identity (DID) + automated security scan + trust score. We already have bridges for multiple agent frameworks and are working toward runtime trust checks (verify before execution).
Try it: agentgraph.co — free scan + badge in ~2 minutes. We are in early access and would genuinely love feedback.
Source code | Scan false-positive docs