GovOps MVP 2

Michael Schwartz edited this page Dec 3, 2025 · 13 revisions

Agama GovOps — Minimum Viable Product (MVP)

Introduction

The Agama GovOps MVP delivers governance at operational speed through declarative artifacts, formal safety checks, and live compliance telemetry. This minimum viable product validates the full GovOps loop on a minimal environment consisting of 1 AI agent (running Cedarling locally), 1 MCP server (tool/proxy), and 1 OpenSearch cluster (data plane target). The system operationalizes governance by shifting from periodic audit-driven compliance to continuous, policy-driven, real-time authorization enforcement.

Glossary

  • GovOps System: The complete Agama GovOps platform including frontend services and governed runtime components
  • Cedarling PDP: Cedar-based Policy Decision Point embedded in governed components for real-time authorization
  • Lock Server: Central decision and evidence collector that aggregates authorization telemetry
  • Policy Authoring Service: Agama Lab Frontend service for creating and analyzing Cedar policies
  • Schema Management Service: Agama Lab Frontend service for managing entity, action, and token schemas
  • Federation Management Service: Agama Lab Frontend service for configuring trusted issuers and claim mappings
  • Compliance Service: Agama Lab Frontend service for viewing decision streams and exporting evidence
  • AI Agent: Governed runtime component that executes actions subject to Cedar policy evaluation
  • MCP Server: Model Context Protocol server/proxy that mediates tool calls subject to Cedar policy evaluation
  • OpenSearch Cluster: Data plane target for governed read/write operations
  • Cedar Policy: Declarative authorization rule written in Cedar policy language
  • Formal Analysis: Automated verification of Cedar policies for conflicts, unreachable statements, and unsafe patterns
  • Trusted Issuer: JWT token issuer explicitly configured as authorized to mint tokens for governed components
  • Claim Mapping: Rules that bind JWT claims to Cedar entities for authorization evaluation
  • Decision Log: Record of a single authorization decision including policy version, issuer, permit/deny outcome, and context
  • OSCAL: Open Security Controls Assessment Language for compliance evidence mapping
  • OESR: Operational Enforcement Success Rate - percentage of governed actions returning valid permit or deny (not error)

This MVP is the smallest coherent GovOps slice that still proves the category value: governance at operational speed through declarative artifacts, formal safety checks, and live compliance telemetry. It validates the full GovOps loop on a minimal environment:

  • 1 AI agent (running Cedarling locally)
  • 1 MCP server (tool / proxy)
  • 1 OpenSearch cluster (data plane target)
```mermaid
graph TB
  subgraph AgamaLab["Agama Lab (GovOps Platform)"]
    subgraph Frontend["Frontend Services"]
      PA["Policy Authoring & Analysis<br/>Write It Right"]
      SM["Schema Management<br/>Validate to Navigate"]
      FM["Federation Management<br/>Trust is a Must"]
      CC["Continuous Compliance<br/>Audit 'til You've Caught It"]
      DB["GovOps Dashboard"]
    end

    subgraph Backend["Backend Services"]
      Hub["Hub System<br/>(Aggregates & Stores Decision Logs)"]
    end
  end

  subgraph Runtime["Governed Runtime (MVP Testbed)"]
    Agent["AI Agent + Cedarling PDP"]
    MCP["MCP Server/Proxy + Cedarling PDP"]
    OS["OpenSearch + Cedarling PDP"]
  end

  subgraph Network["Network Infrastructure"]
    LockServer["Lock Server<br/>(Policy Retrieval Point<br/>& Log Collector)"]
  end

  PA --> Agent
  SM --> Agent
  FM --> Agent
  PA --> MCP
  SM --> MCP
  FM --> MCP
  PA --> OS
  SM --> OS

  Agent -->|Decision Logs| LockServer
  MCP -->|Decision Logs| LockServer
  OS -->|Decision Logs| LockServer

  LockServer -->|Aggregated Logs| Hub
  Hub --> CC
  CC --> DB
```

MVP Feature Set (GovOps Core Services)

1. Policy Authoring & Analysis — “Write It Right”

Scope for MVP

  • Web UI to author Cedar policies that govern:

    • the AI agent’s actions,
    • the MCP server tool calls,
    • and OpenSearch access (index/query/write).
  • Real-time Cedar syntax validation.

  • Cedar formal analysis for:

    • conflicting rules,
    • unreachable statements,
    • unsafe allow/deny patterns.
  • “Deploy on green”: policies are released only after analysis passes.

MVP Test

  • Create a small policy set for:

    1. agent may call MCP tool search only on approved topics,
    2. MCP server may only forward read queries to OpenSearch,
    3. agent may write to OpenSearch only to a safe index.
  • Formal analysis must flag one intentionally unsafe policy and block release.
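
The three test policies above might look like the following Cedar sketch. It uses the entity and action names registered in the Schema Management section (`Agent`, `ToolCall`, `Index`, `invokeTool`, `readIndex`, `writeIndex`); the entity IDs, the `McpServer` type, the `topic` attribute, and the index name are illustrative assumptions, not the MVP's actual identifiers:

```cedar
// 1. Agent may call the MCP "search" tool only on approved topics
//    (topic list and attribute name are illustrative).
permit (
  principal == Agent::"mvp-agent",
  action == Action::"invokeTool",
  resource == ToolCall::"search"
) when { ["finance", "support"].contains(resource.topic) };

// 2. MCP server may forward read queries to OpenSearch, but never writes.
permit (
  principal == McpServer::"mvp-mcp",
  action == Action::"readIndex",
  resource
);
forbid (
  principal == McpServer::"mvp-mcp",
  action == Action::"writeIndex",
  resource
);

// 3. Agent may write to OpenSearch only to a designated safe index.
permit (
  principal == Agent::"mvp-agent",
  action == Action::"writeIndex",
  resource == Index::"agent-safe"
);
```

A fourth, intentionally unsafe policy (for example, a blanket `permit (principal, action, resource);`) would be the one formal analysis must flag and block from release.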

Value: Turns governance into engineering, with provable safety before runtime.


2. Schema Management — “Validate to Navigate”

Scope for MVP

  • Minimal schema registry to manage:

    • Cedar entity/action schema for agent, MCP, OpenSearch resources.
    • Token/claim schema for the MVP issuers (see Federation Management).
  • Schema validation in authoring UI so policies are always written against the true data shapes.

MVP Test

  • Register schemas for:

    • Agent, ToolCall, Index, Document, Query, actions like invokeTool, readIndex, writeIndex.
  • Attempt to author a policy referencing a non-existent action/resource → UI blocks save.
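
The registered schema from the test above could be sketched in Cedar's human-readable schema format as follows (the `McpServer` type and the principal/resource pairings are illustrative assumptions layered on the entity and action names the text lists):

```cedarschema
entity Agent;
entity McpServer;
entity Index;
entity Document in Index;
entity ToolCall;
entity Query;

action invokeTool appliesTo {
  principal: [Agent],
  resource: [ToolCall]
};
action readIndex, writeIndex appliesTo {
  principal: [Agent, McpServer],
  resource: [Index]
};
```

With this schema loaded, a policy referencing a non-existent action such as `deleteIndex` fails validation in the authoring UI, which is exactly the blocked-save behavior the test demands.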

Value: Prevents policy drift caused by bad assumptions about real data / claims.


3. Federation Management — “Trust is a Must”

Scope for MVP

  • Trusted issuer editor that lets you declare:

    • which token issuers are accepted for the AI agent, MCP server, OpenSearch client.
  • Claim-to-entity mapping rules (simple UI or YAML) to bind JWT claims into Cedar entities.

MVP Test

  • Configure exactly two issuers:

    1. one for AI agent identity/attestation,
    2. one for MCP server / tool identity.
  • Send a token from an untrusted issuer → token validation fails.

  • Send a revoked token → token validation fails.

  • Send an expired token → token validation fails.

  • Send a trusted token missing a required claim → schema validation fails.
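
Since the text allows claim mappings to be expressed as "simple UI or YAML", the two-issuer configuration above might be sketched in YAML like this (all keys, issuer URLs, and attribute names are illustrative assumptions, not the MVP's actual format):

```yaml
# Hypothetical trusted-issuer and claim-mapping sketch.
trusted_issuers:
  - name: agent-issuer
    issuer: https://idp.example.com/agent     # assumed URL
    accepted_by: [ai-agent]
  - name: mcp-issuer
    issuer: https://idp.example.com/mcp       # assumed URL
    accepted_by: [mcp-server]

claim_mappings:
  - claim: sub                # JWT subject → Cedar entity ID
    entity_type: Agent
    entity_attr: id
  - claim: scope              # token scope → entity attribute
    entity_type: Agent
    entity_attr: approved_topics
```

A token from any issuer not in `trusted_issuers`, or one missing a claim referenced in `claim_mappings`, fails exactly as the tests above require.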

Value: Makes multi-issuer trust explicit and testable even in a tiny environment.


4. Continuous Compliance (Minimal Operational Loop)

Scope for MVP

  • Lock Server receives batched decision logs from the following Cedarling instances:

    • AI agent,
    • MCP Proxy,
    • MCP Service,
    • OpenSearch.
  • Minimal OSCAL component-definition builder:

    • map 1–3 controls → relevant policies.
  • Export evidence as CSV/JSON.

MVP Test

  • Run a scripted workload:

    • agent performs 20 actions,
    • 5 denied, 1 malformed token error, rest permitted.
  • Evidence export includes:

    • policy version,
    • issuer,
    • decision,
    • mapped control IDs.
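
A single record in that evidence export might look like the following JSON sketch. It carries the four fields the test requires (policy version, issuer, decision, mapped control IDs); all field names, the issuer URL, and the NIST-style control IDs are illustrative assumptions:

```json
{
  "timestamp": "2025-12-03T10:15:00Z",
  "component": "ai-agent",
  "policy_version": "v1.4.2",
  "issuer": "https://idp.example.com/agent",
  "action": "invokeTool",
  "resource": "ToolCall::\"search\"",
  "decision": "deny",
  "mapped_controls": ["AC-3", "AC-6"]
}
```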

Value: Compliance becomes live telemetry, not a quarterly scramble.


MVP User Stories (GovOps-aligned)

User Story 1 — Write It Right (Author + Prove Policies)

As a GovOps engineer, I want to author Cedar policies and run formal analysis so unsafe or conflicting governance rules never reach the agent, MCP server, or OpenSearch.

Acceptance Criteria

  • Policy + schema editor in UI
  • Real-time syntax + schema validation
  • Formal analysis gates release (“deploy on green”)

User Story 2 — Validate to Navigate (Manage Schemas)

As a GovOps engineer, I want to register and version schemas for tokens, actions, and resources so policy meaning stays consistent as the environment changes.

Acceptance Criteria

  • Schema registry UI or repo-backed CRUD
  • Policy save blocked on schema mismatch
  • Versioned schemas tied to policy releases

User Story 3 — Trust is a Must (Configure Issuers + Mappings)

As a GovOps engineer, I want to define trusted issuers and claim mappings so only attested agent/MCP identities can exercise their capabilities.

Acceptance Criteria

  • Trusted issuer list per governed component
  • Claim-shape validation against token schema
  • Mapping rules bind claims → Cedar entities

User Story 4 — Real-Time Governance Enforcement

As a Governance Officer, I want the agent, MCP server, and OpenSearch access governed in real time by Cedarling so behavior is constrained at machine speed.

Acceptance Criteria

  • Cedarling evaluates each governed action
  • Permit/deny/error decisions logged locally
  • Cached enforcement works if Hub is offline

User Story 5 — Audit ’til You’ve Caught It (View Decisions + Evidence)

As a Compliance Manager, I want a live decision stream and exportable evidence mapped to controls so I can demonstrate continuous compliance.

Acceptance Criteria

  • Decision stream view with filters (agent / tool / index / action)
  • Error highlighting
  • Control→policy mapping UI
  • Evidence export (CSV/JSON)

MVP KPI (GovOps-style)

Operational Enforcement Success Rate (OESR)

Definition: Percentage of governed actions across the agent, MCP server, and OpenSearch that return a valid permit or deny (not error).
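
Applied to the scripted workload from the Continuous Compliance test (20 actions: 14 permitted, 5 denied, 1 malformed-token error), OESR works out as:

```
OESR = (permits + denies) / total governed actions × 100
     = (14 + 5) / 20 × 100
     = 95%
```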

Why it matters:

  • Direct operational health signal
  • Detects policy defects, schema drift, issuer misconfig, or rollout faults
  • Simple enough for MVP, powerful enough for execs

MVP Dashboard (Minimal GovOps Console)

Real-Time Governance Loop Dashboard

Components

  1. Live Decision Stream

    • Unified timeline from: agent, MCP, OpenSearch
  2. Error Hotspots

    • Which component is generating errors and why (schema mismatch, issuer failure, policy gap)
  3. Top Capabilities Exercised

    • Most-used action/resource pairs (capability view)
  4. Filters

    • by component (agent / MCP / OpenSearch), action, resource
  5. Status Indicators

    • Policy version deployed vs latest
    • Trusted issuer set loaded (yes/no)
    • Hub ingestion health

Summary: Minimum Valuable Agama GovOps (MVP)

Included

  • Write It Right: Cedar policy authoring + formal analysis + gated release
  • Validate to Navigate: minimal schema registry for entities/actions/tokens
  • Trust is a Must: trusted issuer + claim-mapping management
  • Audit ’til You’ve Caught It: live decision stream + minimal OSCAL mapping + evidence export
  • A single KPI + a single dashboard
  • Proven against 1 AI agent, 1 MCP server, and 1 OpenSearch cluster
