syrin-labs/syrin-python

Syrin


Agents that ship. No surprise bills.

Python library for AI agents with budget control, memory, and observability built-in.


Website · Docs · Discord · Twitter

🚀 Installation

# Install Syrin with OpenAI support (default)
pip install syrin

# Install with Anthropic support (quote the extra so zsh doesn't expand the brackets)
pip install "syrin[anthropic]"

# Install with voice capabilities
pip install "syrin[voice]"

🎯 The Problem: "Why Did My AI Agent Cost $10,000 Last Month?"

You built an AI agent. It worked perfectly in testing. Then came the bill — a surprise invoice for thousands of dollars with zero warning.

This is the #1 reason AI agents never make it to production. Not because they don't work — because they're financially reckless.

What developers tell us:

  • "I had no idea when my agent hit the budget."
  • "My logs don't show where tokens went."
  • "I spent 3 weeks building memory from scratch."
  • "My agent crashed after 2 hours — no way to resume."
  • "I needed 8 libraries just to make one agent."

Syrin solves this. One library. Zero surprises. Production-ready from day one.


🚀 60-Second Quickstart

pip install syrin

from syrin import Agent, Model, Budget, stop_on_exceeded

class Assistant(Agent):
    model = Model.Almock()  # No API key needed
    budget = Budget(run=0.50, on_exceeded=stop_on_exceeded)

result = Assistant().response("Explain quantum computing simply")
print(result.content)
# Cost: $0.0012  |  Budget used: $0.0012

You now have:

  • ✅ Budget cap at $0.50 (stops automatically)
  • ✅ Cost tracking per response
  • ✅ Token usage breakdown
  • ✅ Full observability built-in

🎯 Syrin Use Cases

Syrin is built to solve the hard parts of building production AI agents. Here's how it handles specific challenges:

1. Context Creation & Management

The Problem: Agents run out of context window or feed irrelevant history into the LLM.
Syrin's Solution: Automatic token counting, window management, and dynamic context injection.

from syrin import Agent, Context
from syrin.threshold import ContextThreshold

agent = Agent(
    model=Model.Almock(),
    context=Context(
        max_tokens=80000,
        # Automatically compact when context is 75% full
        thresholds=[
            ContextThreshold(at=75, action=lambda ctx: ctx.compact()),
        ],
        # Or proactively compact at 60% to prevent rot
        auto_compact_at=0.6,
    ),
)

Features:

  • Token counting with model-specific encodings
  • Compaction strategies (middle-out truncation, summarization)
  • Dynamic injection for RAG or runtime data
  • Snapshot view to debug exactly what the LLM sees
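
The compaction idea is easy to picture outside the library. Here is a minimal sketch of middle-out truncation (illustrative only, not Syrin's actual implementation): keep the oldest messages, which usually carry the system prompt and setup, plus the newest turns, and drop the middle.

```python
def middle_out(messages, max_items):
    """Keep the head and tail of a conversation, dropping the middle."""
    if len(messages) <= max_items:
        return list(messages)
    head = max_items // 2           # oldest messages (system prompt, setup)
    tail = max_items - head         # newest messages (recent turns)
    return messages[:head] + messages[-tail:]

history = [f"msg{i}" for i in range(10)]
print(middle_out(history, 4))  # ['msg0', 'msg1', 'msg8', 'msg9']
```

A real compactor would count tokens rather than messages and may summarize the dropped middle instead of discarding it, but the head-plus-tail shape is the same.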

2. Memory & Knowledge Pool

The Problem: Agents forget everything between sessions.
Syrin's Solution: First-class persistent memory with 4 specialized types and decay curves.

from syrin import Agent
from syrin.memory import Memory
from syrin.enums import MemoryType

agent = Agent(
    model=Model.Almock(),
    memory=Memory(
        types=[MemoryType.CORE, MemoryType.EPISODIC, MemoryType.SEMANTIC],
        top_k=10,  # Retrieve top 10 relevant memories
    ),
)

# Remember facts (persisted across sessions)
agent.remember("User prefers TypeScript", memory_type=MemoryType.CORE)

# Recall later (semantic search)
memories = agent.recall("user preferences")

Memory Types:

  • Core — Long-term facts (user profile, preferences)
  • Episodic — Conversation history and events
  • Semantic — Knowledge chunks with embeddings (RAG)
  • Procedural — Skills and instructions

Backends: SQLite (default), Qdrant (vector search), Redis (cache), PostgreSQL (production).
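
The "decay curves" mentioned above are a general retrieval technique, not something specific to Syrin's internals. A common form is exponential half-life decay: a memory's retrieval score is its semantic similarity discounted by its age, so stale memories gradually lose out to fresh ones. An illustrative sketch:

```python
def decayed_score(similarity: float, age_seconds: float, half_life: float) -> float:
    """Discount a similarity score so it halves every `half_life` seconds."""
    return similarity * 0.5 ** (age_seconds / half_life)

day = 86_400.0
print(decayed_score(0.9, 0.0, 7 * day))      # fresh memory keeps its full score
print(decayed_score(0.9, 7 * day, 7 * day))  # one half-life old: score is halved
```

Ranking by this decayed score instead of raw similarity is what keeps `recall()`-style queries biased toward recent, relevant memories.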


3. Observability Built In

The Problem: "What happened?" — no visibility into agent decisions.
Syrin's Solution: Two ways to see everything: programmatic hooks and CLI tracing.

Method 1: Programmatic Hooks (debug=True)

agent = Agent(
    model=Model.Almock(),
    debug=True,  # Console output for every lifecycle event
)

# Or subscribe to specific events
agent.events.on("llm.request_start", lambda ctx: print(f"LLM call #{ctx.iteration}"))
agent.events.on("budget.threshold", lambda ctx: print(f"Budget at {ctx.percentage}%"))

Method 2: CLI Tracing (--trace)

Run your agent script with the --trace flag for full observability without code changes:

# Enable full tracing
python my_agent.py --trace

What you get:

  • LLM request/response logs
  • Tool execution traces
  • Budget usage per call
  • Memory operations (store/recall)
  • Token counts and context utilization

🔧 Syrin's Power

πŸŽ›οΈ Budget & Cost Control (Your #1 Problem Solved)

The Problem: Agents run wild and you get surprise bills.
Syrin's Solution: Built-in budget control with automatic stops.

from syrin import Agent, Model, Budget, stop_on_exceeded

# Per-run budget cap
agent = Agent(
    model=Model.OpenAI("gpt-4o-mini", api_key="..."),
    budget=Budget(run=0.50, on_exceeded=stop_on_exceeded),
)

# Budget thresholds (warn at 70%, switch model at 90%)
agent = Agent(
    budget=Budget(
        run=1.00,
        thresholds=[
            BudgetThreshold(at=70, action=lambda ctx: print("⚠️ 70% budget")),
            BudgetThreshold(at=90, action=lambda ctx: ctx.parent.switch_model("gpt-4o-mini")),
        ],
    ),
)

# Rate limiting
agent = Agent(
    budget=Budget(rate_limit=RateLimit(requests=10, window=60)),  # 10 req/min
)

Result: No surprise bills. Ever.
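
Under the hood, this pattern is a running total plus one-shot threshold callbacks. A self-contained sketch of the mechanic (illustrative, not Syrin's code; the class and names are hypothetical):

```python
class RunBudget:
    """Track spend against a cap; fire each threshold callback exactly once."""
    def __init__(self, cap, thresholds):
        self.cap = cap
        self.spent = 0.0
        self.thresholds = sorted(thresholds.items())  # {percent: callback}
        self.fired = set()

    def charge(self, cost):
        self.spent += cost
        pct_used = self.spent / self.cap * 100
        for pct, callback in self.thresholds:
            if pct not in self.fired and pct_used >= pct:
                self.fired.add(pct)
                callback(pct_used)
        if self.spent > self.cap:
            raise RuntimeError("budget exceeded: stopping run")

events = []
budget = RunBudget(1.00, {70: lambda p: events.append("warn"),
                          90: lambda p: events.append("downgrade")})
budget.charge(0.75)   # 75% of cap -> "warn" fires
budget.charge(0.20)   # 95% of cap -> "downgrade" fires
print(events)         # ['warn', 'downgrade']
```

The `fired` set is the important detail: without it, every subsequent charge past 70% would re-trigger the warning.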


🤖 Multi-Agent Orchestration (Teams of Agents)

The Problem: Building multi-agent systems is complex.
Syrin's Solution: Simple primitives for powerful orchestration.

from syrin import Agent, Model, DynamicPipeline

class Researcher(Agent):
    model = Model.Almock()
    system_prompt = "You research topics."

class Writer(Agent):
    model = Model.Almock()
    system_prompt = "You write reports."

# LLM decides which agents to spawn
pipeline = DynamicPipeline(agents=[Researcher, Writer], model=Model.Almock())
result = pipeline.run("Research AI trends and write a summary")
print(result.content, f"${result.cost:.4f}")

# Or manually:
researcher = Researcher()
result = researcher.handoff(Writer, "Write article from research", transfer_context=True)

Multi-Agent Patterns:

  • Handoff — Route to specialist agents
  • Spawn — Create sub-agents for subtasks
  • DynamicPipeline — LLM orchestrates agent selection
  • Parallel execution — Run multiple agents simultaneously
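
Stripped of the LLM, a sequential handoff is just each agent transforming the previous agent's result while shared context accumulates. An illustrative stub (the names and classes are hypothetical, not Syrin's API):

```python
class StubAgent:
    """Stand-in for an agent: a name plus a transform over (task, context)."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def run(self, task, context):
        return self.fn(task, context)

def run_pipeline(agents, task):
    context = {}                       # shared state, like transfer_context=True
    result = task
    for agent in agents:
        result = agent.run(result, context)
        context[agent.name] = result   # later agents can read earlier outputs
    return result

researcher = StubAgent("researcher", lambda task, ctx: task + " [3 key findings]")
writer = StubAgent("writer", lambda task, ctx: "Report: " + task)
print(run_pipeline([researcher, writer], "AI trends"))
# Report: AI trends [3 key findings]
```

A dynamic pipeline replaces the fixed `agents` list with an LLM call that picks which stub to invoke next; the accumulation of results into shared context is unchanged.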

πŸ›‘οΈ Guardrails & Safety (Input/Output Validation)

The Problem: Agents produce harmful or incorrect output.
Syrin's Solution: Built-in guardrails with automatic blocking.

from syrin import Agent, Model, GuardrailChain
from syrin.guardrails import LengthGuardrail, ContentFilter

class SafeAgent(Agent):
    model = Model.Almock()
    guardrails = GuardrailChain([
        LengthGuardrail(max_length=4000),
        ContentFilter(blocked_words=["spam", "malicious"]),
    ])

result = SafeAgent().response("User input")
print(result.report.guardrail.passed)   # True/False
print(result.report.guardrail.blocked)  # True if blocked

Guardrail Types:

  • Length — Max input/output length
  • ContentFilter — Block harmful words
  • PII Detection — Detect personal information
  • Custom — Your validation logic
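
A guardrail chain is conceptually simple: run each check in order, and the first failure blocks the output. A stdlib-only sketch of that flow (illustrative; the function names are hypothetical, not Syrin's API):

```python
def length_guardrail(max_length):
    def check(text):
        ok = len(text) <= max_length
        return ok, "" if ok else f"output exceeds {max_length} chars"
    return check

def content_filter(blocked_words):
    def check(text):
        hits = [w for w in blocked_words if w in text.lower()]
        return not hits, f"blocked words: {hits}" if hits else ""
    return check

def run_chain(guardrails, text):
    """Run checks in order; the first failing guardrail blocks the output."""
    for check in guardrails:
        ok, reason = check(text)
        if not ok:
            return {"passed": False, "blocked": True, "reason": reason}
    return {"passed": True, "blocked": False, "reason": ""}

chain = [length_guardrail(4000), content_filter(["spam", "malicious"])]
print(run_chain(chain, "A helpful answer"))      # passes both checks
print(run_chain(chain, "click this spam link"))  # blocked by the content filter
```

Custom guardrails slot into the same shape: any callable returning `(ok, reason)` can join the chain.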

🔌 Production API & Serving (Ship to Production)

The Problem: "How do I serve this to users?"
Syrin's Solution: A one-line HTTP API plus a built-in playground.

agent = Assistant()  # the Assistant class from the quickstart above
agent.serve(port=8000, enable_playground=True, debug=True)
# Visit http://localhost:8000/playground

Features:

  • ✅ HTTP API (POST /chat, POST /stream)
  • ✅ Web playground (chat UI with cost display)
  • ✅ Real-time observability panel
  • ✅ Multi-agent support (agent selector)
  • ✅ MCP server integration

🔄 Lifecycle & Hooks (Full Control)

The Problem: You need to run custom logic at specific points.
Syrin's Solution: 72+ hooks for every lifecycle event.

Event When It Fires
LLM_REQUEST_START Before LLM call
TOOL_CALL_START Before tool execution
BUDGET_THRESHOLD Budget threshold reached
CHECKPOINT_SAVED State saved
CIRCUIT_TRIP Circuit breaker opens
HANDOFF_START Agent hands off work
SPAWN_START Sub-agent created
... 60+ more events
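
The `agent.events.on(...)` pattern shown earlier is plain publish/subscribe: a registry mapping event names to handler lists. A minimal sketch of that mechanism (illustrative, not Syrin's implementation):

```python
class EventBus:
    """Map event names to handler lists; emit calls each handler in order."""
    def __init__(self):
        self._handlers = {}

    def on(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event, payload=None):
        for handler in self._handlers.get(event, []):
            handler(payload)

log = []
bus = EventBus()
bus.on("llm.request_start", lambda ctx: log.append(f"LLM call #{ctx['iteration']}"))
bus.on("budget.threshold", lambda ctx: log.append(f"Budget at {ctx['percentage']}%"))

bus.emit("llm.request_start", {"iteration": 1})
bus.emit("budget.threshold", {"percentage": 70})
print(log)  # ['LLM call #1', 'Budget at 70%']
```

Events nobody subscribed to are simply dropped, which is why instrumenting only the hooks you care about costs nothing for the rest.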

🔌 Remote Configuration (Control From Anywhere)

The Problem: "I need to change agent config without redeploying."
Syrin's Solution: Built-in remote configuration server.

from syrin import Agent, Model, configure

# Configure agent remotely
configure(
    agent_id="my-agent",
    endpoint="https://config.syrin.ai",
    polling_interval=60,  # Check for updates every 60 seconds
)

agent = Agent(model=Model.OpenAI("gpt-4o-mini"))
agent.serve(port=8000)

Features:

  • ✅ Change config without redeploying
  • ✅ A/B testing support
  • ✅ Feature flags
  • ✅ Dynamic model switching

🎯 Why Developers Choose Syrin

Feature            Syrin                         "Others"
Budget control     ✅ Built-in, declarative       ❌ DIY or missing
Cost tracking      ✅ Every response              ❌ Guesswork
Agent memory       ✅ 4 types, auto-managed       ❌ Manual setup
Observability      ✅ 72+ hooks, full traces      ❌ Add-on tools
Multi-agent        ✅ Handoff, spawn, pipeline    ❌ Complex orchestration
Type-safe          ✅ StrEnum, mypy strict        ❌ String hell
Production API     ✅ One-line serve              ❌ Build Flask wrapper
Remote config      ✅ Built-in                    ❌ DIY
Circuit breaking   ✅ Built-in                    ❌ External library
Checkpoints        ✅ State persistence           ❌ DIY

🎯 Real Projects Built with Syrin

πŸŽ™οΈ Voice AI Recruiter (examples/resume_agent)

A voice agent that handles recruiter calls using Syrin + Pipecat.

Features:

  • Per-call budget limits ($0.50/call)
  • Memory across conversations
  • Real-time observability
  • Cost tracking per call

Try it:

cd examples/resume_agent
python voice_server.py

📊 Financial Analysis Agent

Processes financial reports with tool calling, memory, and budget constraints.

πŸ” Research Assistant

Multi-agent system that researches topics and writes reports with full cost control.


📚 Documentation

Resource Description
Getting Started 5-minute guide to your first agent
Examples Runnable code for every use case
API Reference Complete API documentation
Architecture How Syrin works under the hood
Budget Control Deep dive into budget features
Memory Memory systems and backends
Multi-Agent Handoff, spawn, DynamicPipeline

⭐ Why Star This Repo?

We're building the agent library we wish existed: production-ready, financially safe, and actually observable.

Every star tells us this matters. It helps us prioritize features and shows the community that agents don't have to be black boxes.

Star Syrin if you want:

  • ✅ Agents that don't surprise you with bills
  • ✅ One library instead of 10 glued together
  • ✅ Built-in observability (no more log scraping)
  • ✅ Memory that actually works
  • ✅ Multi-agent orchestration that's simple

Star Syrin on GitHub


🌐 Community


🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.


📄 License

MIT License — see LICENSE for details.


Agents that ship. No surprise bills.

About

Developer-first Python framework for AI agents with built-in budget control, context, memory and observability.
