
Deity

Build type-safe AI agents with TSX syntax. One dependency. Full control.


About

Deity is a TypeScript framework for building AI agents as declarative TSX components. Instead of wiring up imperative loops, you describe what your agent does — prompts, validation, retry logic, tool use — and the runtime handles execution.

It ships with a single runtime dependency (zod) and provides workflow primitives for sequencing, branching, parallelism, and iteration — all with full type inference.

Features

  • TSX-first API — Define agents using JSX/TSX with a custom JSX runtime
  • Workflow orchestration — Compose agents with Sequence, Parallel, Conditional, Loop, and ForEach
  • Built-in validation — Zod schemas for inputs and outputs, plus semantic validation rules
  • Automatic retry — Configurable retry with error feedback injected back to the LLM
  • Tool calling — Declare tools with Zod schemas; the runtime handles the call loop
  • Observation hooks — Inspect LLM results (tool calls, token usage) before extracting output
  • LLM-free testing — testFullAgent() runs the full pipeline without an LLM
  • Minimal footprint — Single dependency: zod

Packages

  • @limo-labs/deity — Core framework: agents, workflows, compiler, runtime
  • @limo-labs/deity-tools — Built-in tools (file, terminal, memory, web, context)
  • @limo-labs/deity-adapter-copilot — GitHub Copilot SDK adapter

Installation

npm install @limo-labs/deity

Requirements

  • Node.js ≥ 18
  • TypeScript ≥ 5.3 (for TSX support)

TypeScript Configuration

Add the custom JSX runtime to your tsconfig.json:

{
  "compilerOptions": {
    "jsx": "react-jsx",
    "jsxImportSource": "@limo-labs/deity"
  }
}

Quick Start

Define an Agent

import { Agent, Prompt, System, User, Result, Retry } from '@limo-labs/deity';
import { z } from 'zod';

const Summarizer = (
  <Agent
    id="summarizer"
    input={z.object({ text: z.string() })}
    output={z.object({ summary: z.string() })}
  >
    <Prompt>
      <System>You are a summarization expert. Respond with JSON.</System>
      <User>{(ctx) => `Summarize this text:\n\n${ctx.inputs.text}`}</User>
    </Prompt>

    <Result>
      {(ctx, llmResult) => ({
        summary: JSON.parse(llmResult.content).summary,
      })}
    </Result>

    <Retry maxAttempts={3} feedbackOnError={true} />
  </Agent>
);
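
Stripped of the framework, the extraction performed by the Result callback above is plain TypeScript: parse the model's text content as JSON and pull out the typed field. A standalone sketch (llmResult here is a hypothetical stand-in for the object the runtime passes in):

```typescript
// Hypothetical stand-in for the LLM result the runtime hands to <Result>.
const llmResult = { content: '{"summary": "A short recap of the input text."}' };

// Same extraction as the callback above: parse, then pick the schema field.
const output = { summary: JSON.parse(llmResult.content).summary as string };
console.log(output.summary); // A short recap of the input text.
```

If the model returns malformed JSON, JSON.parse throws — which is exactly what the Retry component above is there to catch.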

Add Validation

import { Agent, Prompt, System, User, Result, Validate, Retry } from '@limo-labs/deity';

const StrictSummarizer = (
  <Agent
    id="strict-summarizer"
    input={z.object({ text: z.string() })}
    output={z.object({ summary: z.string(), wordCount: z.number() })}
  >
    <Prompt>
      <System>Summarize in under 50 words. Return JSON with "summary" and "wordCount".</System>
      <User>{(ctx) => ctx.inputs.text}</User>
    </Prompt>

    <Result>
      {(ctx, llmResult) => JSON.parse(llmResult.content)}
    </Result>

    <Validate>
      {(output) => ({
        rules: [
          {
            check: output.wordCount <= 50,
            error: `Summary is ${output.wordCount} words, must be ≤ 50`,
          },
        ],
      })}
    </Validate>

    <Retry maxAttempts={3} feedbackOnError={true} />
  </Agent>
);
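
Semantic rules like the one above reduce to boolean checks paired with error messages. A framework-free sketch of how such a rule list can be evaluated (the rule shape mirrors the example; the evaluator itself is hypothetical, not Deity's internal API):

```typescript
interface Rule {
  check: boolean;
  error: string;
}

// Hypothetical evaluator mirroring the rule shape used by <Validate> above:
// collect the error message of every rule whose check failed.
function evaluateRules(rules: Rule[]): { valid: boolean; errors: string[] } {
  const errors = rules.filter((r) => !r.check).map((r) => r.error);
  return { valid: errors.length === 0, errors };
}

const candidate = { summary: 'A very long summary...', wordCount: 72 };
const verdict = evaluateRules([
  {
    check: candidate.wordCount <= 50,
    error: `Summary is ${candidate.wordCount} words, must be ≤ 50`,
  },
]);
console.log(verdict); // valid: false, errors: ["Summary is 72 words, must be ≤ 50"]
```

With feedbackOnError enabled, messages like these are what the runtime feeds back to the LLM on the next attempt.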

Add Tools

import { Agent, Prompt, System, User, Result, Tools, ToolDef } from '@limo-labs/deity';

const FileAnalyzer = (
  <Agent
    id="file-analyzer"
    input={z.object({ directory: z.string() })}
    output={z.object({ files: z.array(z.string()) })}
  >
    <Tools>
      <ToolDef
        name="list_files"
        description="List files in a directory"
        input={z.object({ path: z.string() })}
        execute={async (input) => {
          const fs = await import('node:fs/promises');
          return { files: await fs.readdir(input.path) };
        }}
      />
    </Tools>

    <Prompt>
      <System>Use list_files to find all TypeScript files.</System>
      <User>{(ctx) => `Scan directory: ${ctx.inputs.directory}`}</User>
    </Prompt>

    <Result>
      {(ctx, llmResult) => ({
        files: llmResult.toolCalls
          ?.filter((tc) => tc.name === 'list_files')
          .flatMap((tc) => tc.result?.files ?? []) ?? [],
      })}
    </Result>
  </Agent>
);
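
The Result callback above aggregates file lists out of the recorded tool calls. The same aggregation in isolation (the toolCalls shape is an assumption inferred from the example, not a documented type):

```typescript
// Hypothetical shape of the toolCalls array the <Result> callback reads.
interface ToolCall {
  name: string;
  result?: { files?: string[] };
}

const toolCalls: ToolCall[] = [
  { name: 'list_files', result: { files: ['a.ts', 'b.ts'] } },
  { name: 'read_file' }, // unrelated call, filtered out
  { name: 'list_files', result: { files: ['c.ts'] } },
];

// Keep list_files calls, flatten their file arrays, default to [] when
// a call produced no result.
const files = toolCalls
  .filter((tc) => tc.name === 'list_files')
  .flatMap((tc) => tc.result?.files ?? []);
console.log(files); // [ 'a.ts', 'b.ts', 'c.ts' ]
```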

Compose a Workflow

import {
  Workflow, Sequence, Parallel, Conditional, Loop,
  Agent, Prompt, System, User, Result, Validate, Retry,
  runTSXWorkflow,
} from '@limo-labs/deity';

// ClassifierAgent, the analyzer agents, ReportAgent, and myLLMAdapter are
// assumed to be defined elsewhere, as in the examples above.
const workflow = Workflow({
  name: 'code-review',
  defaultModel: { adapter: myLLMAdapter },
  children: Sequence({
    children: [
      // Step 1: Classify complexity
      ClassifierAgent,

      // Step 2: Branch on result
      Conditional({
        condition: (ctx) => ctx.getOutput('classifier').isComplex,
        children: [
          // Complex → parallel deep analysis
          Parallel({
            children: [SecurityAnalyzer, PerformanceAnalyzer, QualityAnalyzer],
          }),
          // Simple → quick check
          QuickReviewAgent,
        ],
      }),

      // Step 3: Generate report
      ReportAgent,
    ],
  }),
});

const result = await runTSXWorkflow(workflow, { code: sourceCode });
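
Conditional picks between its two children based on the condition: the first child runs when it holds, the second otherwise. Stripped to its essence (hypothetical names, synchronous for clarity):

```typescript
// Minimal sketch of Conditional's branch selection: children[0] when the
// condition holds, children[1] otherwise.
function conditional<T>(condition: boolean, children: [() => T, () => T]): T {
  return condition ? children[0]() : children[1]();
}

const review = conditional(false, [
  () => 'deep parallel analysis',
  () => 'quick check',
]);
console.log(review); // quick check
```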

Workflow Primitives

  • Sequence — run agents in order: Sequence({ children: [A, B, C] })
  • Parallel — run agents concurrently: Parallel({ children: [A, B, C] })
  • Conditional — branch on a condition: Conditional({ condition: fn, children: [ifTrue, ifFalse] })
  • Loop — repeat N times: Loop({ iterations: 3, children: Agent })
  • ForEach — iterate over an array: ForEach({ items: fn, children: Agent })
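
As a mental model, ForEach fans an agent out over an array the way a map would. A framework-free sketch, synchronous and sequential for clarity (the real runtime is async and may thread context through each iteration):

```typescript
// Framework-free sketch of ForEach: run one agent per item, collect outputs.
// "agent" here is a hypothetical stand-in for a compiled Deity agent.
function forEachSketch<I, O>(items: I[], agent: (item: I) => O): O[] {
  return items.map((item) => agent(item));
}

const shout = (s: string) => s.toUpperCase();
console.log(forEachSketch(['a', 'b'], shout)); // [ 'A', 'B' ]
```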

Testing Without an LLM

Deity includes testFullAgent() to validate the full agent pipeline — prompt construction, result extraction, and validation — without making LLM calls:

import { testFullAgent } from '@limo-labs/deity';

const result = await testFullAgent(MyAgent, {
  inputs: { text: 'Hello world' },
  mockLLMResponse: { content: '{"summary": "A greeting"}' },
});

expect(result.output.summary).toBe('A greeting');

Architecture

@limo-labs/deity
├── components/     TSX components (Agent, Prompt, Tools, Sequence, etc.)
├── compiler/       TSX → AST → AgentComponent compilation
├── engine/         LLM loop, retry, workflow execution, JSON parsing
├── context/        ExecutionContext with optional enhancements
├── conversation/   Message history with automatic pruning
├── memory/         Tiered memory (core/detailed) with relevance scoring
├── session/        File-based session persistence
└── ui/             Event-based UI bridge

Execution Flow

TSX Definition
  → compile (lazy, on first execution)
  → build prompt (Message[])
  → LLM loop (with tool calling)
  → extract result
  → validate (Zod schema + semantic rules)
  → retry if invalid (with error feedback)
  → return typed output
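
The retry step at the end of the flow can be pictured as a small loop: when validation fails, the error message becomes feedback for the next attempt. A framework-free sketch (all names hypothetical; in Deity the feedback is injected back into the conversation for the LLM):

```typescript
type AttemptResult<O> = { output: O; error?: string };

// Sketch of retry-with-error-feedback: each failed attempt's validation
// error is handed to the next attempt.
function runWithRetry<O>(
  attempt: (feedback?: string) => AttemptResult<O>,
  maxAttempts: number,
): O {
  let feedback: string | undefined;
  for (let i = 0; i < maxAttempts; i++) {
    const { output, error } = attempt(feedback);
    if (!error) return output;
    feedback = error; // fed into the next attempt
  }
  throw new Error(`Failed after ${maxAttempts} attempts`);
}

// Simulated agent: fails once, then succeeds after receiving feedback.
let calls = 0;
const summary = runWithRetry((feedback) => {
  calls += 1;
  return feedback
    ? { output: 'short summary' }
    : { output: '', error: 'summary too long' };
}, 3);
console.log(calls, summary); // 2 short summary
```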


Contributing

Contributions are welcome! See CONTRIBUTING.md for setup instructions and coding standards.

git clone https://github.com/Limo-Labs/Limo-Deity.git
cd Limo-Deity
npm install
npm run build
npm test

Roadmap

  • Streaming LLM support
  • Advanced memory compression
  • Distributed execution
  • Plugin system
  • Visual workflow builder
  • Performance profiler

License

MIT © Limo Labs
