
WIP; since the package name was already taken on PyPI, we will be changing the name.


AILib

A simple, intuitive Python SDK for building LLM-powered applications with chains, agents, and tools.

Philosophy: Simplicity of Vercel AI SDK + Power of LangChain = AILib 🚀

# This is all you need to get started!
from ailib import create_chain
chain = create_chain("Translate to {language}: {text}")
result = chain.run(language="Spanish", text="Hello world")

Features

  • 🌐 Multi-Provider Support: Seamlessly switch between OpenAI, Anthropic Claude, and more (🆕)
  • 🚀 Simple API: Inspired by Vercel AI SDK - minimal boilerplate, maximum productivity
  • 🔗 Chains: Sequential prompt execution with fluent API
  • 🔄 Workflows: Advanced orchestration with conditional logic, loops, and parallel execution (🆕)
  • 🤖 Agents: ReAct-style autonomous agents with tool usage
  • 🛠️ Tools: Easy tool creation with decorators and type safety
  • 📝 Templates: Powerful prompt templating system
  • 💾 Sessions: Conversation state and memory management
  • 🔒 Type Safety: Full type hints and optional Pydantic validation
  • 🛡️ Safety: Built-in content moderation and safety hooks
  • 📊 Tracing: Comprehensive observability and debugging support
  • ⚡ Async Support: Both sync and async APIs

Installation

# Basic installation (includes OpenAI support)
pip install ailib

# Install specific LLM providers
pip install ailib[anthropic]      # For Claude support
pip install ailib[all-providers]  # Install all supported providers

# Development and testing
pip install ailib[dev,test]       # For development
pip install ailib[tracing]        # For advanced tracing

Development Setup

For development, clone the repository and install with development dependencies:

# Clone the repository
git clone https://github.com/kapuic/ailib.git
cd ailib

# Create virtual environment with uv (recommended)
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install in development mode with all dependencies
uv pip install -e ".[dev,test]"

# Install pre-commit hooks
pre-commit install

# Run formatters and linters
make format  # Format code with black and isort
make lint    # Check code style

Tutorials

Comprehensive tutorials are available in the examples/tutorials/ directory:

  1. Setup and Installation - Getting started with AILib
  2. Basic LLM Completions - Making your first API calls
  3. Prompt Templates - Building dynamic prompts
  4. Prompt Builder - Constructing conversations programmatically
  5. Session Management - Managing conversation state
  6. Chains - Building sequential workflows
  7. Tools and Decorators - Creating reusable tools
  8. Agents - Building autonomous AI agents
  9. Advanced Features - Async, streaming, and optimization
  10. Real-World Examples - Complete applications

🆕 New Tutorials:

  • Workflows - Complex orchestration with conditional logic, loops, and parallel execution
  • Simplified API - Showcasing the new factory functions

Start with the Tutorial Index for a guided learning path.

Quick Start

The Simplest Way

from ailib import create_chain

# One line to create and run a chain!
chain = create_chain("Translate to French: {text}")
result = chain.run(text="Hello world")
print(result)  # "Bonjour le monde"

Creating Agents

from ailib import create_agent, tool

# Define a tool
@tool
def weather(city: str) -> str:
    """Get weather for a city."""
    return f"Sunny, 72°F in {city}"

# Create agent with tools
agent = create_agent("assistant", tools=[weather])
result = agent.run("What's the weather in Paris?")

Building Workflows 🆕

from ailib import create_workflow

# Create a smart workflow with logic
workflow = (
    create_workflow()
    .step("Analyze sentiment: {text}")
    .if_(lambda r: "positive" in r.lower())
    .then("Write a thank you note")
    .else_("Offer assistance and escalate")
)

result = workflow.run(text="Your product is amazing!")

When You Need More Control

from ailib import create_client, Prompt

# Only use explicit clients when you need specific control
client = create_client("gpt-3.5-turbo")  # OpenAI
# client = create_client("claude-3-opus-20240229")  # Anthropic

# Build prompts programmatically
prompt = Prompt()
prompt.add_system("You are a helpful assistant.")
prompt.add_user("What is the capital of France?")

response = client.complete(prompt.build())
print(response.content)

Multi-Provider Support 🆕

AILib supports 15+ LLM providers through OpenAI-compatible APIs and custom implementations:

from ailib import create_client, create_agent, list_providers

# Many providers work with just a base URL change!
client = create_client("gpt-4")  # OpenAI (default)
client = create_client("mistralai/Mixtral-8x7B-Instruct-v0.1")  # Together
client = create_client("llama-2-70b", provider="groq")  # Groq (fast inference)

# Local models
client = create_client(
    model="llama2",
    base_url="http://localhost:11434/v1"  # Ollama
)

# Create agents with any provider
agent = create_agent("assistant", model="gpt-4")
agent = create_agent("assistant", model="claude-3-opus-20240229")  # Anthropic
agent = create_agent("assistant", provider="together", model="llama-2-70b")

Supported Providers:

  • ✅ OpenAI - GPT-4, GPT-3.5
  • ✅ Anthropic - Claude 3 (Opus, Sonnet, Haiku)
  • ✅ Local - Ollama, LM Studio, llama.cpp
  • ✅ Groq - Fast inference for open models
  • ✅ Perplexity - Online models with web search
  • ✅ DeepSeek - DeepSeek-V2, DeepSeek-Coder
  • ✅ Together - Open models (Llama, Mixtral, etc.)
  • ✅ Anyscale - Scalable open model hosting
  • ✅ Fireworks - Fast open model inference
  • ✅ Moonshot - Kimi models
  • 🔄 More coming soon...

Using Chains - The Easy Way

from ailib import create_chain

# Create a chain with the simplified API - no client needed!
chain = create_chain(
    "You are a helpful assistant.",
    "What is the capital of {country}?",
    "What is the population?"
)

result = chain.run(country="France")
print(result)

Alternative: Using direct instantiation for more control

from ailib import Chain, OpenAIClient

client = OpenAIClient()

# Create a multi-step chain
chain = (Chain(client)
    .add_system("You are a helpful assistant.")
    .add_user("What is the capital of {country}?", name="capital")
    .add_user("What is the population of {capital}?")
)

result = chain.run(country="France")
print(result)

Creating Tools

from ailib import tool

@tool
def weather(city: str) -> str:
    """Get the weather for a city."""
    return f"The weather in {city} is sunny and 72°F"

@tool
def calculator(expression: str) -> float:
    """Evaluate a mathematical expression."""
    # Note: eval() is unsafe on untrusted input; use a real expression parser in production.
    return eval(expression)

Using Agents - The Easy Way

from ailib import create_agent

# Create agent with the simplified API
agent = create_agent(
    "assistant",
    tools=[weather, calculator],
    model="gpt-4"
)

# Run agent
result = agent.run("What's the weather in Paris? Also, what's 15% of 85?")
print(result)

Alternative: Using direct instantiation for more control

from ailib import Agent, OpenAIClient

# Create agent with tools
client = OpenAIClient(model="gpt-4")
agent = Agent(llm=client)
agent.with_tools(weather, calculator)

# Run agent
result = agent.run("What's the weather in Paris? Also, what's 15% of 85?")
print(result)

Session Management - The Easy Way

from ailib import create_session, OpenAIClient

# Create session with validation
session = create_session(
    session_id="tutorial-001",
    metadata={"user": "student"}
)

client = OpenAIClient()

# Add messages
session.add_system_message("You are a helpful tutor.")
session.add_user_message("Explain quantum computing")

# Get response with context
response = client.complete(session.get_messages())
session.add_assistant_message(response.content)

# Store memory
session.set_memory("topic", "quantum computing")
session.set_memory("level", "beginner")

Why AILib?

AILib follows the philosophy of Vercel AI SDK rather than LangChain:

  • Simple by default: Start with one line of code, not pages of configuration
  • Progressive disclosure: Complexity is available when you need it, hidden when you don't
  • Multi-provider: Switch between OpenAI, Anthropic, and more with a single parameter
  • Type-safe: Full TypeScript-style type hints and optional runtime validation
  • Production-ready: Built-in safety, tracing, and error handling

# LangChain style (verbose)
from langchain import LLMChain, PromptTemplate
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.7)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?"
)
chain = LLMChain(llm=llm, prompt=prompt)
result = chain.run("colorful socks")

# AILib style (simple)
from ailib import create_chain

chain = create_chain("What is a good name for a company that makes {product}?")
result = chain.run(product="colorful socks")

Core Concepts

LLM Clients

The SDK provides an abstract LLMClient interface with implementations for different providers:

  • OpenAIClient: OpenAI GPT models (GPT-4, GPT-3.5-turbo, etc.)
  • Easy to extend with custom implementations

Prompt Templates

Templates support variable substitution and partial formatting:

from ailib import PromptTemplate

template = PromptTemplate("Translate '{text}' to {language}")
result = template.format(text="Hello", language="French")

# Partial templates
partial = template.partial(language="Spanish")
result = partial.format(text="Goodbye")
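Under the hood, partial formatting just pre-binds some variables and defers the rest. A minimal stdlib-only sketch of the idea (an illustration, not AILib's actual implementation):

```python
class MiniTemplate:
    """Tiny illustration of a template that supports partial formatting."""

    def __init__(self, template: str, **bound):
        self.template = template
        self.bound = bound  # variables fixed in advance

    def partial(self, **kwargs) -> "MiniTemplate":
        # Return a new template with some variables pre-filled
        return MiniTemplate(self.template, **{**self.bound, **kwargs})

    def format(self, **kwargs) -> str:
        # Merge pre-bound and call-time variables, then substitute
        return self.template.format(**{**self.bound, **kwargs})


t = MiniTemplate("Translate '{text}' to {language}")
partial = t.partial(language="Spanish")
print(partial.format(text="Goodbye"))  # Translate 'Goodbye' to Spanish
```

Each `partial()` call returns a new object, so the original template stays reusable with other bindings.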

Chains

Chains allow sequential execution of prompts with context passing:

chain = (Chain(client)
    .add_user("Generate a random number", name="number")
    .add_user("Double {number}", processor=lambda x: int(x) * 2)
)

Tools and Agents

Tools are functions that agents can use. The @tool decorator automatically:

  • Extracts function documentation
  • Infers parameter types
  • Handles validation automatically

@tool
def search(query: str, max_results: int = 5) -> str:
    """Search the web for information."""
    # Implementation
    return results
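The metadata a `@tool`-style decorator needs is all available on the function object itself via the `inspect` module. A simplified sketch of that mechanism (not AILib's internals):

```python
import inspect


def describe_tool(fn):
    """Extract the name, docstring, and parameter info a tool schema needs."""
    sig = inspect.signature(fn)
    params = {
        name: {
            "annotation": p.annotation,
            "has_default": p.default is not inspect.Parameter.empty,
        }
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": params,
    }


def search(query: str, max_results: int = 5) -> str:
    """Search the web for information."""
    return "results"


spec = describe_tool(search)
print(spec["name"])         # search
print(spec["description"])  # Search the web for information.
```

From a spec like this, a decorator can build the JSON schema an LLM provider expects for function calling.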

Advanced Features

Safety and Moderation

AILib includes built-in safety features to ensure responsible AI usage:

from ailib.safety import enable_safety, with_moderation

# Enable global safety checks
enable_safety(
    block_harmful=True,
    max_length=4000,
    blocked_words=["violence", "hate"]
)

# Use with OpenAI moderation
pre_hook, post_hook = with_moderation()

# Check content directly
from ailib.safety import check_content
is_safe, violations = check_content("Some text to check")

Tracing and Observability

Comprehensive tracing support for debugging and monitoring:

from ailib.tracing import get_trace_manager

# Automatic tracing for agents and chains
agent = create_agent("assistant", verbose=True)
result = agent.run("Complex task")  # Automatically traced

# Access trace data
manager = get_trace_manager()
trace = manager.get_trace(trace_id)
print(trace.to_dict())  # Full execution history

Async Support

All main components support async operations:

import asyncio

async def example():
    response = await client.acomplete(messages)
    result = await chain.arun(context="value")
    answer = await agent.arun("Task description")

asyncio.run(example())

Custom Processors

Add processing functions to chain steps:

def extract_number(text: str) -> int:
    import re
    match = re.search(r'\d+', text)
    return int(match.group()) if match else 0

chain.add_user("How many apples?", processor=extract_number)

Tool Registry

Manage tools programmatically:

from ailib import ToolRegistry

registry = ToolRegistry()
registry.register(my_tool)

# Use with agent
agent = create_agent("assistant", tools=registry)

Rate Limiting

Built-in rate limiting to prevent abuse:

from ailib.safety import set_rate_limit, check_rate_limit

# Set rate limit: 10 requests per minute per user
set_rate_limit(max_requests=10, window_seconds=60)

# Check before making requests
if check_rate_limit("user-123"):
    result = agent.run("Query")
else:
    print("Rate limit exceeded")

Factory Functions vs Direct Instantiation

AILib provides two ways to create objects:

  1. Factory Functions (Recommended): Simple, validated, and safe

    agent = create_agent("assistant", temperature=0.7)
    chain = create_chain("Prompt template")
    session = create_session(max_messages=100)
  2. Direct Instantiation: More control, no validation

    agent = Agent(llm=client, temperature=5.0)  # No validation!

Use factory functions for safety, direct instantiation for flexibility.
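The safety difference comes from the factory validating arguments before constructing anything. A hypothetical sketch of that pattern (the parameter names and ranges here are illustrative, not AILib's actual checks):

```python
def create_agent_checked(name: str, temperature: float = 0.7, max_steps: int = 10) -> dict:
    """Illustrative factory: validate inputs, then build a plain config dict."""
    if not name:
        raise ValueError("agent name must be non-empty")
    if not 0.0 <= temperature <= 2.0:
        raise ValueError(f"temperature must be in [0, 2], got {temperature}")
    if max_steps < 1:
        raise ValueError("max_steps must be >= 1")
    return {"name": name, "temperature": temperature, "max_steps": max_steps}


agent_cfg = create_agent_checked("assistant", temperature=0.7)

try:
    create_agent_checked("assistant", temperature=5.0)  # rejected by the factory
except ValueError as e:
    print(e)  # temperature must be in [0, 2], got 5.0
```

Direct instantiation skips this gate entirely, which is why it is reserved for cases where you know the inputs are sane.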

Best Practices

  1. Use environment variables for API keys:

    export OPENAI_API_KEY="your-key"
  2. Enable verbose mode for debugging:

    # With factory functions
    agent = create_agent("assistant", verbose=True)
    chain = create_chain("Template", verbose=True)
    
    # Or with fluent API
    chain.verbose(True)
  3. Set appropriate max_steps for agents to prevent infinite loops

  4. Use sessions to maintain conversation context

  5. Type your tool functions for better validation and documentation

  6. Use safety features in production environments

  7. Enable tracing for debugging complex workflows

Requirements

  • Python >= 3.10
  • OpenAI API key (for OpenAI models)

License

MIT License - see LICENSE file for details

Testing

Running Tests

# Run unit tests
make test

# Run notebook validation tests
make test-notebooks-lax

# Test specific notebook
pytest --nbval-lax examples/tutorials/01_setup_and_installation.ipynb

Notebook Validation

All tutorial notebooks are automatically tested to ensure they work correctly:

# Install test dependencies
pip install -e ".[test]"

# Validate notebooks (recommended - ignores output differences)
make test-notebooks-lax

# Strict validation (checks outputs match)
make test-notebooks

See docs/notebook_testing.md for detailed testing guidelines.

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

Project Status

AILib is under active development. Current version includes:

  • ✅ Core LLM client abstractions
  • ✅ Multi-provider support (OpenAI, Anthropic) 🆕
  • ✅ Chain and agent implementations
  • ✅ Tool system with decorators
  • ✅ Session management
  • ✅ Safety and moderation hooks
  • ✅ Comprehensive tracing
  • ✅ Full async support
  • 🔄 More LLM providers (Ollama, Google Gemini - coming soon)
  • 🔄 Vector store integrations (coming soon)
  • 🔄 Streaming support (coming soon)

See ROADMAP.md for detailed development plans and upcoming features.

Credits

Created by Kapui Cheung as a demonstration of modern Python SDK design, combining the simplicity of Vercel AI SDK with the power of LangChain.
