CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Development Commands

Installation

  • make install-dev - Install core development dependencies
  • make install-proxy-dev - Install proxy development dependencies with full feature set
  • make install-test-deps - Install all test dependencies

Testing

  • make test - Run all tests
  • make test-unit - Run unit tests (tests/test_litellm) with 4 parallel workers
  • make test-integration - Run integration tests (excludes unit tests)
  • pytest tests/ - Direct pytest execution

Code Quality

  • make lint - Run all linting (Ruff, MyPy, Black, circular imports, import safety)
  • make format - Apply Black code formatting
  • make lint-ruff - Run Ruff linting only
  • make lint-mypy - Run MyPy type checking only

Single Test Files

  • poetry run pytest tests/path/to/test_file.py -v - Run specific test file
  • poetry run pytest tests/path/to/test_file.py::test_function -v - Run specific test

Running Scripts

  • poetry run python script.py - Run Python scripts (use for non-test files)

GitHub Issue & PR Templates

When contributing to the project, use the appropriate templates:

Bug Reports (.github/ISSUE_TEMPLATE/bug_report.yml):

  • Describe what happened vs. what you expected
  • Include relevant log output
  • Specify your LiteLLM version

Feature Requests (.github/ISSUE_TEMPLATE/feature_request.yml):

  • Describe the feature clearly
  • Explain the motivation and use case

Pull Requests (.github/pull_request_template.md):

  • Add at least 1 test in tests/litellm/
  • Ensure make test-unit passes

Architecture Overview

LiteLLM is a unified interface for 100+ LLM providers with two main components:

Core Library (litellm/)

  • Main entry point: litellm/main.py - Contains core completion() function
  • Provider implementations: litellm/llms/ - Each provider has its own subdirectory
  • Router system: litellm/router.py + litellm/router_utils/ - Load balancing and fallback logic
  • Type definitions: litellm/types/ - Pydantic models and type hints
  • Integrations: litellm/integrations/ - Third-party observability, caching, logging
  • Caching: litellm/caching/ - Multiple cache backends (Redis, in-memory, S3, etc.)

Proxy Server (litellm/proxy/)

  • Main server: proxy_server.py - FastAPI application
  • Authentication: auth/ - API key management, JWT, OAuth2
  • Database: db/ - Prisma ORM with PostgreSQL/SQLite support
  • Management endpoints: management_endpoints/ - Admin APIs for keys, teams, models
  • Pass-through endpoints: pass_through_endpoints/ - Provider-specific API forwarding
  • Guardrails: guardrails/ - Safety and content filtering hooks
  • UI Dashboard: Served from _experimental/out/ (Next.js build)

Key Patterns

Provider Implementation

  • Providers inherit from base classes in litellm/llms/base.py
  • Each provider has transformation functions for input/output formatting
  • Support both sync and async operations
  • Handle streaming responses and function calling
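A minimal sketch of the input/output transformation pattern described above. The names (`ChatRequest`, `transform_request`, `transform_response`, the payload keys) are illustrative, not the actual base-class API in litellm/llms/base.py:

```python
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class ChatRequest:
    model: str
    messages: List[Dict[str, str]]


def transform_request(req: ChatRequest) -> Dict[str, Any]:
    # Input transformation: unified request -> provider-specific payload.
    return {
        "model_id": req.model,
        "inputs": [m["content"] for m in req.messages],
    }


def transform_response(raw: Dict[str, Any]) -> Dict[str, Any]:
    # Output transformation: raw provider response -> OpenAI-style shape.
    return {
        "choices": [
            {"message": {"role": "assistant", "content": raw["output"]}}
        ]
    }
```

Each real provider subdirectory pairs transformations like these so the core completion() path never sees provider-specific shapes.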

Error Handling

  • Provider-specific exceptions mapped to OpenAI-compatible errors
  • Fallback logic handled by Router system
  • Comprehensive logging through litellm/_logging.py
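The exception-mapping idea can be sketched as a table lookup with a generic fallback. The class and mapping names here are hypothetical, not litellm's actual exception hierarchy:

```python
class ProviderTimeoutError(Exception):
    """Stands in for a provider SDK's timeout exception."""


class OpenAICompatibleError(Exception):
    def __init__(self, status_code: int, message: str):
        self.status_code = status_code
        self.message = message
        super().__init__(message)


# Most-specific-first mapping from provider exceptions to OpenAI-style errors.
EXCEPTION_MAP = [
    (ProviderTimeoutError, 408, "Request timed out"),
]


def map_provider_exception(exc: Exception) -> OpenAICompatibleError:
    for exc_type, status, message in EXCEPTION_MAP:
        if isinstance(exc, exc_type):
            return OpenAICompatibleError(status, message)
    # Unknown provider errors surface as a generic 500.
    return OpenAICompatibleError(500, str(exc))
```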

Configuration

  • YAML config files for proxy server (see proxy/example_config_yaml/)
  • Environment variables for API keys and settings
  • Database schema managed via Prisma (proxy/schema.prisma)

Development Notes

Code Style

  • Uses Black formatter, Ruff linter, MyPy type checker
  • Pydantic v2 for data validation
  • Async/await patterns throughout
  • Type hints required for all public APIs
  • Avoid imports within methods — place all imports at the top of the file (module-level). Inline imports inside functions/methods make dependencies harder to trace and hurt readability. The only exception is avoiding circular imports where absolutely necessary.
  • Use dict spread for immutable copies — prefer {**original, "key": new_value} over dict(obj) + mutation. The spread produces the final dict in one step and makes intent clear.
  • Guard at resolution time — when resolving an optional value through a fallback chain (a or b or ""), raise immediately if the resolved result being empty is an error. Don't pass empty strings or sentinel values downstream for the callee to deal with.
  • Extract complex comprehensions to named helpers — a set/dict comprehension that calls into the DB or manager (e.g. "which of these server IDs are OAuth2?") belongs in a named helper function, not inline in the caller.
  • FastAPI parameter declarations — mark required query/form params with = Query(...) / = Form(...) explicitly when other params in the same handler are optional. Mixing str (required) with Optional[str] = None in the same signature causes silent 422s when the required param is missing.
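The dict-spread and guard-at-resolution rules above can be sketched together. The function and env-var names are illustrative only:

```python
import os


def resolve_api_base(explicit=None, env_var="EXAMPLE_API_BASE", default=None):
    # Fallback chain: explicit argument -> environment -> default.
    resolved = explicit or os.environ.get(env_var) or default
    if not resolved:
        # Guard at resolution time: raise here instead of passing "" downstream.
        raise ValueError("api_base could not be resolved")
    return resolved


original = {"model": "example-model", "temperature": 0.2}
# Dict spread: the final dict is produced in one step; original is untouched.
updated = {**original, "temperature": 0.7}
```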

Testing Strategy

  • Unit tests in tests/test_litellm/
  • Integration tests for each provider in tests/llm_translation/
  • Proxy tests in tests/proxy_unit_tests/
  • Load tests in tests/load_tests/
  • Always add tests when adding new entity types or features — if the existing test file covers other entity types, add corresponding tests for the new one
  • Keep monkeypatch stubs in sync with real signatures — when a function gains a new optional parameter, update every fake_* / stub_* in tests that patch it to also accept that kwarg (even as **kwargs). Stale stubs fail with unexpected keyword argument and mask real bugs.
  • Test all branches of name→ID resolution — when adding server/resource lookup that resolves names to UUIDs, test: (1) name resolves and UUID is allowed, (2) name resolves but UUID is not allowed, (3) name does not resolve at all. The silent-fallback path is where access-control bugs hide.
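The stale-stub failure mode can be shown in a few lines. `fetch_user` and the stub names are hypothetical:

```python
def fetch_user(user_id, *, include_teams=False):
    """Imagine this real function later gained the include_teams kwarg."""
    return {"user_id": user_id, "include_teams": include_teams}


# Stale stub: raises TypeError as soon as a caller passes the newer kwarg.
def fake_fetch_user_stale(user_id):
    return {"user_id": user_id}


# Robust stub: absorbs current and future kwargs via **kwargs.
def fake_fetch_user(user_id, **kwargs):
    return {"user_id": user_id, "extra_kwargs": sorted(kwargs)}
```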

UI / Backend Consistency

  • When wiring a new UI entity type to an existing backend endpoint, verify the backend API contract (single value vs. array, required vs. optional params) and ensure the UI controls match — e.g., use a single-select dropdown when the backend accepts a single value, not a multi-select

MCP OAuth / OpenAPI Transport Mapping

  • TRANSPORT.OPENAPI is a UI-only concept. The backend only accepts "http", "sse", or "stdio". Always map it to "http" before any API call (including pre-OAuth temp-session calls).
  • FastAPI validation errors return detail as an array of {loc, msg, type} objects. Error extractors must handle: array (map .msg), string, nested {error: string}, and fallback.
  • When an MCP server already has authorization_url stored, skip OAuth discovery (_discovery_metadata) — the server URL for OpenAPI MCPs is the spec file, not the API base, and fetching it causes timeouts.
  • client_id should be optional in the /authorize endpoint — if the server has a stored client_id in credentials, use that. Never require callers to re-supply it.
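The real error extractor lives in the UI (TypeScript, src/utils); a Python sketch of the same shape-handling logic, covering the four cases listed above:

```python
def extract_error_message(detail):
    # FastAPI validation errors: a list of {loc, msg, type} objects.
    if isinstance(detail, list):
        msgs = [item.get("msg", "") for item in detail if isinstance(item, dict)]
        return "; ".join(m for m in msgs if m)
    # Plain string error.
    if isinstance(detail, str):
        return detail
    # Nested {error: string} shape.
    if isinstance(detail, dict) and isinstance(detail.get("error"), str):
        return detail["error"]
    # Last-resort fallback.
    return str(detail)
```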

MCP Credential Storage

  • OAuth credentials and BYOK credentials share the litellm_mcpusercredentials table, distinguished by a "type" field in the JSON payload ("oauth2" vs plain string).
  • When deleting OAuth credentials, check type before deleting to avoid accidentally deleting a BYOK credential for the same (user_id, server_id) pair.
  • Always pass the raw expires_at timestamp to the client — never set it to None for expired credentials. Let the frontend compute the "Expired" display state from the timestamp.
  • Use RecordNotFoundError (not bare except Exception) when catching "already deleted" in credential delete endpoints.
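The type check before deletion can be sketched as follows, assuming only what the bullets above state (JSON payload with a "type" field for OAuth, plain string for BYOK):

```python
import json


def is_oauth_credential(stored_value) -> bool:
    # OAuth rows store a JSON object with "type": "oauth2";
    # BYOK rows store a plain API-key string.
    try:
        payload = json.loads(stored_value)
    except (TypeError, ValueError):
        return False  # not JSON at all -> BYOK credential, leave it alone
    return isinstance(payload, dict) and payload.get("type") == "oauth2"
```

A delete endpoint would call this before removing the row for a (user_id, server_id) pair.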

Browser Storage Safety (UI)

  • Never write LiteLLM access tokens or API keys to localStorage — use sessionStorage only. localStorage survives browser close and is readable by any injected script (XSS).
  • Shared utility functions (e.g. extractErrorMessage) belong in src/utils/ — never define them inline in hooks or duplicate them across files.

Database Migrations

  • Prisma handles schema migrations
  • Migration files auto-generated with prisma migrate dev
  • Always test migrations against both PostgreSQL and SQLite

Proxy database access

  • Do not write raw SQL for proxy DB operations. Use Prisma model methods instead of execute_raw / query_raw.
  • Use the generated client: prisma_client.db.<model> (e.g. litellm_tooltable, litellm_usertable) with .upsert(), .find_many(), .find_unique(), .update(), .update_many() as appropriate. This avoids schema/client drift, keeps code testable with simple mocks, and matches patterns used in spend logs and other proxy code.
  • No N+1 queries. Never query the DB inside a loop. Batch-fetch with {"in": ids} and distribute in-memory.
  • Batch writes. Use create_many/update_many/delete_many instead of individual calls (these return counts only; update_many/delete_many no-op silently on missing rows). When multiple separate writes target the same table (e.g. in batch_()), order by primary key to avoid deadlocks.
  • Push work to the DB. Filter, sort, group, and aggregate in SQL, not Python. Verify Prisma generates the expected SQL — e.g. prefer group_by over find_many(distinct=...) which does client-side processing.
  • Bound large result sets. Prisma materializes full results in memory. For results over ~10 MB, paginate with take/skip or cursor/take, always with an explicit order. Prefer cursor-based pagination (skip is O(n)). Don't paginate naturally small result sets.
  • Limit fetched columns on wide tables. Use select to fetch only needed fields — returns a partial object, so downstream code must not access unselected fields.
  • Check index coverage. For new or modified queries, check schema.prisma for a supporting index. Prefer extending an existing index (e.g. @@index([a]) → @@index([a, b])) over adding a new one, unless it's a @@unique. Only add indexes for large/frequent queries.
  • Keep schema files in sync. Apply schema changes to all schema.prisma copies (schema.prisma, litellm/proxy/, litellm-proxy-extras/, litellm-js/spend-logs/ for SpendLogs) with a migration under litellm-proxy-extras/litellm_proxy_extras/migrations/.
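The no-N+1 rule above can be illustrated with a toy in-memory table standing in for prisma_client.db.litellm_usertable (the real call would be an awaited find_many with the same where filter):

```python
# Toy in-memory table; each row mimics a litellm_usertable record.
USER_TABLE = {f"u{i}": {"user_id": f"u{i}", "spend": float(i)} for i in range(5)}


def find_many(where):
    # Mimics one batched Prisma query using an {"in": ...} filter.
    wanted = set(where["user_id"]["in"])
    return [row for uid, row in USER_TABLE.items() if uid in wanted]


def spend_by_user(user_ids):
    # One batch query, then distribute in memory — never query inside a loop.
    rows = find_many({"user_id": {"in": user_ids}})
    by_id = {row["user_id"]: row for row in rows}
    return {uid: by_id[uid]["spend"] for uid in user_ids if uid in by_id}
```

The anti-pattern would be calling find_unique once per ID inside the loop: same result, N round-trips instead of one.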

Setup Wizard (litellm/setup_wizard.py)

  • The wizard is implemented as a single SetupWizard class with @staticmethod methods — keep it that way. No module-level functions except run_setup_wizard() (the public entrypoint) and pure helpers (color, ANSI).
  • Use litellm.utils.check_valid_key(model, api_key) for credential validation — never roll a custom completion call.
  • Do not hardcode provider env-key names or model lists that already exist in the codebase. Add a test_model field to each provider entry to drive check_valid_key; set it to None for providers that can't be validated with a single API key (Azure, Bedrock, Ollama).
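A sketch of the test_model-driven validation flow. The provider entries below are illustrative placeholders (real names and env keys come from the codebase), and check_valid_key is injected so the sketch is testable; in the wizard it is litellm.utils.check_valid_key(model, api_key):

```python
# Illustrative entries only — do not hardcode these in the real wizard.
PROVIDERS = [
    {"name": "openai", "test_model": "gpt-3.5-turbo"},
    {"name": "azure", "test_model": None},  # cannot validate with a single key
]


def validate_provider(entry, api_key, check_valid_key):
    # test_model=None means validation is skipped for this provider.
    if entry["test_model"] is None:
        return None
    return check_valid_key(model=entry["test_model"], api_key=api_key)
```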

Enterprise Features

  • Enterprise-specific code in enterprise/ directory
  • Optional features enabled via environment variables
  • Separate licensing and authentication for enterprise features

HTTP Client Cache Safety

  • Never close HTTP/SDK clients on cache eviction. LLMClientCache._remove_key() must not call close()/aclose() on evicted clients — they may still be used by in-flight requests. Doing so causes RuntimeError: Cannot send a request, as the client has been closed. after the 1-hour TTL expires. Cleanup happens at shutdown via close_litellm_async_clients().
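The eviction rule reduces to one line: drop the reference, never close. A minimal sketch (not the real LLMClientCache, which also handles TTLs):

```python
class FakeClient:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class LLMClientCacheSketch:
    """Sketch of the eviction rule only — not the real LLMClientCache."""

    def __init__(self):
        self._cache = {}

    def set(self, key, client):
        self._cache[key] = client

    def _remove_key(self, key):
        # Drop the reference only — never call close()/aclose() here,
        # since in-flight requests may still be using this client.
        self._cache.pop(key, None)
```

Actual cleanup belongs in shutdown (close_litellm_async_clients()), where no requests can still hold a client.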

Troubleshooting: DB schema out of sync after proxy restart

litellm-proxy-extras runs prisma migrate deploy on startup using its own bundled migration files, which may lag behind schema changes in the current worktree. Symptoms: Unknown column, Invalid prisma invocation, or missing data on new fields.

Diagnose: Run \d "TableName" in psql and compare against schema.prisma — missing columns confirm the issue.

Fix options:

  1. Create a Prisma migration (permanent) — run prisma migrate dev --name <description> in the worktree. The generated file will be picked up by prisma migrate deploy on next startup.
  2. Apply manually for local dev — run psql -d litellm -c "ALTER TABLE ... ADD COLUMN IF NOT EXISTS ..." after each proxy start. Fine for dev, not for production.
  3. Update litellm-proxy-extras — if the package is installed from PyPI, its migration directory must include the new file. Either update the package or run the migration manually until the next release ships it.