@tanmaysharma2001
Closes #156

Summary

Replaced custom provider implementations with LiteLLM to enable unified access to multiple LLM providers (Ollama, Gemini, DeepSeek, OpenAI, Anthropic) through a single, consistent interface.

Changes made

  1. Added LiteLLM Provider (models.py)
  • Created LiteLLMProvider class with a unified completion API (see the first sketch after this list)
  • Supports streaming, JSON output format, and the temperature and top_p parameters
  • Automatic response-format conversion for compatibility
  • Configurable api_base for local providers (Ollama)
  2. Updated LLM Initialization (llm_utils.py)
  • Modified initialize_llm_provider() to use LiteLLM by default (see the second sketch after this list)
  • Automatic Ollama model prefixing (ollama/{model_name})
  • Maintains backward compatibility with legacy providers (use_litellm=False)
  • Provider detection based on model name
  3. Enhanced Provider Support (models.py)
  • Extended the ModelProvider enum with the following members (see the third sketch after this list):
    • DEEPSEEK
    • OPENAI
    • ANTHROPIC
  4. Configuration Updates (prompt.py)
  • Added model parameters for DeepSeek, OpenAI, and Anthropic models
  • Added provider mappings for the new models
  • Added environment variable support (see the final sketch after this list):
    • DEEPSEEK_API_KEY
    • OPENAI_API_KEY
    • ANTHROPIC_API_KEY
    • OLLAMA_API_BASE (default: http://localhost:11434)
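
For item 1, a minimal sketch of what a wrapper like LiteLLMProvider might look like. Only the class name, constructor options, and supported parameters come from this PR description; the method name `complete` and the body are illustrative assumptions, not the actual models.py code. LiteLLM normalizes all providers to the OpenAI response format, which is what enables the single interface:

```python
# Hypothetical sketch of the LiteLLMProvider described above; the real
# models.py implementation may differ in structure and naming.
from typing import Iterator, Optional, Union

import litellm


class LiteLLMProvider:
    """Unified completion interface over LiteLLM-supported providers."""

    def __init__(self, model: str, api_base: Optional[str] = None):
        self.model = model        # e.g. "ollama/llama3" or "gpt-4o"
        self.api_base = api_base  # only needed for local providers such as Ollama

    def complete(
        self,
        messages: list[dict],
        stream: bool = False,
        json_format: bool = False,
        temperature: float = 0.7,
        top_p: float = 1.0,
    ) -> Union[str, Iterator[str]]:
        kwargs = dict(
            model=self.model,
            messages=messages,
            stream=stream,
            temperature=temperature,
            top_p=top_p,
        )
        if self.api_base:
            kwargs["api_base"] = self.api_base
        if json_format:
            # LiteLLM forwards OpenAI-style response_format to providers that support it.
            kwargs["response_format"] = {"type": "json_object"}

        response = litellm.completion(**kwargs)
        if stream:
            # Yield incremental text chunks from the streamed response.
            return (chunk.choices[0].delta.content or "" for chunk in response)
        # LiteLLM normalizes responses to the OpenAI format.
        return response.choices[0].message.content
```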
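
For item 2, a sketch of the initialization behavior. The function name initialize_llm_provider, the use_litellm flag, the ollama/ prefixing, and the OLLAMA_API_BASE default are from the PR; the local-model set and the legacy_initialize fallback are hypothetical placeholders for whatever detection and legacy path llm_utils.py actually uses:

```python
import os

# Illustrative sketch; the model set and detection logic are assumptions.
OLLAMA_MODELS = {"llama3", "mistral", "phi3"}  # hypothetical local-model names


def initialize_llm_provider(model_name: str, use_litellm: bool = True):
    if not use_litellm:
        # Backward compatibility: fall back to the legacy provider classes.
        return legacy_initialize(model_name)  # hypothetical legacy path

    api_base = None
    if model_name in OLLAMA_MODELS:
        # LiteLLM routes Ollama models via the "ollama/" prefix.
        model_name = f"ollama/{model_name}"
        api_base = os.environ.get("OLLAMA_API_BASE", "http://localhost:11434")

    return LiteLLMProvider(model=model_name, api_base=api_base)
```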
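
For item 3, the enum extension would look roughly like this; the three added members are named in the PR, while the pre-existing members shown are inferred from the providers listed in the summary:

```python
from enum import Enum


class ModelProvider(Enum):
    # Existing members (inferred from the providers named in the summary)
    OLLAMA = "ollama"
    GEMINI = "gemini"
    # Members added by this PR
    DEEPSEEK = "deepseek"
    OPENAI = "openai"
    ANTHROPIC = "anthropic"
```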
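
And for item 4, an assumed shape of the prompt.py environment-variable handling. The variable names and the Ollama default come from the PR; the dict layout is illustrative:

```python
import os

# Sketch only: variable names and the OLLAMA_API_BASE default are from the
# PR description, the mapping structure is an assumption.
PROVIDER_API_KEYS = {
    "deepseek": os.environ.get("DEEPSEEK_API_KEY"),
    "openai": os.environ.get("OPENAI_API_KEY"),
    "anthropic": os.environ.get("ANTHROPIC_API_KEY"),
}
OLLAMA_API_BASE = os.environ.get("OLLAMA_API_BASE", "http://localhost:11434")
```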
