Add support for multiple model configurations with litellm Router (#2808) #2809

Closed
docs/multiple_model_config.md (213 additions, 0 deletions)
# Multiple Model Configuration in CrewAI

CrewAI now supports configuring multiple language models, each with its own API key and settings. This feature allows you to:

1. Load-balance across multiple model deployments
2. Set up fallback models in case of rate limits or errors
3. Configure different routing strategies for model selection
4. Maintain fine-grained control over model selection and usage

## Basic Usage

You can configure multiple models at the agent level:

```python
from crewai import Agent

# Define model configurations
model_list = [
    {
        "model_name": "gpt-4o-mini",
        "litellm_params": {
            "model": "gpt-4o-mini",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-1"
        }
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-2"
        }
    },
    {
        "model_name": "claude-3-sonnet-20240229",
        "litellm_params": {
            "model": "claude-3-sonnet-20240229",  # Required: model name must be specified here
            "api_key": "your-anthropic-api-key"
        }
    }
]

# Create an agent with multiple model configurations
agent = Agent(
    role="Data Analyst",
    goal="Analyze the data and provide insights",
    backstory="You are an expert data analyst with years of experience.",
    model_list=model_list,
    routing_strategy="simple-shuffle"  # Optional routing strategy
)
```
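Once configured, the agent plugs into a crew like any other. A short sketch, assuming the usual `Task`/`Crew` API from recent CrewAI releases:

```python
from crewai import Crew, Task

task = Task(
    description="Analyze the quarterly sales data.",
    expected_output="A short list of key insights.",
    agent=agent,  # the routed agent defined above
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```

Routing happens per LLM call, so a single task run may be served by several of the configured deployments.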

## Routing Strategies

CrewAI supports the following routing strategies for model selection:

- `simple-shuffle`: Randomly selects a model from the list
- `least-busy`: Routes to the model with the least number of ongoing requests
- `usage-based`: Routes based on token usage across models
- `latency-based`: Routes to the model with the lowest latency
- `cost-based`: Routes to the model with the lowest cost

Example with latency-based routing:

```python
agent = Agent(
    role="Data Analyst",
    goal="Analyze the data and provide insights",
    backstory="You are an expert data analyst with years of experience.",
    model_list=model_list,
    routing_strategy="latency-based"
)
```

## Direct LLM Configuration

You can also pass the model list directly to the `LLM` class, independent of any agent:

```python
from crewai import LLM

llm = LLM(
    model="gpt-4o-mini",
    model_list=model_list,
    routing_strategy="simple-shuffle"
)
```
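To sanity-check the routed configuration without building a whole crew, you can call the LLM directly. A minimal sketch, assuming `LLM.call` accepts an OpenAI-style message list (as in recent CrewAI releases):

```python
# Each call is routed across the configured deployments.
response = llm.call([
    {"role": "user", "content": "Give me one sentence about load balancing."}
])
print(response)
```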

## Advanced Configuration

For more advanced setups, you can specify additional per-model parameters, such as sampling temperature and rate limits:

```python
model_list = [
    {
        "model_name": "gpt-4o-mini",
        "litellm_params": {
            "model": "gpt-4o-mini",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-1",
            "temperature": 0.7
        },
        "tpm": 100000,  # Tokens per minute limit
        "rpm": 1000  # Requests per minute limit
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-2",
            "temperature": 0.5
        }
    }
]
```

## Error Handling and Troubleshooting

When working with multiple model configurations, you may encounter various issues. Here are some common problems and their solutions:

### Missing Required Parameters

**Problem**: Router initialization fails with an error about missing parameters.

**Solution**: Ensure each model configuration in `model_list` includes both `model_name` and `litellm_params` with the required `model` parameter:

```python
# Correct configuration
model_config = {
    "model_name": "gpt-4o-mini",  # Required
    "litellm_params": {
        "model": "gpt-4o-mini",  # Required
        "api_key": "your-api-key"
    }
}
```

### Invalid Routing Strategy

**Problem**: Error when specifying an unsupported routing strategy.

**Solution**: Use only the supported routing strategies:

```python
# Valid routing strategies
valid_strategies = [
    "simple-shuffle",
    "least-busy",
    "usage-based",
    "latency-based",
    "cost-based"
]
```
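To fail fast before the Router is even constructed, you can guard the value yourself. The `validate_routing_strategy` helper below is hypothetical, not part of the CrewAI API:

```python
def validate_routing_strategy(strategy: str) -> str:
    # Hypothetical guard: raise early with a clear message instead of
    # letting the Router fail deeper in the stack.
    if strategy not in valid_strategies:
        raise ValueError(
            f"Unsupported routing_strategy {strategy!r}; "
            f"expected one of {valid_strategies}"
        )
    return strategy

routing_strategy = validate_routing_strategy("latency-based")
```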

### API Key Authentication Errors

**Problem**: Authentication errors when making API calls.

**Solution**: Verify that all API keys are valid and have the necessary permissions:

```python
# Check environment variables first
import os

if not os.environ.get("OPENAI_API_KEY"):  # Should be set if using OpenAI models
    raise RuntimeError("OPENAI_API_KEY is not set")

# Or explicitly provide the key in the configuration
model_list = [{
    "model_name": "gpt-4o-mini",
    "litellm_params": {
        "model": "gpt-4o-mini",
        "api_key": "valid-api-key-here"  # Ensure this is correct
    }
}]
```

### Rate Limit Handling

**Problem**: Encountering rate limits with multiple models.

**Solution**: Configure rate limits and implement fallback mechanisms:

```python
model_list = [
    {
        "model_name": "primary-model",
        "litellm_params": {"model": "primary-model", "api_key": "key1"},
        "rpm": 100  # Requests per minute
    },
    {
        "model_name": "fallback-model",
        "litellm_params": {"model": "fallback-model", "api_key": "key2"}
    }
]

# Configure routing so a less-loaded deployment takes over under load
llm = LLM(
    model="primary-model",
    model_list=model_list,
    routing_strategy="least-busy"  # Steers requests to the deployment with the fewest in-flight calls
)
```
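If you want an explicit fallback chain rather than a routing heuristic, litellm's Router itself accepts a `fallbacks` parameter. A sketch using the Router directly — note that CrewAI's `LLM` may not expose this parameter:

```python
from litellm import Router

router = Router(
    model_list=model_list,
    # Retry on "fallback-model" whenever "primary-model" errors or hits a rate limit.
    fallbacks=[{"primary-model": ["fallback-model"]}],
)
```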

### Debugging Router Issues

If you're experiencing issues with the router, you can enable verbose logging to get more information:

```python
import litellm
litellm.set_verbose = True

# Then initialize your LLM
llm = LLM(model="gpt-4o-mini", model_list=model_list)
```

This feature is built on litellm's Router: CrewAI passes your `model_list` and `routing_strategy` through to it, giving agents load balancing and fallback across deployments while keeping each deployment's API key scoped to its own entry.
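For reference, the equivalent direct use of the underlying Router looks like this — a sketch assuming litellm's standard `Router` API, with `"simple-shuffle"` chosen because it is a valid strategy name in both CrewAI and litellm:

```python
from litellm import Router

# The same model_list format is passed straight through to litellm.
router = Router(
    model_list=model_list,
    routing_strategy="simple-shuffle",
)

# completion() picks one deployment from the "gpt-4o-mini" model group.
response = router.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
```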
src/crewai/agent.py (32 additions, 5 deletions)
```diff
@@ -1,9 +1,10 @@
 import os
 import shutil
 import subprocess
+from enum import Enum
 from typing import Any, Dict, List, Literal, Optional, Union
 
-from pydantic import Field, InstanceOf, PrivateAttr, model_validator
+from pydantic import Field, InstanceOf, PrivateAttr, model_validator, field_validator
 
 from crewai.agents import CacheHandler
 from crewai.agents.agent_builder.base_agent import BaseAgent
@@ -86,7 +87,20 @@ class Agent(BaseAgent):
         description="Language model that will run the agent.", default=None
     )
     function_calling_llm: Optional[Any] = Field(
-        description="Language model that will run the agent.", default=None
+        description="Language model that will handle function calling for the agent.", default=None
     )
+    class RoutingStrategy(str, Enum):
+        SIMPLE_SHUFFLE = "simple-shuffle"
+        LEAST_BUSY = "least-busy"
+        USAGE_BASED = "usage-based"
+        LATENCY_BASED = "latency-based"
+        COST_BASED = "cost-based"
+
+    model_list: Optional[List[Dict[str, Any]]] = Field(
+        default=None, description="List of model configurations for routing between multiple models."
+    )
+    routing_strategy: Optional[RoutingStrategy] = Field(
+        default=None, description="Strategy for routing between multiple models (e.g., 'simple-shuffle', 'least-busy', 'usage-based', 'latency-based', 'cost-based')."
+    )
     system_template: Optional[str] = Field(
         default=None, description="System format for the agent."
@@ -148,18 +162,29 @@ def post_init_setup(self):
         # Handle different cases for self.llm
         if isinstance(self.llm, str):
             # If it's a string, create an LLM instance
-            self.llm = LLM(model=self.llm)
+            self.llm = LLM(
+                model=self.llm,
+                model_list=self.model_list,
+                routing_strategy=self.routing_strategy
+            )
         elif isinstance(self.llm, LLM):
             # If it's already an LLM instance, keep it as is
-            pass
+            if self.model_list and not getattr(self.llm, "model_list", None):
+                self.llm.model_list = self.model_list
+                self.llm.routing_strategy = self.routing_strategy
+                self.llm._initialize_router()
         elif self.llm is None:
             # Determine the model name from environment variables or use default
             model_name = (
                 os.environ.get("OPENAI_MODEL_NAME")
                 or os.environ.get("MODEL")
                 or "gpt-4o-mini"
             )
-            llm_params = {"model": model_name}
+            llm_params = {
+                "model": model_name,
+                "model_list": self.model_list,
+                "routing_strategy": self.routing_strategy
+            }
 
             api_base = os.environ.get("OPENAI_API_BASE") or os.environ.get(
                 "OPENAI_BASE_URL"
@@ -207,6 +232,8 @@ def post_init_setup(self):
                 "api_key": getattr(self.llm, "api_key", None),
                 "base_url": getattr(self.llm, "base_url", None),
                 "organization": getattr(self.llm, "organization", None),
+                "model_list": self.model_list,
+                "routing_strategy": self.routing_strategy,
             }
             # Remove None values to avoid passing unnecessary parameters
             llm_params = {k: v for k, v in llm_params.items() if v is not None}
```