Add support for multiple model configurations with litellm Router (#2808) #2809
Conversation
Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment

Overview: The PR successfully implements support for multiple model configurations using the litellm Router, enhancing load balancing and fallback capabilities for language models in CrewAI. This is a significant enhancement that provides greater flexibility and performance in model utilization.
Closing due to inactivity for more than 7 days.
Add Support for Multiple Model Configurations with liteLLM Router
This PR addresses issue #2808 by adding support for configuring multiple language models, each with its own API key and settings, in CrewAI.
Changes

- Updated the `LLM` class to support the `model_list` parameter for configuring multiple models
- Added a `routing_strategy` parameter for selecting how requests are routed across the configured models
- Updated the `Agent` class to pass model configurations to the LLM

Features

- Load balancing across multiple model configurations
- Fallback to alternate models when a request fails (see the usage sketch below)
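
A minimal sketch of how the new parameters might be used. The `model_list` format follows litellm's Router convention; whether CrewAI's `LLM` accepts exactly this shape is per this PR's description, and the model entries, API keys, and routing strategy value below are illustrative:

```python
from crewai import Agent, LLM

# Two deployments of the same logical model, each with its own API key.
# Each entry names the deployment and carries its litellm parameters.
model_list = [
    {
        "model_name": "gpt-4o",
        "litellm_params": {
            "model": "openai/gpt-4o",
            "api_key": "sk-account-one",  # illustrative placeholder
        },
    },
    {
        "model_name": "gpt-4o",
        "litellm_params": {
            "model": "openai/gpt-4o",
            "api_key": "sk-account-two",  # illustrative placeholder
        },
    },
]

llm = LLM(
    model="gpt-4o",
    model_list=model_list,
    routing_strategy="simple-shuffle",  # one of litellm's routing strategies
)

# The agent passes its model configuration through to the LLM,
# so requests are balanced across both deployments.
agent = Agent(
    role="Researcher",
    goal="Answer research questions",
    backstory="An analyst with access to load-balanced models.",
    llm=llm,
)
```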
Documentation

Added a new documentation file `docs/multiple_model_config.md` with detailed examples of how to use the new functionality.

Testing
Added tests in `tests/multiple_model_config_test.py` that verify the new functionality. All tests are passing.
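
As a hedged illustration of what such a test might look like (the test name and the asserted attributes are assumptions for this sketch, not taken from the actual test file):

```python
# Hypothetical test sketch: checks that the constructor retains the
# configuration it was given. Attribute names are assumed.
from crewai import LLM


def test_llm_accepts_model_list_and_routing_strategy():
    model_list = [
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "openai/gpt-4o", "api_key": "key-a"},
        },
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "openai/gpt-4o", "api_key": "key-b"},
        },
    ]
    llm = LLM(
        model="gpt-4o",
        model_list=model_list,
        routing_strategy="simple-shuffle",
    )
    assert llm.model_list == model_list
    assert llm.routing_strategy == "simple-shuffle"
```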
Link to Devin run: https://app.devin.ai/sessions/f1dbb2f840084e1ea3d4e680b749b391

Requested by: Joe Moura ([email protected])