Fix Gemini Model Integration Issues (#2803) #2804
Conversation
Co-Authored-By: Joe Moura <[email protected]>
🤖 Devin AI Engineer: I'll be helping with this pull request!
Note: I can only respond to comments from users who have write access to this repository.
Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment: Gemini Model Integration Fix

Overview

This pull request addresses integration issues with Google's Gemini models by enhancing model name normalization and API key handling. The changes predominantly impact two files: src/crewai/llm.py and tests/llm_test.py.

src/crewai/llm.py Changes

Strengths:
Issues and Recommendations:

1. Constants Definition
Issue: the Gemini identifier strings are hard-coded inline; defining them as a class-level constant would aid maintainability.

```python
class LLM:
    GEMINI_IDENTIFIERS = ("gemini", "gemma-")
```

2. Error Handling
Issue: Absence of error handling for invalid model formats can lead to unexpected crashes.

```python
def _normalize_gemini_model(self, model: str) -> str:
    if not isinstance(model, str):
        raise ValueError(f"Model must be a string, got {type(model)}")
    if not model.strip():
        raise ValueError("Model name cannot be empty")
    # ... normalization continues as before
```

3. Logging Enhancement
Issue: Logging implementation is basic and lacks context.

```python
def _prepare_completion_params(self, messages, **kwargs):
    if self._is_gemini_model(self.model):
        logging.info("Preparing Gemini model", extra={"model": self.model})
```

tests/llm_test.py Changes

Strengths:
Issues and Recommendations:

1. Test Organization
Issue: Related tests are scattered across the file.

```python
class TestGeminiIntegration:
    def test_is_gemini_model(self):
        ...  # test implementation
```

2. Test Data Management
Issue: Hard-coded test values can lead to repetitive updates.

```python
@pytest.fixture
def gemini_model_variants():
    return {
        "valid": ["gemini-pro", "gemini/gemini-1.5-pro"],
        "invalid": ["gpt-4", "claude-3"],
    }
```

3. Mock Usage Enhancement
Issue: Repetitive setup for mocks can clutter test cases.

```python
@pytest.fixture
def mock_llm_response():
    mock_message = MagicMock(content="Paris")
    return MagicMock(choices=[MagicMock(message=mock_message)])
```

General Suggestions
In summary, this PR is a significant step forward in the code quality and robustness of the Gemini model integration. Implementing the suggestions above would further improve the maintainability and clarity of the codebase while guarding against potential runtime errors. Overall, a solid enhancement. Thank you for your work on this!
…handling, enhance logging
Co-Authored-By: Joe Moura <[email protected]>
Fix Gemini Model Integration Issues
Fixes #2803
Description
This PR addresses the issue where Google Gemini models fail in CrewAI due to LiteLLM API key and model parsing issues. The fix handles two main problems:
1. API Key Mapping: LiteLLM expects `GEMINI_API_KEY`, but users typically set `GOOGLE_API_KEY`. This PR adds automatic mapping from `GOOGLE_API_KEY` to `GEMINI_API_KEY` when using Gemini models.

2. Model Name Normalization: Model formats like "models/gemini-pro" or "gemini-pro" aren't parsed correctly as provider/model. This PR adds normalization to ensure all Gemini model names are in the format LiteLLM expects (`gemini/model-name`).
Changes

- `_is_gemini_model` method to detect Gemini models
- `_normalize_gemini_model` method to handle different model formats
- Automatic mapping from `GOOGLE_API_KEY` to `GEMINI_API_KEY`
How to Test
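The concrete test steps were not captured in this page extract. As a hedged, self-contained sketch of the behaviour one would check, the helper below restates the key-mapping rule from the description (it is not the PR's actual code, and it operates on a plain dict rather than os.environ so the checks stay side-effect free):

```python
def map_google_api_key(env: dict) -> None:
    """Copy GOOGLE_API_KEY to GEMINI_API_KEY when only the former is set."""
    if "GEMINI_API_KEY" not in env and "GOOGLE_API_KEY" in env:
        env["GEMINI_API_KEY"] = env["GOOGLE_API_KEY"]

def test_key_mapping():
    env = {"GOOGLE_API_KEY": "secret"}
    map_google_api_key(env)
    assert env["GEMINI_API_KEY"] == "secret"

def test_existing_key_not_overwritten():
    env = {"GOOGLE_API_KEY": "a", "GEMINI_API_KEY": "b"}
    map_google_api_key(env)
    assert env["GEMINI_API_KEY"] == "b"

# Plain calls so the checks also run outside a pytest session.
test_key_mapping()
test_existing_key_not_overwritten()
```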
Link to Devin run
https://app.devin.ai/sessions/f9b3766170dd4ab1863f9b34c9f38f96
Requested by: Joe Moura ([email protected])