Fix Gemini Model Integration Issues (#2803) #2804


Open · wants to merge 2 commits into main

Conversation

devin-ai-integration[bot]
Contributor

Fix Gemini Model Integration Issues

Fixes #2803

Description

This PR addresses the issue where Google Gemini models fail in CrewAI due to LiteLLM API key and model parsing issues. The fix handles two main problems:

  1. API Key Mapping: LiteLLM expects GEMINI_API_KEY but users typically set GOOGLE_API_KEY. This PR adds automatic mapping from GOOGLE_API_KEY to GEMINI_API_KEY when using Gemini models.

  2. Model Name Normalization: Model formats like "models/gemini-pro" or "gemini-pro" aren't parsed correctly as provider/model. This PR adds normalization to ensure all Gemini model names are in the correct format for LiteLLM (gemini/model-name).

Changes

  • Added _is_gemini_model method to detect Gemini models
  • Added _normalize_gemini_model method to handle different model formats
  • Added API key mapping from GOOGLE_API_KEY to GEMINI_API_KEY
  • Added tests for the new methods
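
The helpers listed above can be sketched as standalone functions; this is an illustrative outline of the logic the PR describes, not the actual code in src/crewai/llm.py (names mirror the description, details may differ):

```python
import os

# Substrings used to recognize Gemini-family models (assumed identifiers)
GEMINI_IDENTIFIERS = ("gemini", "gemma-")


def is_gemini_model(model: str) -> bool:
    """Heuristic check: does the model string look like a Gemini/Gemma model?"""
    return any(ident in model.lower() for ident in GEMINI_IDENTIFIERS)


def normalize_gemini_model(model: str) -> str:
    """Rewrite 'models/gemini-pro' or 'gemini-pro' into LiteLLM's 'gemini/gemini-pro'."""
    if model.startswith("models/"):
        model = model[len("models/"):]  # strip the Google API-style prefix
    if not model.startswith("gemini/"):
        model = f"gemini/{model}"       # add the LiteLLM provider prefix
    return model


def map_google_api_key() -> None:
    """Copy GOOGLE_API_KEY into GEMINI_API_KEY when the latter is unset."""
    if "GEMINI_API_KEY" not in os.environ and "GOOGLE_API_KEY" in os.environ:
        os.environ["GEMINI_API_KEY"] = os.environ["GOOGLE_API_KEY"]
```

With this normalization, all three formats from the issue ("models/gemini-pro", "gemini-pro", "gemini/gemini-pro") resolve to the same LiteLLM-compatible string.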

How to Test

from crewai import Agent, Task, Crew, LLM
import os

# Set API key from Google AI Studio
os.environ["GOOGLE_API_KEY"] = "your-api-key"

# These model formats now work correctly
llm1 = LLM(model="models/gemini-pro", temperature=0.7)
llm2 = LLM(model="gemini-pro", temperature=0.7)
llm3 = LLM(model="gemini/gemini-pro", temperature=0.7)

agent = Agent(role="AI Expert", goal="Explain things", backstory="Expert", llm=llm1)

task = Task(
    description="Explain tokenization",
    expected_output="Simple explanation",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task], verbose=True)
crew.kickoff()

Link to Devin run

https://app.devin.ai/sessions/f9b3766170dd4ab1863f9b34c9f38f96

Requested by: Joe Moura ([email protected])

Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@joaomdmoura
Collaborator

Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment: Gemini Model Integration Fix

Overview

This pull request addresses integration issues with Google's Gemini models by enhancing model name normalization and API key handling. The changes predominantly impact two files: src/crewai/llm.py and tests/llm_test.py.

src/crewai/llm.py Changes

Strengths:

  • Improved Detection: Comprehensive addition of Gemini model detection and normalization.
  • API Key Mapping: Effective implementation of API key conversion from GOOGLE_API_KEY to GEMINI_API_KEY.
  • Documentation: Well-structured docstrings enhance usability.
  • Modular Design: Clear separation of concerns through helper methods.

Issues and Recommendations:

1. Constants Definition

Issue: GEMINI_IDENTIFIERS is defined within method scope, making it harder to maintain.
Recommendation: Moving constants to class level enhances maintainability.

class LLM:
    GEMINI_IDENTIFIERS = ("gemini", "gemma-")

2. Error Handling

Issue: Absence of error handling for invalid model formats can lead to unexpected crashes.
Recommendation: Introduce validation and error handling to safeguard against invalid input.

def _normalize_gemini_model(self, model: str) -> str:
    if not isinstance(model, str):
        raise TypeError(f"Model must be a string, got {type(model)}")
    if not model.strip():
        raise ValueError("Model name cannot be empty")
    # ... existing normalization logic follows

3. Logging Enhancement

Issue: Logging implementation is basic and lacks context.
Recommendation: Enhance logging to include contextual information.

def _prepare_completion_params(self, messages, **kwargs):
    if self._is_gemini_model(self.model):
        logging.info("Preparing completion params for Gemini model", extra={"model": self.model})

tests/llm_test.py Changes

Strengths:

  • Good test coverage reflecting new functionality.
  • Effective use of parameterized tests and environment cleanup procedures.

Issues and Recommendations:

1. Test Organization

Issue: Related tests are scattered across the file.
Recommendation: Group related tests into cohesive classes to improve readability.

class TestGeminiIntegration:
    def test_is_gemini_model(self):
        # Test implementation

2. Test Data Management

Issue: Hard-coded test values can lead to repetitive updates.
Recommendation: Utilize test fixtures for commonly used data structures to streamline maintenance.

@pytest.fixture
def gemini_model_variants():
    return {
        "valid": ["gemini-pro", "gemini/gemini-1.5-pro"],
        "invalid": ["gpt-4", "claude-3"]
    }

3. Mock Usage Enhancement

Issue: Repetitive setup for mocks can clutter test cases.
Recommendation: Establish a helper function for mock creation.

@pytest.fixture
def mock_llm_response():
    mock_message = MagicMock(content="Paris")
    return MagicMock(choices=[MagicMock(message=mock_message)])

General Suggestions

  1. Type Hints: Implement type hints throughout the codebase for clarity and maintainability.
  2. Documentation: Enhance docstrings with real-world usage examples.
  3. Configuration Management: Consider creating a dedicated class for managing API keys to centralize and streamline configuration management.
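
The dedicated configuration class suggested in point 3 might look roughly like this; APIKeyManager and its provider table are hypothetical illustrations, not part of this PR:

```python
import os
from typing import Optional


class APIKeyManager:
    """Hypothetical helper that centralizes API-key lookup per provider."""

    # Preferred environment variable first, then accepted aliases
    PROVIDER_KEYS = {
        "gemini": ("GEMINI_API_KEY", "GOOGLE_API_KEY"),
        "openai": ("OPENAI_API_KEY",),
    }

    @classmethod
    def get_key(cls, provider: str) -> Optional[str]:
        """Return the first set environment variable for the provider, if any."""
        for var in cls.PROVIDER_KEYS.get(provider, ()):
            value = os.environ.get(var)
            if value:
                return value
        return None
```

Centralizing the lookup this way would let the GOOGLE_API_KEY → GEMINI_API_KEY fallback live in one place instead of being special-cased inside the LLM class.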

In summary, this PR is a significant step forward for the quality and robustness of the Gemini model integration. Implementing the suggestions above would further improve maintainability and guard against potential runtime errors. Overall, a solid enhancement.

Thank you for your hard work on this!

Development

Successfully merging this pull request may close these issues.

[BUG] Gemini Model Fails in CrewAI Due to LiteLLM API Key + Model Parsing Issues