
Fix issue #2843: Exclude stop parameter for models that don't support it #2845


Open
wants to merge 2 commits into main
Conversation

devin-ai-integration[bot] (Contributor)

Description

This PR fixes issue #2843, where using the OpenAI o3 model with CrewAI raised a BadRequestError because the 'stop' parameter was passed to models that don't support it.

The fix:

  1. Modified the supports_stop_words() method in the LLM class to explicitly check for models known not to support stop words (o3, o3-mini, o4-mini)
  2. Updated the call() method to exclude the 'stop' parameter when the model doesn't support it
  3. Added tests to verify the fix works correctly
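The change described above can be sketched roughly as follows. This is an illustrative outline, not the actual diff: the constant name, constructor signature, and the shape of `call()` are assumptions for the example.

```python
# Illustrative sketch of the fix described above; not the exact diff.
# MODELS_WITHOUT_STOP_SUPPORT and the constructor are assumed names.
from typing import Any, Dict, List, Optional

MODELS_WITHOUT_STOP_SUPPORT = ("o3", "o3-mini", "o4-mini")

class LLM:
    def __init__(self, model: str, stop: Optional[List[str]] = None):
        self.model = model
        self.stop = stop or []

    def supports_stop_words(self) -> bool:
        # Models known to reject the 'stop' parameter return False.
        return not any(
            self.model.startswith(prefix) for prefix in MODELS_WITHOUT_STOP_SUPPORT
        )

    def call(self, messages: List[Dict[str, str]]) -> Dict[str, Any]:
        params: Dict[str, Any] = {"model": self.model, "messages": messages}
        # The fix: only include 'stop' when the model supports it.
        if self.stop and self.supports_stop_words():
            params["stop"] = self.stop
        return params  # in the real class these kwargs go to the completion API
```

With this check in place, an o3 request simply omits 'stop', while models such as gpt-4 continue to receive it unchanged.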

Testing

Added the following tests:

  • test_supports_stop_words_for_o3_model: Verifies that supports_stop_words() returns False for o3 model
  • test_supports_stop_words_for_o4_mini_model: Verifies that supports_stop_words() returns False for o4-mini model
  • test_llm_call_excludes_stop_parameter_for_unsupported_models: Verifies that the 'stop' parameter is excluded when calling models that don't support it

All tests pass successfully.
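The tests described above might look roughly like this. This is a self-contained sketch using a minimal stand-in for the LLM class; the real tests in tests/llm_test.py exercise crewai.llm.LLM directly.

```python
# Minimal stand-in for crewai's LLM, just enough to illustrate the tests.
class LLM:
    UNSUPPORTED = ("o3", "o3-mini", "o4-mini")  # assumed constant name

    def __init__(self, model: str):
        self.model = model

    def supports_stop_words(self) -> bool:
        return not any(self.model.startswith(p) for p in self.UNSUPPORTED)

def test_supports_stop_words_for_o3_model():
    assert LLM(model="o3").supports_stop_words() is False

def test_supports_stop_words_for_o4_mini_model():
    assert LLM(model="o4-mini").supports_stop_words() is False

def test_supports_stop_words_for_supported_model():
    assert LLM(model="gpt-4").supports_stop_words() is True
```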

Link to Devin run

https://app.devin.ai/sessions/4b634d0eb22449f9be7b7a5b0ea35b50

Requested by

Joe Moura ([email protected])

Fixes #2843

devin-ai-integration[bot] (Contributor Author)

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@joaomdmoura (Collaborator)

Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment for PR #2845

Overview

This pull request addresses issue #2843 by adding logic to exclude the stop parameter for specific LLM models that do not support it. The modifications are split between two files: src/crewai/llm.py and tests/llm_test.py.

1. File: src/crewai/llm.py

Code Improvements

  1. Hard-coded Model List:

    • Issue: Models that do not support stop words are hard-coded.
    • Improvement: Consider externalizing this list to a class constant or configuration file for easier maintenance and updates:
      class LLM:
          MODELS_WITHOUT_STOP_SUPPORT = ["o3", "o3-mini", "o4-mini"]
  2. Early Return Optimization:

    • Issue: Current checks can be streamlined.
    • Improvement: Using an early return can enhance readability:
      def supports_stop_words(self) -> bool:
          if any(self.model.startswith(prefix) for prefix in self.MODELS_WITHOUT_STOP_SUPPORT):
              return False
          ...
  3. Missing Type Hints:

    • Issue: The function parameters lack type hints, reducing code clarity.
    • Improvement: Adding complete type hints would improve readability and maintainability:
      def call(self, messages: List[Dict[str, str]], callbacks: Optional[List[BaseCallbackHandler]] = None) -> str:

Links to Historical Context

Historical PRs that modified related sections of llm.py focused on enhancing model validation and preserving backward compatibility; that context underscores the importance of managing model parameters dynamically rather than scattering ad-hoc checks.

2. File: tests/llm_test.py

Code Improvements

  1. Test Organization:

    • Issue: Related tests are not grouped, making it harder to navigate.
    • Improvement: Group related tests in a class for better organization and clarity:
      class TestLLMStopWords:
          ...
  2. Test Case Coverage:

    • Issue: Currently, tests for supported models are absent.
    • Improvement: Adding test cases for models that do support stop words will strengthen test coverage:
      def test_supports_stop_words_for_supported_model():
          llm = LLM(model="gpt-4")
          assert llm.supports_stop_words()
  3. Robust Mock Implementation:

    • Issue: The mock function may not be testing all relevant parameters.
    • Improvement: Enhance the mock with asserted checks for critical parameters:
      def test_llm_call_excludes_stop_parameter_for_unsupported_models(monkeypatch):
          ...
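One way to flesh out such a mock is sketched below. The names (`build_completion_params`, `fake_completion`) are illustrative, not from the PR; the point is that the mock records every keyword argument it receives so the test can assert both that critical parameters are present and that 'stop' was excluded.

```python
# Sketch of a more assertive mock for the 'stop' exclusion test.
from typing import Any, Dict, List

MODELS_WITHOUT_STOP_SUPPORT = ("o3", "o3-mini", "o4-mini")

def build_completion_params(model: str, messages: List[Dict[str, str]],
                            stop: List[str]) -> Dict[str, Any]:
    """Assemble kwargs, dropping 'stop' for models that reject it."""
    params: Dict[str, Any] = {"model": model, "messages": messages}
    if stop and not any(model.startswith(p) for p in MODELS_WITHOUT_STOP_SUPPORT):
        params["stop"] = stop
    return params

captured: Dict[str, Any] = {}

def fake_completion(**kwargs: Any) -> str:
    # The mock records every kwarg so the test can assert on what was sent.
    captured.update(kwargs)
    assert "model" in kwargs and "messages" in kwargs  # critical params present
    return "ok"

# Exercise the unsupported-model path.
result = fake_completion(**build_completion_params(
    "o3", [{"role": "user", "content": "hi"}], stop=["\n"]))
assert "stop" not in captured  # 'stop' must be excluded for o3
```

In the real test suite the same pattern would apply with monkeypatch replacing the actual completion call rather than a hand-rolled fake.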

General Recommendations

  • Documentation: Please ensure that all functions, particularly supports_stop_words, contain comprehensive docstrings outlining their purpose and return values.
  • Error Handling: Consider implementing logging to capture when unsupported models are processed to aid in debugging.
  • Version Support: Update the documentation to reflect which models are supported and those that aren't, ensuring better clarity for future contributions.
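The logging recommendation above could look something like the following. This is a hypothetical sketch; the actual codebase's logger configuration and method structure may differ.

```python
import logging

logger = logging.getLogger(__name__)

MODELS_WITHOUT_STOP_SUPPORT = ("o3", "o3-mini", "o4-mini")  # assumed constant

def supports_stop_words(model: str) -> bool:
    """Return False for models known to reject the 'stop' parameter."""
    if any(model.startswith(prefix) for prefix in MODELS_WITHOUT_STOP_SUPPORT):
        # Log the exclusion to aid debugging, as suggested above.
        logger.debug("Model %s does not support 'stop'; parameter dropped.", model)
        return False
    return True
```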

Security Considerations

There are no apparent vulnerabilities found in this PR. The logic appears safe as no sensitive data is exposed.

Performance Impact

Performance impact should be minimal: the change introduces only a constant-time prefix check per call for model validation.

Overall, this PR presents a well-structured approach to addressing model capability checks and has incorporated necessary improvements in both implementation and testing practices. Please consider the suggestions above to further enhance code quality and maintainability.

…ze methods, add docstrings, enhance tests

Co-Authored-By: Joe Moura <[email protected]>
Development

Successfully merging this pull request may close these issues.

[BUG] o3 is giving a stop message in crewai, version 0.120.1