
fix: Improve type conversion for LLM parameters #1835

Closed
wants to merge 12 commits

Conversation

devin-ai-integration[bot]
Contributor

Fix Type Conversion Issues in LLM Parameter Handling

Changes Made

  • Improved type conversion for LLM parameters in Agent class
  • Added proper handling of None values and type validation
  • Fixed tool-related type checking in crew tests
  • Added pytest.ini to force VCR to use recorded cassettes in CI
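The type-conversion and None-handling described above can be sketched roughly as follows. All names here (`PARAM_TYPES`, `convert_llm_param`) are hypothetical illustrations, not the actual Agent code:

```python
from typing import Any, Optional

# Expected Python types for a few common LLM parameters (illustrative subset).
PARAM_TYPES = {
    "temperature": float,
    "max_tokens": int,
    "top_p": float,
}

def convert_llm_param(key: str, value: Any) -> Optional[Any]:
    """Coerce a raw parameter value to its expected type.

    Returns None for None inputs, unknown keys, or unconvertible values,
    so callers can filter the result out instead of passing bad values on.
    """
    if value is None:
        return None
    expected = PARAM_TYPES.get(key)
    if expected is None:
        return None
    try:
        return expected(value)
    except (TypeError, ValueError):
        return None

print(convert_llm_param("temperature", "0.7"))  # → 0.7
print(convert_llm_param("max_tokens", None))    # → None
```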

Testing

  • Local tests pass
  • Type checking errors resolved
  • Improved error handling for parameter conversion
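The pytest.ini mentioned above likely pins VCR to playback-only mode so CI never records new cassettes. Assuming the pytest-recording plugin, that configuration could look roughly like this (a sketch, not the PR's actual file):

```ini
[pytest]
# Never hit the network in CI: replay recorded VCR cassettes only.
addopts = --record-mode=none
```

With `--record-mode=none`, a test whose cassette is missing fails fast instead of silently making a live API call.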

Link to Devin run: https://app.devin.ai/sessions/a3d1b11e91bf43b2b0f5d7090d26f825


🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add "(aside)" to your comment to have me ignore it.
  • Look at CI failures and help fix them

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@joaomdmoura
Collaborator

Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment

Overview

This pull request introduces several significant changes aimed at improving the handling of UserWarnings from the litellm library and managing LLM parameters effectively. Overall, the modifications enhance code clarity, suppress unnecessary log clutter, and ensure dependency management is consistent.

Key Improvements

1. Warning Suppression

  • Modification: Context managers have been implemented in src/crewai/llm.py, src/crewai/utilities/internal_instructor.py, and src/crewai/utilities/token_counter_callback.py to suppress UserWarnings.
  • Code Recommendation: Please ensure that the use of warnings follows this pattern for any new warnings that may arise:
import warnings
with warnings.catch_warnings():
    warnings.simplefilter("ignore", UserWarning)
    import litellm
    from litellm import get_supported_openai_params

2. Dependency Version Management

  • Modification: The litellm version has been updated in pyproject.toml and uv.lock from >=1.44.22 to >=1.56.4.
  • Code Recommendation: Maintain a consistent version specification across files to prevent discrepancies:
dependencies = [
    "litellm>=1.56.4",
]

3. Type Conversion & Parameter Handling

  • Modification: Enhancements made in src/crewai/agent.py for type validation and parameter filtering improve reliability.
  • Code Example: Implement safeguards around value conversions (convert_parameter is an assumed project helper):
from typing import Any, Optional, Tuple

def safe_convert_parameter(key: str, value: Any) -> Tuple[Any, Optional[str]]:
    """Convert a parameter, returning (value, error) instead of raising."""
    try:
        converted_value = convert_parameter(key, value)
        return converted_value, None
    except (ValueError, TypeError) as e:
        return None, f"Error converting {key}: {e}"

4. Testing Improvements

  • Modification: In tests/pytest.ini and tests/crew_test.py, improvements were made to the test configurations.
  • Code Recommendation: Ensure that parameterized tests are used for better coverage (assuming a convert_parameter helper that returns None for unknown keys):
import pytest

@pytest.mark.parametrize("param_key,param_value,expected", [
    ("temperature", "0.7", 0.7),
    ("invalid_param", "value", None),
])
def test_parameter_conversion(param_key, param_value, expected):
    result = convert_parameter(param_key, param_value)
    assert result == expected

Historical Context

In reviewing related pull requests that focused on dependency management and warning suppression, it’s essential to highlight the improvements made in prior iterations. The transition from litellm version 1.44.22 to 1.56.4 reflects lessons learned from previous PRs where warnings hindered development efficiency. Consistent dependency handling enhances the stability of our project.

Suggested Next Steps

  • Enhance Test Coverage: Including more comprehensive tests for edge cases could ensure robustness against future changes.
  • Documentation: Consider documenting the parameter handling methods clearly, especially for new developers onboarding to the project.

Conclusion

The changes in this PR are commendable as they address critical areas of concern within the codebase. Implementing the suggested improvements will further enhance code maintainability and user experience while adhering to best practices in dependency and error management.

I appreciate the thoroughness of the implementation and look forward to seeing the continued evolution of our code standards!

joaomdmoura and others added 4 commits December 31, 2024 18:41
- Add proper model name extraction in LLM class
- Handle optional parameters correctly in litellm calls
- Fix Agent constructor compatibility with BaseAgent
- Add token process utility for better tracking
- Clean up parameter handling in LLM class

Co-Authored-By: Joe Moura <[email protected]>
- Integrate latest changes from remote
- Keep LLM parameter handling improvements
- Maintain test fixes and token process utility

Co-Authored-By: Joe Moura <[email protected]>
devin-ai-integration bot and others added 3 commits January 1, 2025 20:59
- Add proper debug, info, warning, and error methods to Logger class
- Ensure warnings and errors are always shown regardless of verbose mode
- Fix token process initialization and tracking in Agent class
- Update TokenProcess import to use correct class from agent_builder utilities

Co-Authored-By: Joe Moura <[email protected]>
- Replace Optional[set[str]] with Union[set[str], None] in json methods
- Fix add_nodes_to_network call parameters in flow_visualizer
- Add __base__=BaseModel to create_model call in structured_tool
- Clean up imports in provider.py

Co-Authored-By: Joe Moura <[email protected]>
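The `__base__=BaseModel` change above refers to pydantic's dynamic model factory, which otherwise has an ambiguous return type for static checkers. A minimal illustration (the model and field names are made up for this sketch):

```python
from pydantic import BaseModel, create_model

# create_model defaults to BaseModel anyway, but passing __base__ explicitly
# gives type checkers a concrete base class for the generated model.
ArgsSchema = create_model(
    "ArgsSchema",
    __base__=BaseModel,
    query=(str, ...),   # required field
    limit=(int, 10),    # optional field with a default
)

instance = ArgsSchema(query="hello")
print(instance.limit)  # → 10
```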
@joaomdmoura joaomdmoura closed this Jan 2, 2025