fix: Improve type conversion for LLM parameters #1835
Conversation
Co-Authored-By: Joe Moura <[email protected]>
…s properly
Co-Authored-By: Joe Moura <[email protected]>
🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically: …
⚙️ Control Options: …
Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment

Overview

This pull request introduces several significant changes aimed at improving the handling of UserWarnings raised when importing litellm.

Key Improvements

1. Warning Suppression
```python
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore", UserWarning)
    import litellm
    from litellm import get_supported_openai_params
```
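As a small illustration (not from the PR) of why the context manager is used: the suppression applies only inside the with-block, so warnings raised elsewhere still surface.

```python
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore", UserWarning)
    warnings.warn("silenced inside the block", UserWarning)  # not shown

warnings.warn("still visible outside the block", UserWarning)  # shown
```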
2. Dependency Version Management

```toml
dependencies = [
    "litellm>=1.56.4",
]
```
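To sanity-check an environment against the new version floor, something like the following works (illustrative, not part of the PR):

```python
from importlib.metadata import version

# Prints the installed litellm version; it should satisfy >=1.56.4.
print(version("litellm"))
```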
3. Type Conversion & Parameter Handling

```python
from typing import Any, Optional, Tuple

def safe_convert_parameter(key: str, value: Any) -> Tuple[Any, Optional[str]]:
    # Returns (converted_value, None) on success,
    # or (None, error_message) when conversion fails.
    try:
        converted_value = convert_parameter(key, value)
        return converted_value, None
    except (ValueError, TypeError) as e:
        return None, f"Error converting {key}: {str(e)}"
```
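A sketch of how a caller might consume the (value, error) tuple — the params dict and loop below are illustrative, not taken from the PR:

```python
raw_params = {"temperature": "0.7", "max_tokens": "100"}
converted_params = {}

for key, value in raw_params.items():
    converted, error = safe_convert_parameter(key, value)
    if error is not None:
        print(error)  # surface the conversion problem instead of raising
    else:
        converted_params[key] = converted
```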
4. Testing Improvements

```python
import pytest

@pytest.mark.parametrize("param_key,param_value,expected", [
    ('temperature', '0.7', 0.7),      # numeric string coerced to float
    ('invalid_param', 'value', None), # unknown parameter yields None
])
def test_parameter_conversion(param_key, param_value, expected):
    result = convert_parameter(param_key, param_value)
    assert result == expected
```

Historical Context

In reviewing related pull requests that focused on dependency management and warning suppression, it's essential to highlight the improvements made in prior iterations. The transition from …

Suggested Next Steps
Conclusion

The changes in this PR are commendable, as they address critical areas of concern within the codebase. Implementing the suggested improvements will further enhance code maintainability and user experience while adhering to best practices in dependency and error management. I appreciate the thoroughness of the implementation and look forward to seeing the continued evolution of our code standards!
- Add proper model name extraction in LLM class
- Handle optional parameters correctly in litellm calls (see the sketch after this list)
- Fix Agent constructor compatibility with BaseAgent
- Add token process utility for better tracking
- Clean up parameter handling in LLM class

Co-Authored-By: Joe Moura <[email protected]>
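A minimal sketch of the optional-parameter handling this commit describes — the helper name and call shape are assumptions, not the PR's actual code:

```python
from typing import Any, Dict

def drop_unset_params(params: Dict[str, Any]) -> Dict[str, Any]:
    """Remove None-valued entries so unset optionals are not forwarded to litellm."""
    return {key: value for key, value in params.items() if value is not None}

# Example: max_tokens was never set, so it is omitted from the call.
call_params = drop_unset_params({"temperature": 0.7, "max_tokens": None})
assert call_params == {"temperature": 0.7}
```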
- Integrate latest changes from remote
- Keep LLM parameter handling improvements
- Maintain test fixes and token process utility

Co-Authored-By: Joe Moura <[email protected]>
Co-Authored-By: Joe Moura <[email protected]>
- Add proper debug, info, warning, and error methods to Logger class (see the sketch after this list)
- Ensure warnings and errors are always shown regardless of verbose mode
- Fix token process initialization and tracking in Agent class
- Update TokenProcess import to use correct class from agent_builder utilities

Co-Authored-By: Joe Moura <[email protected]>
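An illustrative reading of the Logger behavior described above — the class shape and method bodies are assumptions, not the actual implementation:

```python
class Logger:
    def __init__(self, verbose: bool = False):
        self.verbose = verbose

    def debug(self, message: str) -> None:
        if self.verbose:  # debug output only in verbose mode
            print(f"[DEBUG] {message}")

    def info(self, message: str) -> None:
        if self.verbose:  # info output only in verbose mode
            print(f"[INFO] {message}")

    def warning(self, message: str) -> None:
        print(f"[WARNING] {message}")  # always shown

    def error(self, message: str) -> None:
        print(f"[ERROR] {message}")  # always shown
```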
- Replace Optional[set[str]] with Union[set[str], None] in json methods
- Fix add_nodes_to_network call parameters in flow_visualizer
- Add __base__=BaseModel to create_model call in structured_tool (see the sketch after this list)
- Clean up imports in provider.py

Co-Authored-By: Joe Moura <[email protected]>
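For context on the __base__ change, this is how pydantic's create_model accepts an explicit base class — the model name and field here are made up for illustration:

```python
from pydantic import BaseModel, create_model

# Passing __base__ makes the generated model inherit from BaseModel explicitly.
ToolSchema = create_model(
    "ToolSchema",
    __base__=BaseModel,
    query=(str, ...),  # required string field (hypothetical)
)

assert issubclass(ToolSchema, BaseModel)
```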
…in LLM class
Co-Authored-By: Joe Moura <[email protected]>
Fix Type Conversion Issues in LLM Parameter Handling
Changes Made
Testing
Link to Devin run: https://app.devin.ai/sessions/a3d1b11e91bf43b2b0f5d7090d26f825