
Update model references from o3-mini to o4-mini and add Gemini models #1712


Merged
mrT23 merged 1 commit into main from tr/multi_model on Apr 19, 2025

Conversation

@mrT23 (Collaborator) commented Apr 19, 2025

PR Type

Enhancement, Documentation, Bug fix


Description

  • Updated model references from o3-mini to o4-mini.

  • Added support for new Gemini models (gemini-2.5-pro and gemini-2.5-flash).

  • Enhanced error handling for undefined models in MAX_TOKENS.

  • Updated documentation to reflect new model options and configurations.


Changes walkthrough 📝

Relevant files:

Enhancement
__init__.py (pr_agent/algo/__init__.py): Added new Gemini models to `MAX_TOKENS`
  • Added the new Gemini models (gemini-2.5-pro-preview-03-25 and gemini-2.5-flash-preview-04-17) to MAX_TOKENS; a hedged sketch of these entries follows the walkthrough.
  • +2/-0

Error handling
utils.py (pr_agent/algo/utils.py): Enhanced error handling for undefined models
  • Improved error handling for undefined models in MAX_TOKENS.
  • Added a logger error message for better debugging.
  • +1/-0

Documentation
pr_agent_pro.md (docs/docs/overview/pr_agent_pro.md): Updated model references and improved formatting
  • Updated model references from o3-mini to o4-mini.
  • Improved formatting and descriptions for model selection.
  • +10/-10

qodo_merge_models.md (docs/docs/usage-guide/qodo_merge_models.md): Updated model references and added Gemini configurations
  • Replaced o3-mini with o4-mini in model references.
  • Added configuration examples for the new Gemini models.
  • +23/-6

Configuration changes
configuration.toml (pr_agent/settings/configuration.toml): Updated default and fallback model configurations
  • Updated the default model to o4-mini.
  • Adjusted the fallback model to gpt-4.1.
  • +2/-2
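
For readers who want to see what the `MAX_TOKENS` change amounts to, here is a minimal sketch of the two new entries. The exact dictionary keys (for example, whether a provider prefix such as "gemini/" is used) and the token limits are assumptions based on the model names above and the roughly 1M-token context window advertised for the Gemini 2.5 previews, not a verbatim copy of the diff.

```python
# Sketch of the MAX_TOKENS additions in pr_agent/algo/__init__.py.
# Keys and limits are assumptions, not the verbatim diff: the real registry may
# prefix the keys with a provider name (e.g. "gemini/") and the limits may differ.
MAX_TOKENS = {
    # ... existing model entries ...
    "gemini-2.5-pro-preview-03-25": 1048576,    # assumed ~1M-token context window
    "gemini-2.5-flash-preview-04-17": 1048576,  # assumed ~1M-token context window
}
```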

Need help?
  • Type /help how to ... in the comments thread for any questions about Qodo Merge usage.
  • Check out the documentation for more information.
Contributor

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Model Inconsistency

The default model is set to o4-mini, but the PR description mentions updating from o3-mini to o4-mini. The file already had o4-mini as the default model before this change, which is inconsistent with the PR description.

model="o4-mini"
fallback_models=["gpt-4.1"]
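
For readers unfamiliar with the two settings above: the configured model is tried first, and the entries in fallback_models are tried in order if it fails. The sketch below is a generic illustration of that pattern, not pr-agent's actual retry code; run_with_fallbacks and its call_model parameter are invented for the example.

```python
from typing import Callable

# Generic illustration of a primary-model-plus-fallbacks call chain.
# Hypothetical code: run_with_fallbacks and call_model are not pr-agent APIs.
def run_with_fallbacks(
    prompt: str,
    model: str,
    fallback_models: list[str],
    call_model: Callable[[str, str], str],  # (model_name, prompt) -> completion text
) -> str:
    last_error: Exception | None = None
    for candidate in [model, *fallback_models]:  # e.g. ["o4-mini", "gpt-4.1"]
        try:
            return call_model(candidate, prompt)
        except Exception as err:
            last_error = err  # remember the failure and try the next candidate
    raise RuntimeError(f"All candidates failed: {[model, *fallback_models]}") from last_error
```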
Error Message Clarity

The added error message is helpful but could be more specific about what action is needed. Consider suggesting how to add the model to MAX_TOKENS or where to set custom_model_max_tokens.

get_logger().error(f"Model {model} is not defined in MAX_TOKENS in ./pr_agent/algo/__init__.py and no custom_model_max_tokens is set")
raise Exception(f"Ensure {model} is defined in MAX_TOKENS in ./pr_agent/algo/__init__.py or set a positive value for it in config.custom_model_max_tokens")
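
To make this review point concrete, here is a minimal sketch of the lookup the new error message sits in. The function name, signature, and use of the standard logging module are assumptions chosen to keep the example self-contained; only the message wording mirrors the diff, with the hardcoded path dropped as the code suggestion below also recommends.

```python
import logging

logger = logging.getLogger(__name__)

# Sketch of the token-limit lookup around the new error message
# (assumed structure, not a verbatim copy of pr_agent/algo/utils.py).
def resolve_max_tokens(
    model: str,
    max_tokens_table: dict[str, int],
    custom_model_max_tokens: int = -1,
) -> int:
    if model in max_tokens_table:
        return max_tokens_table[model]
    if custom_model_max_tokens > 0:
        # A user-supplied limit covers models that are not listed in the table.
        return custom_model_max_tokens
    # Log first so the failure is visible in the agent logs, then fail loudly.
    logger.error(f"Model {model} is not defined in MAX_TOKENS and no custom_model_max_tokens is set")
    raise Exception(
        f"Ensure {model} is defined in MAX_TOKENS or set a positive value for it in config.custom_model_max_tokens"
    )
```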

Contributor

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: General
Suggestion: Remove hardcoded file path

The error message is too specific about the file path. Use a relative import path that matches the actual module structure rather than hardcoding the file path with "./pr_agent/algo/__init__.py", which might not be accurate in all deployment environments.

pr_agent/algo/utils.py [881]

-get_logger().error(f"Model {model} is not defined in MAX_TOKENS in ./pr_agent/algo/__init__.py and no custom_model_max_tokens is set")
+get_logger().error(f"Model {model} is not defined in MAX_TOKENS and no custom_model_max_tokens is set")
Suggestion importance [1-10]: 7

Why: The suggestion correctly identifies that hardcoded file paths can cause issues in different deployment environments. Removing the specific path reference makes the error message more maintainable and less likely to become outdated if the file structure changes.

Impact: Medium

• Author self-review: I have reviewed the PR code suggestions, and addressed the relevant ones.

@mrT23 mrT23 merged commit baf361f into main Apr 19, 2025
2 checks passed
@mrT23 mrT23 deleted the tr/multi_model branch April 19, 2025 06:29