
Replace default o3-mini with o4-mini #1710


Merged · 1 commit · Apr 19, 2025
2 changes: 1 addition & 1 deletion docs/docs/chrome-extension/index.md
@@ -2,7 +2,7 @@

With a single-click installation you will gain access to a context-aware chat on your pull requests code, a toolbar extension with multiple AI feedbacks, Qodo Merge filters, and additional abilities.

-The extension is powered by top code models like Claude 3.7 Sonnet and o3-mini. All the extension's features are free to use on public repositories.
+The extension is powered by top code models like Claude 3.7 Sonnet and o4-mini. All the extension's features are free to use on public repositories.

For private repositories, you will need to install [Qodo Merge](https://github.com/apps/qodo-merge-pro){:target="_blank"} in addition to the extension (Quick GitHub app setup with a 14-day free trial. No credit card needed).
For a demonstration of how to install Qodo Merge and use it with the Chrome extension, please refer to the tutorial video at the provided [link](https://codium.ai/images/pr_agent/private_repos.mp4){:target="_blank"}.
2 changes: 1 addition & 1 deletion docs/docs/installation/locally.md
@@ -1,6 +1,6 @@
To run PR-Agent locally, you first need to acquire two keys:

-1. An OpenAI key from [here](https://platform.openai.com/api-keys){:target="_blank"}, with access to GPT-4 and o3-mini (or a key for other [language models](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/), if you prefer).
+1. An OpenAI key from [here](https://platform.openai.com/api-keys){:target="_blank"}, with access to GPT-4 and o4-mini (or a key for other [language models](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/), if you prefer).
2. A personal access token from your Git platform (GitHub, GitLab, BitBucket) with repo scope. GitHub token, for example, can be issued from [here](https://github.com/settings/tokens){:target="_blank"}
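
For reference, a minimal sketch of where these two keys typically end up when running locally. This is illustrative only: the `.secrets.toml` file name and the section/field names are assumptions based on the project's secrets template, not part of this diff.

```toml
# .secrets.toml (illustrative; file location and field names are assumptions)

[openai]
key = "sk-..."            # the OpenAI API key from step 1

[github]
user_token = "ghp_..."    # the personal access token from step 2 (repo scope)
```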

## Using Docker image
4 changes: 2 additions & 2 deletions docs/docs/usage-guide/changing_a_model.md
@@ -1,7 +1,7 @@
## Changing a model in PR-Agent

See [here](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/algo/__init__.py) for a list of available models.
-To use a different model than the default (o3-mini), you need to edit in the [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L2) the fields:
+To use a different model than the default (o4-mini), you need to edit in the [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L2) the fields:

```toml
[config]
@@ -311,7 +311,7 @@ To bypass chat templates and temperature controls, set `config.custom_reasoning_
reasoning_effort = "medium" # "low", "medium", "high"
```
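
Putting the two edited fields together, here is a minimal sketch of what the `[config]` section looks like. The values mirror the defaults visible in the `configuration.toml` diff further down; they are placeholders, not a recommendation for a particular model.

```toml
[config]
# primary model used for PR-Agent tasks
model = "o4-mini"
# model(s) tried if the primary model fails
fallback_models = ["gpt-4o-2024-11-20"]
```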

-With the OpenAI models that support reasoning effort (eg: o3-mini), you can specify its reasoning effort via `config` section. The default value is `medium`. You can change it to `high` or `low` based on your usage.
+With the OpenAI models that support reasoning effort (eg: o4-mini), you can specify its reasoning effort via `config` section. The default value is `medium`. You can change it to `high` or `low` based on your usage.
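
For example, a sketch of raising the effort, assuming the `reasoning_effort` field name from the snippet above (whether a given model honors the setting depends on the provider):

```toml
[config]
reasoning_effort = "high"  # accepted values per the docs: "low", "medium", "high"
```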

### Anthropic models

2 changes: 1 addition & 1 deletion pr_agent/settings/configuration.toml
@@ -6,7 +6,7 @@

[config]
# models
model="o3-mini"
model="o4-mini"
fallback_models=["gpt-4o-2024-11-20"]
Comment on lines -9 to 10
Collaborator:

/implement change fallback model to 'gpt-4.1'

Contributor:

Code Implementation 🛠️

Implementation: Update the fallback model from "gpt-4o-2024-11-20" to "gpt-4.1" as requested.

Suggested change
model="o3-mini"
model="o4-mini"
fallback_models=["gpt-4o-2024-11-20"]
model="o4-mini"
fallback_models=["gpt-4.1"]

See review comment here

Contributor (Author):

Maybe leaving it for another change would be cleaner 😆

mrT23 (Collaborator) · Apr 18, 2025:

If we are updating to more modern models, I do want to remove 'gpt-4o' from the config.
It looks outdated and unmaintained.
And it's just the fallback model (which should not be called in normal usage).

Contributor (Author):

Thanks for the suggestion. I see the importance of updating to modern models. Would you mind if I submit a separate PR for this change? It would help keep the commits cleaner and make it easier for maintainers to track changes. What do you think?

#model_weak="gpt-4o-mini-2024-07-18" # optional, a weaker model to use for some easier tasks
# CLI