fix: Support OPENAI_BASE_URL and MODEL_NAME for Ollama integration #526
MervinPraison merged 3 commits into main
Conversation
… Ollama integration

- Fixed auto.py to check OPENAI_BASE_URL before OPENAI_API_BASE
- Added support for MODEL_NAME environment variable (community requested)
- Added support for OLLAMA_API_BASE environment variable
- Applied same fixes to cli.py for consistency
- Added comprehensive test script to verify the fix

Fixes #394 - Users can now use standard OpenAI SDK environment variables:
- OPENAI_BASE_URL=http://localhost:11434/v1
- MODEL_NAME=deepseek-r1:14b
- OPENAI_API_KEY=NA

Co-authored-by: MervinPraison <MervinPraison@users.noreply.github.com>
The test file was created to verify the Ollama integration fix works correctly and is no longer needed in the repository.
…riables

- Added standard OpenAI SDK environment variables (OPENAI_BASE_URL, MODEL_NAME)
- Documented alternative patterns (OLLAMA_API_BASE) from community feedback
- Added complete usage example with praisonai --init
- Maintained backward compatibility documentation for legacy patterns

Related to #394 - Now users have clear documentation on how to properly configure Ollama with the fixed environment variable support.
Caution: Review failed. The pull request is closed.

Walkthrough

The changes introduce improved environment variable handling for model and API base URL configuration in PraisonAI, supporting multiple naming patterns and fallback logic. Documentation is updated to clarify configuration options and usage with Ollama, while internal code in both the CLI and auto modules now prioritizes various environment variables for flexible model selection.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as PraisonAI CLI/Auto
    participant Env as Environment
    User->>CLI: Start initialization
    CLI->>Env: Retrieve MODEL_NAME / OPENAI_MODEL_NAME
    CLI->>Env: Retrieve OPENAI_BASE_URL / OPENAI_API_BASE / OLLAMA_API_BASE
    CLI->>Env: Retrieve OPENAI_API_KEY
    CLI->>CLI: Apply fallback logic for model and base URL
    CLI->>User: Complete initialization with resolved config
```
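The fallback order in the diagram can be sketched in Python. `resolve_config` is a hypothetical helper written for illustration; only the variable names and their precedence come from the PR:

```python
# Sketch of the documented precedence; the helper itself is not from the codebase.
def resolve_config(env):
    """Return (model_name, base_url) using the PR's fallback order."""
    model_name = env.get("MODEL_NAME") or env.get("OPENAI_MODEL_NAME") or "gpt-4o"
    base_url = (
        env.get("OPENAI_BASE_URL")
        or env.get("OPENAI_API_BASE")
        or env.get("OLLAMA_API_BASE")
        or "https://api.openai.com/v1"
    )
    return model_name, base_url

# With nothing set, the defaults apply; otherwise MODEL_NAME and
# OPENAI_BASE_URL take priority over the alternative variables.
print(resolve_config({}))  # → ('gpt-4o', 'https://api.openai.com/v1')
print(resolve_config({
    "MODEL_NAME": "deepseek-r1:14b",
    "OPENAI_MODEL_NAME": "ignored",
    "OPENAI_BASE_URL": "http://localhost:11434/v1",
    "OPENAI_API_BASE": "http://example.invalid/v1",
}))  # → ('deepseek-r1:14b', 'http://localhost:11434/v1')
```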
Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team, Gemini here, providing a summary for this pull request. This PR aims to significantly improve the integration with local Ollama models by making the application more flexible in how it detects the API base URL and model name. It introduces support for standard OpenAI SDK environment variables (OPENAI_BASE_URL, MODEL_NAME) and also adds support for a community-recommended pattern (OLLAMA_API_BASE). These changes are applied to both the auto.py and cli.py entry points, ensuring consistency. Additionally, the documentation for Ollama integration has been updated to reflect these new supported patterns and provide clear usage examples.
Highlights
- **Environment Variable Flexibility:** The core change is enhancing the application's ability to read API configuration from multiple environment variables, prioritizing standard OpenAI SDK variables (`OPENAI_BASE_URL`, `MODEL_NAME`) while also supporting others (`OPENAI_API_BASE`, `OPENAI_MODEL_NAME`, `OLLAMA_API_BASE`).
- **Ollama Integration Improvement:** This PR specifically targets better support for local Ollama instances by aligning with common environment variable practices used by the OpenAI SDK and the broader community.
- **Code Consistency:** The logic for handling these environment variables is applied consistently across both the `auto.py` and `cli.py` files.
- **Documentation Update:** The Ollama documentation (`docs/models/ollama.mdx`) has been updated to clearly explain the new recommended and alternative environment variable configurations and provide detailed usage instructions.
Changelog
- `docs/models/ollama.mdx`
  - Expanded the Ollama integration documentation to include multiple supported environment variable patterns: standard OpenAI SDK (`OPENAI_BASE_URL`, `MODEL_NAME`), community recommended (`OLLAMA_API_BASE`, `MODEL_NAME`), and legacy (`OPENAI_API_BASE`, `OPENAI_MODEL_NAME`).
  - Added detailed usage instructions for running with `praisonai --init`, including steps for starting Ollama, pulling a model, setting variables, and initializing agents.
- `src/praisonai/praisonai/auto.py`
  - Modified the `__init__` method to retrieve the model name by checking `MODEL_NAME` first, then falling back to `OPENAI_MODEL_NAME`, with a default of `gpt-4o` (lines 95-96).
  - Modified the `__init__` method to retrieve the base URL by checking `OPENAI_BASE_URL` first, then `OPENAI_API_BASE`, then `OLLAMA_API_BASE`, with a default of `https://api.openai.com/v1` (lines 98-104).
  - Updated the `config_list` dictionary to use the newly determined `model_name` and `base_url` variables (lines 108-109).
- `src/praisonai/praisonai/cli.py`
  - Modified the `__init__` method to retrieve the model name by checking `MODEL_NAME` first, then falling back to `OPENAI_MODEL_NAME`, with a default of `gpt-4o` (lines 118-119).
  - Modified the `__init__` method to retrieve the base URL by checking `OPENAI_BASE_URL` first, then `OPENAI_API_BASE`, then `OLLAMA_API_BASE`, with a default of `https://api.openai.com/v1` (lines 121-127).
  - Updated the `config_list` dictionary to use the newly determined `model_name` and `base_url` variables (lines 132-133).
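The documented flow (start Ollama, pull a model, set variables, initialize agents) could look like the following session. The model name and task text are placeholders, and the commands assume local installs of Ollama and PraisonAI:

```shell
# 1. Start the Ollama server (usually in its own terminal)
ollama serve &

# 2. Pull a model to serve locally (model name is an example)
ollama pull deepseek-r1:14b

# 3. Export the standard OpenAI SDK variables the PR now honors
export OPENAI_BASE_URL=http://localhost:11434/v1
export MODEL_NAME=deepseek-r1:14b
export OPENAI_API_KEY=NA   # Ollama ignores the key, but one must be set

# 4. Initialize agents (task text is illustrative)
praisonai --init "write a short report on local LLM tooling"
```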
Config keys abound,
Base URL, model found.
Local AI runs.
Code Review
This pull request significantly improves the Ollama integration by adding support for more standard and flexible environment variable configurations, which is a great enhancement for usability. The documentation updates are also very helpful.
I've identified a key area for improvement regarding how OLLAMA_API_BASE is processed to ensure it correctly points to the /v1 API endpoint, and a suggestion for future maintainability regarding duplicated logic. Addressing the OLLAMA_API_BASE handling will be important for the reliability of this feature when using the community-recommended pattern.
Summary of Findings
- **OLLAMA_API_BASE Path Handling:** The `OLLAMA_API_BASE` environment variable, if set to a URL like `http://localhost:11434` (as documented in one pattern), might not include the required `/v1` suffix. The current code would use this URL as-is, potentially leading to connection errors with the OpenAI SDK, which expects the full versioned path. This affects both `auto.py` and `cli.py`.
- **Code Duplication:** The logic for determining `model_name` and `base_url` from environment variables is duplicated in `src/praisonai/praisonai/auto.py` and `src/praisonai/praisonai/cli.py`. Consolidating this into a shared utility function could improve maintainability. (Severity: medium; not commented inline due to settings, but noted for future improvement.)
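The consolidation the review suggests might look like the following: a single helper that both `auto.py` and `cli.py` could import. The function name and its placement are hypothetical, not part of the PR:

```python
import os

def resolve_openai_config():
    """Resolve (model_name, base_url) from the environment.

    A sketch of a shared utility using the PR's fallback order; the real
    code currently duplicates this logic in auto.py and cli.py.
    """
    model_name = (
        os.environ.get("MODEL_NAME")
        or os.environ.get("OPENAI_MODEL_NAME")
        or "gpt-4o"
    )
    base_url = (
        os.environ.get("OPENAI_BASE_URL")
        or os.environ.get("OPENAI_API_BASE")
        or os.environ.get("OLLAMA_API_BASE")
        or "https://api.openai.com/v1"
    )
    return model_name, base_url
```

Both call sites could then build their `config_list` entries from the returned pair, so a future change to the precedence order happens in one place.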
Merge Readiness
The pull request introduces valuable flexibility for Ollama integration. However, there's a high-severity issue concerning the handling of the OLLAMA_API_BASE variable that needs to be addressed to ensure correct functionality when users follow the documented 'Community recommended pattern'. I've provided suggestions for this in auto.py and cli.py.
Once these changes are made, the PR should be in a much better state for merging. As I am an AI, I am not authorized to approve pull requests; please ensure further review and approval from team members before merging.
```python
base_url = (
    os.environ.get("OPENAI_BASE_URL") or
    os.environ.get("OPENAI_API_BASE") or
    os.environ.get("OLLAMA_API_BASE", "https://api.openai.com/v1")
)
```
The current logic for base_url correctly prioritizes environment variables. However, if OLLAMA_API_BASE is set (e.g., to http://localhost:11434 as per the 'Community recommended pattern' in the documentation) and does not include the /v1 suffix, this value will be used as is. The OpenAI SDK typically requires the base_url to point to the versioned API endpoint (e.g., http://localhost:11434/v1).
To ensure this works correctly, especially with the documented OLLAMA_API_BASE pattern, would it be better to process OLLAMA_API_BASE to ensure it includes /v1 if it's the chosen variable and doesn't already have it? This would make the integration more robust.
```python
# Process OLLAMA_API_BASE to ensure it includes /v1 if set
ollama_api_base_env = os.environ.get("OLLAMA_API_BASE")
processed_ollama_api_base = None
if ollama_api_base_env:
    # Ensure the URL is correctly formatted for /v1 endpoint
    temp_url = ollama_api_base_env.rstrip('/')
    if not temp_url.endswith('/v1'):
        processed_ollama_api_base = temp_url + '/v1'
    else:
        processed_ollama_api_base = temp_url
base_url = (
    os.environ.get("OPENAI_BASE_URL") or
    os.environ.get("OPENAI_API_BASE") or
    processed_ollama_api_base or  # Use the processed value
    "https://api.openai.com/v1"  # Default if all others are None
)
```

The same logic appears in `src/praisonai/praisonai/cli.py`:

```python
base_url = (
    os.environ.get("OPENAI_BASE_URL") or
    os.environ.get("OPENAI_API_BASE") or
    os.environ.get("OLLAMA_API_BASE", "https://api.openai.com/v1")
)
```
Similar to the comment in auto.py, the handling of OLLAMA_API_BASE here could lead to issues if it's provided without the /v1 suffix (e.g., http://localhost:11434). The OpenAI SDK generally expects the full versioned API path.
To ensure robustness, especially aligning with the 'Community recommended pattern' in the documentation for OLLAMA_API_BASE, should we process this variable to append /v1 if it's chosen and missing the suffix?
```diff
-base_url = (
-    os.environ.get("OPENAI_BASE_URL") or
-    os.environ.get("OPENAI_API_BASE") or
-    os.environ.get("OLLAMA_API_BASE", "https://api.openai.com/v1")
-)
+# Process OLLAMA_API_BASE to ensure it includes /v1 if set
+ollama_api_base_env = os.environ.get("OLLAMA_API_BASE")
+processed_ollama_api_base = None
+if ollama_api_base_env:
+    # Ensure the URL is correctly formatted for /v1 endpoint
+    temp_url = ollama_api_base_env.rstrip('/')
+    if not temp_url.endswith('/v1'):
+        processed_ollama_api_base = temp_url + '/v1'
+    else:
+        processed_ollama_api_base = temp_url
+base_url = (
+    os.environ.get("OPENAI_BASE_URL") or
+    os.environ.get("OPENAI_API_BASE") or
+    processed_ollama_api_base or  # Use the processed value
+    "https://api.openai.com/v1"  # Default if all others are None
+)
```
**Alternative Environment Variable Patterns:**

```bash
# Community recommended pattern
export OLLAMA_API_BASE=http://localhost:11434
```
It's good that you've documented the OLLAMA_API_BASE pattern like this (http://localhost:11434).
Could you confirm if the intention is that users provide the base URL without the /v1 suffix for this specific variable?
If so, the Python code that consumes this variable needs to ensure /v1 is appended before use, as the OpenAI client typically expects the full path to the versioned API endpoint (e.g., http://localhost:11434/v1). I've added specific suggestions in auto.py and cli.py to handle this to ensure this documentation aligns with robust code behavior.
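The normalization the reviewer asks for can be captured in a small helper. This is a sketch of the suggested behavior (the function name is illustrative, not code from the PR):

```python
def ensure_v1(url: str) -> str:
    """Append '/v1' to an API base URL when it is missing, per the review suggestion."""
    url = url.rstrip('/')  # tolerate trailing slashes
    return url if url.endswith('/v1') else url + '/v1'

# Bare host, trailing slash, and already-suffixed URLs all normalize the same way.
print(ensure_v1("http://localhost:11434"))      # → http://localhost:11434/v1
print(ensure_v1("http://localhost:11434/"))     # → http://localhost:11434/v1
print(ensure_v1("http://localhost:11434/v1/"))  # → http://localhost:11434/v1
```

With a helper like this, the documentation's `OLLAMA_API_BASE=http://localhost:11434` pattern and the `/v1`-suffixed form behave identically.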
…394-20250528_143330 fix: Support OPENAI_BASE_URL and MODEL_NAME for Ollama integration
Fixes #394 - PraisonAI now properly supports local Ollama models
Changes
Testing
Users can now use standard OpenAI SDK environment variables:
- OPENAI_BASE_URL=http://localhost:11434/v1
- MODEL_NAME=deepseek-r1:14b
- OPENAI_API_KEY=NA
Generated with Claude Code
Summary by CodeRabbit