
fix: Support OPENAI_BASE_URL and MODEL_NAME for Ollama integration #526

Merged

MervinPraison merged 3 commits into main from claude/issue-394-20250528_143330 on May 28, 2025

Conversation

@MervinPraison (Owner) commented May 28, 2025

Fixes #394 - PraisonAI now properly supports local Ollama models

Changes

  • Fixed auto.py to check OPENAI_BASE_URL before OPENAI_API_BASE
  • Added support for MODEL_NAME environment variable
  • Added support for OLLAMA_API_BASE environment variable
  • Applied same fixes to cli.py for consistency
  • Updated Ollama documentation with all supported patterns
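The fallback chains described in the changes above can be sketched as follows (a minimal illustration of the documented precedence and defaults, not the exact code merged into auto.py/cli.py):

```python
import os

def resolve_model_and_base_url():
    """Resolve model name and base URL from environment variables,
    mirroring the fallback order described in this PR."""
    # MODEL_NAME takes priority over OPENAI_MODEL_NAME; gpt-4o is the default.
    model_name = (
        os.environ.get("MODEL_NAME")
        or os.environ.get("OPENAI_MODEL_NAME")
        or "gpt-4o"
    )
    # OPENAI_BASE_URL (the standard OpenAI SDK variable) is checked before
    # the legacy OPENAI_API_BASE, then the Ollama-specific OLLAMA_API_BASE.
    base_url = (
        os.environ.get("OPENAI_BASE_URL")
        or os.environ.get("OPENAI_API_BASE")
        or os.environ.get("OLLAMA_API_BASE")
        or "https://api.openai.com/v1"
    )
    return model_name, base_url
```

With no relevant variables set, this resolves to the OpenAI defaults; setting `OPENAI_BASE_URL=http://localhost:11434/v1` and `MODEL_NAME=deepseek-r1:14b` redirects resolution to a local Ollama instance.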

Testing

Users can now use standard OpenAI SDK environment variables:

```bash
export OPENAI_BASE_URL=http://localhost:11434/v1
export MODEL_NAME=deepseek-r1:14b
export OPENAI_API_KEY=NA
praisonai --init "Create a story"
```

Generated with Claude Code

Summary by CodeRabbit

  • Documentation
    • Expanded and clarified instructions for configuring Ollama integration, including detailed environment variable examples and usage scenarios.
  • New Features
    • Enhanced support for multiple environment variable patterns when configuring model name and base URL, improving compatibility with various setups.

claude Bot and others added 3 commits May 28, 2025 14:38
… Ollama integration

- Fixed auto.py to check OPENAI_BASE_URL before OPENAI_API_BASE
- Added support for MODEL_NAME environment variable (community requested)
- Added support for OLLAMA_API_BASE environment variable  
- Applied same fixes to cli.py for consistency
- Added comprehensive test script to verify the fix

Fixes #394 - Users can now use standard OpenAI SDK environment variables:
- OPENAI_BASE_URL=http://localhost:11434/v1 
- MODEL_NAME=deepseek-r1:14b
- OPENAI_API_KEY=NA

Co-authored-by: MervinPraison <MervinPraison@users.noreply.github.com>
The test file was created to verify the Ollama integration fix works correctly
and is no longer needed in the repository.
…riables

- Added standard OpenAI SDK environment variables (OPENAI_BASE_URL, MODEL_NAME)
- Documented alternative patterns (OLLAMA_API_BASE) from community feedback
- Added complete usage example with praisonai --init
- Maintained backward compatibility documentation for legacy patterns

Related to #394 - Now users have clear documentation on how to properly
configure Ollama with the fixed environment variable support.
coderabbitai Bot (Contributor) commented May 28, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

The changes introduce improved environment variable handling for model and API base URL configuration in PraisonAI, supporting multiple naming patterns and fallback logic. Documentation is updated to clarify configuration options and usage with Ollama, while internal code in both the CLI and auto modules now prioritizes various environment variables for flexible model selection.

Changes

| Files/Paths | Change Summary |
| --- | --- |
| docs/models/ollama.mdx | Expanded documentation with detailed environment variable setup examples, usage patterns, and instructions for Ollama integration. |
| src/praisonai/praisonai/auto.py, src/praisonai/praisonai/cli.py | Updated environment variable resolution logic for model name and base URL, supporting multiple variable names and fallback order. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant PraisonAI CLI/Auto
    participant Environment

    User->>PraisonAI CLI/Auto: Start initialization
    PraisonAI CLI/Auto->>Environment: Retrieve MODEL_NAME / OPENAI_MODEL_NAME
    PraisonAI CLI/Auto->>Environment: Retrieve OPENAI_BASE_URL / OPENAI_API_BASE / OLLAMA_API_BASE
    PraisonAI CLI/Auto->>Environment: Retrieve OPENAI_API_KEY
    PraisonAI CLI/Auto->>PraisonAI CLI/Auto: Apply fallback logic for model and base URL
    PraisonAI CLI/Auto->>User: Complete initialization with resolved config
```

Assessment against linked issues

| Objective | Addressed | Explanation |
| --- | --- | --- |
| Enable PraisonAI to use a local Ollama model endpoint for chat completions instead of defaulting to OpenAI API, respecting provided environment variables and configuration (#394) | | |

Possibly related PRs

  • Develop #507: Both PRs address environment variable defaults and fallback logic, though this one updates runtime code while Develop #507 focuses on CI workflows.

Poem

In burrows deep, where configs dwell,
The models now can choose quite well—
Ollama local, OpenAI far,
Just set your vars, no matter where you are!
With fallback logic, so robust and neat,
PraisonAI now hops to any beat.
🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9f3f7aa and 0d5f9c3.

📒 Files selected for processing (3)
  • docs/models/ollama.mdx (1 hunks)
  • src/praisonai/praisonai/auto.py (1 hunks)
  • src/praisonai/praisonai/cli.py (1 hunks)

@gemini-code-assist gemini-code-assist Bot left a comment


Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello team, Gemini here, providing a summary for this pull request. This PR aims to significantly improve the integration with local Ollama models by making the application more flexible in how it detects the API base URL and model name. It introduces support for standard OpenAI SDK environment variables (OPENAI_BASE_URL, MODEL_NAME) and also adds support for a community-recommended pattern (OLLAMA_API_BASE). These changes are applied to both the auto.py and cli.py entry points, ensuring consistency. Additionally, the documentation for Ollama integration has been updated to reflect these new supported patterns and provide clear usage examples.

Highlights

  • Environment Variable Flexibility: The core change is enhancing the application's ability to read API configuration from multiple environment variables, prioritizing standard OpenAI SDK variables (OPENAI_BASE_URL, MODEL_NAME) while also supporting others (OPENAI_API_BASE, OPENAI_MODEL_NAME, OLLAMA_API_BASE).
  • Ollama Integration Improvement: This PR specifically targets better support for local Ollama instances by aligning with common environment variable practices used by the OpenAI SDK and the broader community.
  • Code Consistency: The logic for handling these environment variables is applied consistently across both the auto.py and cli.py files.
  • Documentation Update: The Ollama documentation (docs/models/ollama.mdx) has been updated to clearly explain the new recommended and alternative environment variable configurations and provide detailed usage instructions.

Changelog

Click here to see the changelog
  • docs/models/ollama.mdx
    • Expanded the Ollama integration documentation to include multiple supported environment variable patterns: standard OpenAI SDK (OPENAI_BASE_URL, MODEL_NAME), community recommended (OLLAMA_API_BASE, MODEL_NAME), and legacy (OPENAI_API_BASE, OPENAI_MODEL_NAME).
    • Added detailed usage instructions for running with praisonai --init, including steps for starting Ollama, pulling a model, setting variables, and initializing agents.
  • src/praisonai/praisonai/auto.py
    • Modified the __init__ method to retrieve the model name by checking MODEL_NAME first, then falling back to OPENAI_MODEL_NAME, with a default of gpt-4o (lines 95-96).
    • Modified the __init__ method to retrieve the base URL by checking OPENAI_BASE_URL first, then OPENAI_API_BASE, then OLLAMA_API_BASE, with a default of https://api.openai.com/v1 (lines 98-104).
    • Updated the config_list dictionary to use the newly determined model_name and base_url variables (lines 108-109).
  • src/praisonai/praisonai/cli.py
    • Modified the __init__ method to retrieve the model name by checking MODEL_NAME first, then falling back to OPENAI_MODEL_NAME, with a default of gpt-4o (lines 118-119).
    • Modified the __init__ method to retrieve the base URL by checking OPENAI_BASE_URL first, then OPENAI_API_BASE, then OLLAMA_API_BASE, with a default of https://api.openai.com/v1 (lines 121-127).
    • Updated the config_list dictionary to use the newly determined model_name and base_url variables (lines 132-133).


Config keys abound,
Base URL, model found.
Local AI runs.


@MervinPraison MervinPraison merged commit b9a5784 into main May 28, 2025
7 of 9 checks passed

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request significantly improves the Ollama integration by adding support for more standard and flexible environment variable configurations, which is a great enhancement for usability. The documentation updates are also very helpful.

I've identified a key area for improvement regarding how OLLAMA_API_BASE is processed to ensure it correctly points to the /v1 API endpoint, and a suggestion for future maintainability regarding duplicated logic. Addressing the OLLAMA_API_BASE handling will be important for the reliability of this feature when using the community-recommended pattern.

Summary of Findings

  • OLLAMA_API_BASE Path Handling: The OLLAMA_API_BASE environment variable, if set to a URL like http://localhost:11434 (as documented in one pattern), might not include the required /v1 suffix. The current code would use this URL as-is, potentially leading to connection errors with the OpenAI SDK which expects the full versioned path. This affects both auto.py and cli.py.
  • Code Duplication: The logic for determining model_name and base_url from environment variables is duplicated in src/praisonai/praisonai/auto.py and src/praisonai/praisonai/cli.py. Consolidating this into a shared utility function could improve maintainability. (Severity: medium, not commented inline due to settings, but noted for future improvement).
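The duplication noted above could be addressed with a shared helper along these lines. This is a hypothetical sketch: `resolve_llm_env` and `_ensure_v1` are not existing PraisonAI functions, and the sketch folds in the /v1 normalization this review suggests alongside the fallback chains:

```python
import os

def _ensure_v1(url):
    """Append /v1 if missing, tolerating a trailing slash."""
    url = url.rstrip("/")
    return url if url.endswith("/v1") else url + "/v1"

def resolve_llm_env(default_model="gpt-4o",
                    default_base_url="https://api.openai.com/v1"):
    """Hypothetical shared helper that auto.py and cli.py could both
    import instead of duplicating the same fallback chains."""
    model = (
        os.environ.get("MODEL_NAME")
        or os.environ.get("OPENAI_MODEL_NAME")
        or default_model
    )
    ollama_base = os.environ.get("OLLAMA_API_BASE")
    base_url = (
        os.environ.get("OPENAI_BASE_URL")
        or os.environ.get("OPENAI_API_BASE")
        # Normalize the Ollama-specific variable to the versioned endpoint.
        or (_ensure_v1(ollama_base) if ollama_base else None)
        or default_base_url
    )
    return {"model": model, "base_url": base_url}
```

Consolidating the logic this way would also guarantee that any future variable or normalization change lands in both entry points at once.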

Merge Readiness

The pull request introduces valuable flexibility for Ollama integration. However, there's a high-severity issue concerning the handling of the OLLAMA_API_BASE variable that needs to be addressed to ensure correct functionality when users follow the documented 'Community recommended pattern'. I've provided suggestions for this in auto.py and cli.py.

Once these changes are made, the PR should be in a much better state for merging. As I am an AI, I am not authorized to approve pull requests; please ensure further review and approval from team members before merging.

Comment on lines +100 to +104 of `src/praisonai/praisonai/auto.py`:

```python
base_url = (
    os.environ.get("OPENAI_BASE_URL") or
    os.environ.get("OPENAI_API_BASE") or
    os.environ.get("OLLAMA_API_BASE", "https://api.openai.com/v1")
)
```
Severity: high

The current logic for base_url correctly prioritizes environment variables. However, if OLLAMA_API_BASE is set (e.g., to http://localhost:11434 as per the 'Community recommended pattern' in the documentation) and does not include the /v1 suffix, this value will be used as is. The OpenAI SDK typically requires the base_url to point to the versioned API endpoint (e.g., http://localhost:11434/v1).

To ensure this works correctly, especially with the documented OLLAMA_API_BASE pattern, would it be better to process OLLAMA_API_BASE to ensure it includes /v1 if it's the chosen variable and doesn't already have it? This would make the integration more robust.

```python
# Process OLLAMA_API_BASE to ensure it includes /v1 if set
ollama_api_base_env = os.environ.get("OLLAMA_API_BASE")
processed_ollama_api_base = None
if ollama_api_base_env:
    # Ensure the URL is correctly formatted for /v1 endpoint
    temp_url = ollama_api_base_env.rstrip('/')
    if not temp_url.endswith('/v1'):
        processed_ollama_api_base = temp_url + '/v1'
    else:
        processed_ollama_api_base = temp_url

base_url = (
    os.environ.get("OPENAI_BASE_URL") or
    os.environ.get("OPENAI_API_BASE") or
    processed_ollama_api_base or  # Use the processed value
    "https://api.openai.com/v1"   # Default if all others are None
)
```

Comment on lines +123 to +127 of `src/praisonai/praisonai/cli.py`:

```python
base_url = (
    os.environ.get("OPENAI_BASE_URL") or
    os.environ.get("OPENAI_API_BASE") or
    os.environ.get("OLLAMA_API_BASE", "https://api.openai.com/v1")
)
```
Severity: high

Similar to the comment in auto.py, the handling of OLLAMA_API_BASE here could lead to issues if it's provided without the /v1 suffix (e.g., http://localhost:11434). The OpenAI SDK generally expects the full versioned API path.

To ensure robustness, especially aligning with the 'Community recommended pattern' in the documentation for OLLAMA_API_BASE, should we process this variable to append /v1 if it's chosen and missing the suffix?

Suggested change:

```python
# Before
base_url = (
    os.environ.get("OPENAI_BASE_URL") or
    os.environ.get("OPENAI_API_BASE") or
    os.environ.get("OLLAMA_API_BASE", "https://api.openai.com/v1")
)
```

```python
# After
# Process OLLAMA_API_BASE to ensure it includes /v1 if set
ollama_api_base_env = os.environ.get("OLLAMA_API_BASE")
processed_ollama_api_base = None
if ollama_api_base_env:
    # Ensure the URL is correctly formatted for /v1 endpoint
    temp_url = ollama_api_base_env.rstrip('/')
    if not temp_url.endswith('/v1'):
        processed_ollama_api_base = temp_url + '/v1'
    else:
        processed_ollama_api_base = temp_url

base_url = (
    os.environ.get("OPENAI_BASE_URL") or
    os.environ.get("OPENAI_API_BASE") or
    processed_ollama_api_base or  # Use the processed value
    "https://api.openai.com/v1"   # Default if all others are None
)
```

Comment thread on `docs/models/ollama.mdx`:

**Alternative Environment Variable Patterns:**

```bash
# Community recommended pattern
export OLLAMA_API_BASE=http://localhost:11434
```
Severity: medium

It's good that you've documented the OLLAMA_API_BASE pattern like this (http://localhost:11434).

Could you confirm if the intention is that users provide the base URL without the /v1 suffix for this specific variable?

If so, the Python code that consumes this variable needs to ensure /v1 is appended before use, as the OpenAI client typically expects the full path to the versioned API endpoint (e.g., http://localhost:11434/v1). I've added specific suggestions in auto.py and cli.py to handle this to ensure this documentation aligns with robust code behavior.
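The normalization behavior this comment asks for can be demonstrated in isolation. The helper below is a sketch of the suggested logic (the function name `ensure_v1_suffix` is illustrative, not code from this PR):

```python
def ensure_v1_suffix(url):
    """Normalize an Ollama base URL so it ends with the /v1 path the
    OpenAI client expects, tolerating trailing slashes."""
    url = url.rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url
```

On the documented community pattern, `ensure_v1_suffix("http://localhost:11434")` yields `http://localhost:11434/v1`, and URLs that already carry the suffix (with or without a trailing slash) pass through unchanged.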

@MervinPraison MervinPraison deleted the claude/issue-394-20250528_143330 branch June 3, 2025 06:24
shaneholloman pushed a commit to shaneholloman/praisonai that referenced this pull request Feb 4, 2026
…394-20250528_143330

fix: Support OPENAI_BASE_URL and MODEL_NAME for Ollama integration


Development

Successfully merging this pull request may close these issues.

praisonai insist using openai and cannot use the local model from ollama
