
Develop #510

Merged

MervinPraison merged 4 commits into main from develop on May 24, 2025
Conversation

@MervinPraison (Owner) commented May 24, 2025

Summary by CodeRabbit

  • Bug Fixes

    • Improved reliability of command-line tests by ensuring subprocesses explicitly inherit environment variables.
    • Enhanced model initialization to use configuration values directly, avoiding reliance on environment variables for API keys.
  • Chores

    • Upgraded the praisonai package version to 2.2.11 across all Dockerfiles, documentation, and deployment scripts.
    • Added advanced debugging and diagnostic steps to the continuous integration workflow for better visibility into configuration and framework selection during tests.
    • Updated project version to 2.2.11 in configuration files.
  • Documentation

    • Updated documentation to reflect the new praisonai package version (2.2.11) in setup instructions.

…tHub Actions workflow

- Introduced a new environment variable for log level and additional steps to search for the 'Researcher' role in YAML files, enhancing visibility during the workflow execution.
- Added a step to trace the AutoGen execution path, providing insights into framework decisions and available roles.
- Ensured minimal changes to existing code while improving the debugging and testing process for configuration and role management.
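The 'Researcher' role scan described above runs as shell steps in the workflow; as a sketch of the idea, a Python equivalent (file names and role string assumed from the description, not taken from the actual workflow) might look like:

```python
from pathlib import Path

def find_role_mentions(root=".", role="researcher"):
    """Return YAML files under root that mention the given role (case-insensitive)."""
    hits = []
    for path in Path(root).rglob("*.yaml"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files rather than failing the scan
        if role.lower() in text.lower():
            hits.append(path)
    return hits
```

The actual workflow step would typically use `grep -ri` for the same purpose; this sketch only illustrates what the diagnostic is checking.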
…ctions workflow

- Introduced steps to back up and restore `agents.yaml` and `tools.py` during the workflow execution, preventing interference and ensuring a clean environment for tests.
- Added comprehensive execution debug step to provide detailed insights into the execution path and configuration status.
- Ensured minimal changes to existing code while enhancing the debugging and testing process for configuration management.
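The backup/restore steps above run as shell commands in the workflow; sketched in Python for illustration (file names taken from the commit message, helper names hypothetical):

```python
import os
import shutil

CONFIG_FILES = ("agents.yaml", "tools.py")

def backup_configs(suffix=".bak"):
    # Move the root config files aside so tests don't pick them up
    # through default file resolution.
    for name in CONFIG_FILES:
        if os.path.exists(name):
            shutil.move(name, name + suffix)

def restore_configs(suffix=".bak"):
    # Put the originals back once the tests have finished.
    for name in CONFIG_FILES:
        if os.path.exists(name + suffix):
            shutil.move(name + suffix, name)
```

Pairing the restore step with an `if: always()` condition in the workflow (so it runs even when tests fail) is the usual way to keep the working tree clean between jobs.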
… workflow

- Updated echo statements to provide clearer context regarding the backup of `agents.yaml` and `tools.py`, preventing default file resolution interference during tests.
- Ensured minimal changes to existing code while improving clarity and user understanding of the backup process.
- Incremented PraisonAI version from 2.2.10 to 2.2.11 in `pyproject.toml`, `uv.lock`, and all relevant Dockerfiles for consistency.
- Ensured minimal changes to existing code while maintaining versioning accuracy across the project.
Contributor

coderabbitai Bot commented May 24, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

This update upgrades the praisonai package version from 2.2.10 to 2.2.11 across all Dockerfiles, documentation, and deployment scripts. It introduces enhanced debugging and diagnostic steps to the CI workflow, modifies LLM instantiation in the agent generator for explicit API key handling, and ensures environment variables are correctly passed in tests.

Changes

| File(s) | Change Summary |
| --- | --- |
| docker/Dockerfile, docker/Dockerfile.chat, docker/Dockerfile.dev, docker/Dockerfile.ui | Updated praisonai package version from 2.2.10 to 2.2.11. |
| docs/api/praisonai/deploy.html, praisonai/deploy.py | Updated Dockerfile creation logic to use praisonai 2.2.11. |
| docs/developers/local-development.mdx, docs/ui/chat.mdx, docs/ui/code.mdx | Updated documentation to reference praisonai 2.2.11 in Dockerfile instructions. |
| pyproject.toml | Bumped project version from 2.2.10 to 2.2.11 in metadata. |
| .github/workflows/unittest.yml | Added debug steps, configuration backups, and diagnostics for roles and framework detection in CI. |
| praisonai/agents_generator.py | Modified _run_crewai to explicitly set LLM API keys and base URLs from config, avoiding env lookups. |
| tests/test.py | Ensured subprocess tests inherit environment variables explicitly. |
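The tests/test.py change can be sketched as follows — a minimal illustration of the pattern, assuming a `run_command` helper (the real signature in tests/test.py may differ):

```python
import os
import subprocess
import sys

def run_command(cmd):
    # Pass an explicit copy of the parent environment rather than relying on
    # implicit inheritance, so the subprocess reliably sees variables such as
    # OPENAI_API_KEY even when a test harness has tweaked the invocation.
    env = os.environ.copy()
    result = subprocess.run(cmd, capture_output=True, text=True, env=env)
    return result.stdout
```

An explicit `env=os.environ.copy()` also makes it easy to add or override individual variables for a single test without mutating the parent process environment.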

Sequence Diagram(s)

sequenceDiagram
    participant CI as GitHub Actions CI
    participant FS as File System
    participant Tester as Test Runner
    participant Debug as Diagnostic Steps

    CI->>FS: Backup root agents.yaml and tools.py
    CI->>Debug: Scan YAML files for "researcher" role
    CI->>Debug: Trace PraisonAI and AgentsGenerator framework logic
    CI->>Tester: Run tests
    CI->>FS: Restore root configuration files
sequenceDiagram
    participant AgentsGenerator
    participant Config
    participant PraisonAIModel

    AgentsGenerator->>Config: Read config_list[0] for api_key and base_url
    AgentsGenerator->>PraisonAIModel: Instantiate with api_key_var=None, base_url from config
    AgentsGenerator->>PraisonAIModel: If api_key in config, override model.api_key

Poem

A hop and a skip, a version anew,
PraisonAI’s number now ends with a two!
Dockerfiles freshened, docs in the queue,
Debugging made clearer for the testing crew.
With keys set explicit and CI in sight,
The code hops forward—everything’s right!
🐰✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between dfb62e8 and 9b8c5f6.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (13)
  • .github/workflows/unittest.yml (5 hunks)
  • docker/Dockerfile (1 hunks)
  • docker/Dockerfile.chat (1 hunks)
  • docker/Dockerfile.dev (1 hunks)
  • docker/Dockerfile.ui (1 hunks)
  • docs/api/praisonai/deploy.html (1 hunks)
  • docs/developers/local-development.mdx (1 hunks)
  • docs/ui/chat.mdx (1 hunks)
  • docs/ui/code.mdx (1 hunks)
  • praisonai/agents_generator.py (1 hunks)
  • praisonai/deploy.py (1 hunks)
  • pyproject.toml (2 hunks)
  • tests/test.py (2 hunks)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@MervinPraison merged commit 2e63a14 into main on May 24, 2025
7 of 11 checks passed

netlify Bot commented May 24, 2025

Deploy Preview for praisonai ready!

| Name | Link |
| --- | --- |
| 🔨 Latest commit | 9b8c5f6 |
| 🔍 Latest deploy log | https://app.netlify.com/projects/praisonai/deploys/68323e1dc516e20008d091a5 |
| 😎 Deploy Preview | https://deploy-preview-510--praisonai.netlify.app |

To edit notification comments on pull requests, go to your Netlify project configuration.

Contributor

@gemini-code-assist Bot left a comment


Hello @MervinPraison, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello team,

Gemini here, providing a summary of this pull request to help everyone quickly understand the changes and activity.

This PR primarily focuses on updating the praisonai package version across the project from 2.2.10 to 2.2.11. This update is reflected in various Dockerfiles, documentation snippets, project configuration files (pyproject.toml, uv.lock), and deployment scripts. Additionally, the PR includes a specific enhancement to the CrewAI integration within praisonai/agents_generator.py to improve how LLM API keys and base URLs are handled, allowing configuration via the config_list. A fix is also included in the test suite to ensure environment variables are correctly passed to subprocesses.

Highlights

  • Version Bump: The core praisonai package version has been updated from 2.2.10 to 2.2.11 across the project.
  • Dependency Updates: Dockerfiles (Dockerfile, Dockerfile.chat, Dockerfile.dev, Dockerfile.ui) and deployment scripts (praisonai/deploy.py) have been updated to install the new 2.2.11 version.
  • Documentation Sync: Documentation files (docs/api/praisonai/deploy.html, docs/developers/local-development.mdx, docs/ui/chat.mdx, docs/ui/code.mdx) containing installation or deployment instructions have been updated to reference the 2.2.11 version.
  • CrewAI LLM Configuration Improvement: The _run_crewai method in praisonai/agents_generator.py has been modified to explicitly handle API keys and base URLs from the config_list when initializing PraisonAIModel instances for CrewAI agents, providing more control over LLM configuration.
  • Test Environment Fix: The test suite (tests/test.py) now ensures that environment variables are inherited by subprocess calls, fixing potential issues where tests relying on environment variables might fail.

Changelog

  • docker/Dockerfile
    • Updated praisonai version from 2.2.10 to 2.2.11 in the pip install command (line 4).
  • docker/Dockerfile.chat
    • Updated praisonai version from 2.2.10 to 2.2.11 in the pip install command (line 16).
  • docker/Dockerfile.dev
    • Updated praisonai version from 2.2.10 to 2.2.11 in the pip install command (line 18).
  • docker/Dockerfile.ui
    • Updated praisonai version from 2.2.10 to 2.2.11 in the pip install command (line 16).
  • docs/api/praisonai/deploy.html
    • Updated praisonai version from 2.2.10 to 2.2.11 in the generated Dockerfile content (line 113).
  • docs/developers/local-development.mdx
    • Updated praisonai version from 2.2.10 to 2.2.11 in the pip install command snippet (line 30).
  • docs/ui/chat.mdx
    • Updated praisonai version from 2.2.10 to 2.2.11 in the pip install command snippet (line 158).
  • docs/ui/code.mdx
    • Updated praisonai version from 2.2.10 to 2.2.11 in the pip install command snippet (line 211).
  • praisonai/agents_generator.py
    • Modified _run_crewai to pass api_key_var=None and base_url from config_list to PraisonAIModel (lines 441-443, 449-451, 461-463, 469-471).
    • Added logic to explicitly set llm.api_key and function_calling_llm.api_key from config_list if available (lines 445-446, 453-454, 465-466, 473-474).
  • praisonai/deploy.py
    • Updated praisonai version from 2.2.10 to 2.2.11 in the generated Dockerfile content (line 59).
  • pyproject.toml
    • Updated project version from 2.2.10 to 2.2.11 in the [project] section (line 3).
    • Updated project version from 2.2.10 to 2.2.11 in the [tool.poetry] section (line 92).
  • tests/test.py
    • Added import os (line 3).
    • Modified run_command to inherit environment variables using env=os.environ.copy() for subprocess calls (line 84).
  • uv.lock
    • Updated the locked version of praisonai from 2.2.10 to 2.2.11 (line 3617).
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


A version bumps,
Code flows like pumps,
Tests now inherit env,
CrewAI keys are sent,
No more env var humps.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist Bot left a comment


Code Review

This pull request primarily updates the praisonai dependency to version 2.2.11 across various Dockerfiles, documentation files, and project configuration. It also includes a refinement in praisonai/agents_generator.py for how LLM models are configured, particularly concerning API keys and base URLs, and a helpful fix in tests/test.py to ensure environment variables are correctly passed to subprocesses.

Overall, the version bumps are consistent, and the test fix is a good improvement. The main area for code enhancement is in praisonai/agents_generator.py regarding code duplication.

A more descriptive pull request title and body would be beneficial for future context and easier understanding of the changes.

Summary of Findings

  • Code Duplication in LLM Configuration: In praisonai/agents_generator.py, the logic for initializing and configuring both the primary LLM and the function calling LLM is duplicated across if/else blocks. This impacts maintainability and readability. Refactoring into a shared helper method is recommended.

Merge Readiness

This pull request includes necessary version updates and a good test fix. However, there's a medium severity maintainability issue in praisonai/agents_generator.py due to code duplication. It's recommended to address this refactoring opportunity to improve the long-term health of the codebase before merging.

As an AI, I am not authorized to approve pull requests. Please ensure further review and approval from authorized team members before merging.

Comment on lines 439 to +474
    llm = PraisonAIModel(
        model=llm_model.get("model") or os.environ.get("MODEL_NAME") or "openai/gpt-4o",
        api_key_var=None,  # Don't rely on env var lookup
        base_url=self.config_list[0].get('base_url') if self.config_list else None
    ).get_model()
    # Override with explicit API key from config_list
    if self.config_list and self.config_list[0].get('api_key'):
        llm.api_key = self.config_list[0]['api_key']
else:
    llm = PraisonAIModel(
        api_key_var=None,  # Don't rely on env var lookup
        base_url=self.config_list[0].get('base_url') if self.config_list else None
    ).get_model()
    # Override with explicit API key from config_list
    if self.config_list and self.config_list[0].get('api_key'):
        llm.api_key = self.config_list[0]['api_key']

# Configure function calling LLM
function_calling_llm_model = details.get('function_calling_llm')
if function_calling_llm_model:
    function_calling_llm = PraisonAIModel(
        model=function_calling_llm_model.get("model") or os.environ.get("MODEL_NAME") or "openai/gpt-4o",
        api_key_var=None,  # Don't rely on env var lookup
        base_url=self.config_list[0].get('base_url') if self.config_list else None
    ).get_model()
    # Override with explicit API key from config_list
    if self.config_list and self.config_list[0].get('api_key'):
        function_calling_llm.api_key = self.config_list[0]['api_key']
else:
    function_calling_llm = PraisonAIModel(
        api_key_var=None,  # Don't rely on env var lookup
        base_url=self.config_list[0].get('base_url') if self.config_list else None
    ).get_model()
    # Override with explicit API key from config_list
    if self.config_list and self.config_list[0].get('api_key'):
        function_calling_llm.api_key = self.config_list[0]['api_key']

Severity: medium

The logic for configuring the llm and function_calling_llm (both the if and else blocks for each) is highly repetitive. This duplication makes the code harder to maintain, as any changes to this logic would need to be applied in four similar places.

Could we consider refactoring this into a helper method? Such a method could take the model details (e.g., details.get('llm') or details.get('function_calling_llm')) and a default model name as input, and return the configured LLM instance. This would encapsulate the logic for:

  1. Determining the model name to use (from details, environment, or default).
  2. Extracting base_url and api_key from self.config_list[0] (if self.config_list is available).
  3. Instantiating PraisonAIModel with api_key_var=None (to explicitly manage API key sourcing) and the extracted base_url.
  4. Overriding llm_instance.api_key with the key from self.config_list if present.

This would significantly improve readability and maintainability. The current approach of setting api_key_var=None and then overriding the API key is a clear way to prioritize config_list, which is good, but abstracting the repeated steps would be beneficial.
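To make the suggestion concrete, here is a rough sketch of such a helper. `PraisonAIModel` is stubbed out below purely so the example is self-contained; the real class lives in praisonai and has a richer interface, and the helper name `build_llm` is hypothetical:

```python
import os

class PraisonAIModel:
    """Stand-in for the real praisonai model wrapper, for illustration only."""
    def __init__(self, model=None, api_key_var=None, base_url=None):
        self.model = model
        self.api_key_var = api_key_var
        self.base_url = base_url
        self.api_key = None
    def get_model(self):
        return self

def build_llm(config_list, model_details=None, default_model="openai/gpt-4o"):
    # Consolidates the four near-identical branches into one code path.
    cfg = config_list[0] if config_list else {}
    model_name = None
    if model_details:
        # 1. Determine the model name: details, then environment, then default.
        model_name = model_details.get("model") or os.environ.get("MODEL_NAME") or default_model
    # 2-3. Instantiate with api_key_var=None and the base_url from config.
    llm = PraisonAIModel(
        model=model_name,
        api_key_var=None,  # don't rely on env var lookup
        base_url=cfg.get("base_url"),
    ).get_model()
    # 4. An explicit api_key in config_list overrides anything else.
    if cfg.get("api_key"):
        llm.api_key = cfg["api_key"]
    return llm
```

The four call sites would then reduce to something like `llm = build_llm(self.config_list, details.get('llm'))` and `function_calling_llm = build_llm(self.config_list, details.get('function_calling_llm'))`.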

shaneholloman pushed a commit to shaneholloman/praisonai that referenced this pull request Feb 4, 2026
