Conversation
…tHub Actions workflow
- Introduced a new environment variable for log level and additional steps to search for the 'Researcher' role in YAML files, enhancing visibility during the workflow execution.
- Added a step to trace the AutoGen execution path, providing insights into framework decisions and available roles.
- Ensured minimal changes to existing code while improving the debugging and testing process for configuration and role management.

…ctions workflow
- Introduced steps to back up and restore `agents.yaml` and `tools.py` during the workflow execution, preventing interference and ensuring a clean environment for tests.
- Added a comprehensive execution debug step to provide detailed insights into the execution path and configuration status.
- Ensured minimal changes to existing code while enhancing the debugging and testing process for configuration management.

… workflow
- Updated echo statements to provide clearer context regarding the backup of `agents.yaml` and `tools.py`, preventing default file resolution interference during tests.
- Ensured minimal changes to existing code while improving clarity and user understanding of the backup process.
- Incremented PraisonAI version from 2.2.10 to 2.2.11 in `pyproject.toml`, `uv.lock`, and all relevant Dockerfiles for consistency.
- Ensured minimal changes to existing code while maintaining versioning accuracy across the project.
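The 'Researcher' role scan described in the workflow changes above can be sketched as a small script. This is an illustrative sketch only: the glob pattern, role name, and function name are assumptions, not the actual CI step.

```python
import glob
import re

def find_role(role="researcher", pattern="**/*.yaml"):
    """Scan YAML files for lines mentioning a role, to aid CI debugging."""
    hits = []
    for path in sorted(glob.glob(pattern, recursive=True)):
        with open(path, encoding="utf-8") as fh:
            for lineno, line in enumerate(fh, start=1):
                if re.search(role, line, re.IGNORECASE):
                    hits.append((path, lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for path, lineno, text in find_role():
        print(f"{path}:{lineno}: {text}")
```

A step like this makes it visible in the CI log which configuration file actually defines the role the tests expect.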
Caution: Review failed. The pull request is closed.

Walkthrough

This update upgrades the …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CI as GitHub Actions CI
    participant FS as File System
    participant Tester as Test Runner
    participant Debug as Diagnostic Steps
    CI->>FS: Backup root agents.yaml and tools.py
    CI->>Debug: Scan YAML files for "researcher" role
    CI->>Debug: Trace PraisonAI and AgentsGenerator framework logic
    CI->>Tester: Run tests
    CI->>FS: Restore root configuration files
```

```mermaid
sequenceDiagram
    participant AgentsGenerator
    participant Config
    participant PraisonAIModel
    AgentsGenerator->>Config: Read config_list[0] for api_key and base_url
    AgentsGenerator->>PraisonAIModel: Instantiate with api_key_var=None, base_url from config
    AgentsGenerator->>PraisonAIModel: If api_key in config, override model.api_key
```
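The backup/run/restore flow in the first diagram can be sketched in Python. This is a sketch under assumptions: the helper name, temp-directory layout, and callback style are illustrative, not the actual workflow implementation.

```python
import pathlib
import shutil
import tempfile

def backup_and_restore(files, run_tests):
    """Back up root config files, run the tests, then restore the originals,
    mirroring the CI steps in the diagram above."""
    backup_dir = pathlib.Path(tempfile.mkdtemp())
    saved = {}
    for name in files:
        p = pathlib.Path(name)
        if p.exists():
            # Copy aside so the test run cannot clobber the original.
            saved[name] = shutil.copy(p, backup_dir / p.name)
    try:
        return run_tests()
    finally:
        # Restore even if the tests raised, keeping the workspace clean.
        for name, copy_path in saved.items():
            shutil.copy(copy_path, name)
```

Restoring in a `finally` block mirrors the CI guarantee that the root `agents.yaml` and `tools.py` come back regardless of the test outcome.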
✅ Deploy Preview for praisonai ready!
To edit notification comments on pull requests, go to your Netlify project configuration.
Hello @MervinPraison, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team,
Gemini here, providing a summary of this pull request to help everyone quickly understand the changes and activity.
This PR primarily focuses on updating the praisonai package version across the project from 2.2.10 to 2.2.11. This update is reflected in various Dockerfiles, documentation snippets, project configuration files (pyproject.toml, uv.lock), and deployment scripts. Additionally, the PR includes a specific enhancement to the CrewAI integration within praisonai/agents_generator.py to improve how LLM API keys and base URLs are handled, allowing configuration via the config_list. A fix is also included in the test suite to ensure environment variables are correctly passed to subprocesses.
Highlights
- Version Bump: The core `praisonai` package version has been updated from `2.2.10` to `2.2.11` across the project.
- Dependency Updates: Dockerfiles (`Dockerfile`, `Dockerfile.chat`, `Dockerfile.dev`, `Dockerfile.ui`) and deployment scripts (`praisonai/deploy.py`) have been updated to install the new `2.2.11` version.
- Documentation Sync: Documentation files (`docs/api/praisonai/deploy.html`, `docs/developers/local-development.mdx`, `docs/ui/chat.mdx`, `docs/ui/code.mdx`) containing installation or deployment instructions have been updated to reference the `2.2.11` version.
- CrewAI LLM Configuration Improvement: The `_run_crewai` method in `praisonai/agents_generator.py` has been modified to explicitly handle API keys and base URLs from the `config_list` when initializing `PraisonAIModel` instances for CrewAI agents, providing more control over LLM configuration.
- Test Environment Fix: The test suite (`tests/test.py`) now ensures that environment variables are inherited by subprocess calls, fixing potential issues where tests relying on environment variables might fail.
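The test-environment fix in the last highlight can be illustrated with a minimal sketch. `run_command` here is a simplified stand-in for the helper in `tests/test.py`, and `EXAMPLE_FLAG` is a hypothetical variable used only for illustration.

```python
import os
import subprocess
import sys

def run_command(cmd, extra_env=None):
    # Copy the parent environment so the child process sees the same
    # variables (API keys, model names) as the test runner; passing a
    # fresh dict as `env` would otherwise discard the inherited
    # environment entirely.
    env = os.environ.copy()
    env.update(extra_env or {})
    result = subprocess.run(cmd, capture_output=True, text=True, env=env)
    return result.stdout

if __name__ == "__main__":
    out = run_command(
        [sys.executable, "-c", "import os; print(os.environ['EXAMPLE_FLAG'])"],
        extra_env={"EXAMPLE_FLAG": "1"},  # hypothetical variable
    )
    print(out.strip())
```

The key point is `os.environ.copy()`: `subprocess.run` with `env=None` already inherits the environment, but as soon as any explicit `env` mapping is supplied, it must start from a copy of `os.environ` or the child loses everything else.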
Changelog
Click here to see the changelog

- docker/Dockerfile
  - Updated `praisonai` version from `2.2.10` to `2.2.11` in the `pip install` command (line 4).
- docker/Dockerfile.chat
  - Updated `praisonai` version from `2.2.10` to `2.2.11` in the `pip install` command (line 16).
- docker/Dockerfile.dev
  - Updated `praisonai` version from `2.2.10` to `2.2.11` in the `pip install` command (line 18).
- docker/Dockerfile.ui
  - Updated `praisonai` version from `2.2.10` to `2.2.11` in the `pip install` command (line 16).
- docs/api/praisonai/deploy.html
  - Updated `praisonai` version from `2.2.10` to `2.2.11` in the generated Dockerfile content (line 113).
- docs/developers/local-development.mdx
  - Updated `praisonai` version from `2.2.10` to `2.2.11` in the `pip install` command snippet (line 30).
- docs/ui/chat.mdx
  - Updated `praisonai` version from `2.2.10` to `2.2.11` in the `pip install` command snippet (line 158).
- docs/ui/code.mdx
  - Updated `praisonai` version from `2.2.10` to `2.2.11` in the `pip install` command snippet (line 211).
- praisonai/agents_generator.py
  - Modified `_run_crewai` to pass `api_key_var=None` and `base_url` from `config_list` to `PraisonAIModel` (lines 441-443, 449-451, 461-463, 469-471).
  - Added logic to explicitly set `llm.api_key` and `function_calling_llm.api_key` from `config_list` if available (lines 445-446, 453-454, 465-466, 473-474).
- praisonai/deploy.py
  - Updated `praisonai` version from `2.2.10` to `2.2.11` in the generated Dockerfile content (line 59).
- pyproject.toml
  - Updated project version from `2.2.10` to `2.2.11` in the `[project]` section (line 3).
  - Updated project version from `2.2.10` to `2.2.11` in the `[tool.poetry]` section (line 92).
- tests/test.py
  - Added `import os` (line 3).
  - Modified `run_command` to inherit environment variables using `env=os.environ.copy()` for subprocess calls (line 84).
- uv.lock
  - Updated the locked version of `praisonai` from `2.2.10` to `2.2.11` (line 3617).
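A bump touching this many files invites a quick consistency check. The snippet below is an illustrative sketch, not part of the PR; the sample lines stand in for the real file contents, which are assumptions here.

```python
import re

EXPECTED = "2.2.11"

# Sample lines standing in for the real files touched by the bump;
# the exact file contents are assumptions for illustration.
snippets = {
    "docker/Dockerfile": "RUN pip install praisonai==2.2.11",
    "pyproject.toml": 'version = "2.2.11"',
    "praisonai/deploy.py": '"RUN pip install praisonai==2.2.11"',
}

def check_versions(files):
    """Return (name, version) pairs that still pin a different version."""
    mismatches = []
    for name, text in files.items():
        for found in re.findall(r"\d+\.\d+\.\d+", text):
            if found != EXPECTED:
                mismatches.append((name, found))
    return mismatches

if __name__ == "__main__":
    print(check_versions(snippets))
```

An empty result means every sampled file agrees on `2.2.11`; any `2.2.10` leftover would be reported with its file name.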
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
A version bumps,
Code flows like pumps,
Tests now inherit env,
CrewAI keys are sent,
No more env var humps.
Footnotes

[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request primarily updates the `praisonai` dependency to version 2.2.11 across various Dockerfiles, documentation files, and project configuration. It also includes a refinement in `praisonai/agents_generator.py` for how LLM models are configured, particularly concerning API keys and base URLs, and a helpful fix in `tests/test.py` to ensure environment variables are correctly passed to subprocesses.
Overall, the version bumps are consistent, and the test fix is a good improvement. The main area for code enhancement is in `praisonai/agents_generator.py` regarding code duplication.
A more descriptive pull request title and body would be beneficial for future context and easier understanding of the changes.
Summary of Findings
- Code Duplication in LLM Configuration: In `praisonai/agents_generator.py`, the logic for initializing and configuring both the primary LLM and the function calling LLM is duplicated across `if`/`else` blocks. This impacts maintainability and readability. Refactoring into a shared helper method is recommended.
Merge Readiness
This pull request includes necessary version updates and a good test fix. However, there's a medium severity maintainability issue in praisonai/agents_generator.py due to code duplication. It's recommended to address this refactoring opportunity to improve the long-term health of the codebase before merging.
As an AI, I am not authorized to approve pull requests. Please ensure further review and approval from authorized team members before merging.
```diff
             llm = PraisonAIModel(
                 model=llm_model.get("model") or os.environ.get("MODEL_NAME") or "openai/gpt-4o",
                 api_key_var=None,  # Don't rely on env var lookup
                 base_url=self.config_list[0].get('base_url') if self.config_list else None
             ).get_model()
             # Override with explicit API key from config_list
             if self.config_list and self.config_list[0].get('api_key'):
                 llm.api_key = self.config_list[0]['api_key']
         else:
-            llm = PraisonAIModel().get_model()
+            llm = PraisonAIModel(
+                api_key_var=None,  # Don't rely on env var lookup
+                base_url=self.config_list[0].get('base_url') if self.config_list else None
+            ).get_model()
+            # Override with explicit API key from config_list
+            if self.config_list and self.config_list[0].get('api_key'):
+                llm.api_key = self.config_list[0]['api_key']

         # Configure function calling LLM
         function_calling_llm_model = details.get('function_calling_llm')
         if function_calling_llm_model:
             function_calling_llm = PraisonAIModel(
                 model=function_calling_llm_model.get("model") or os.environ.get("MODEL_NAME") or "openai/gpt-4o",
                 api_key_var=None,  # Don't rely on env var lookup
                 base_url=self.config_list[0].get('base_url') if self.config_list else None
             ).get_model()
             # Override with explicit API key from config_list
             if self.config_list and self.config_list[0].get('api_key'):
                 function_calling_llm.api_key = self.config_list[0]['api_key']
         else:
-            function_calling_llm = PraisonAIModel().get_model()
+            function_calling_llm = PraisonAIModel(
+                api_key_var=None,  # Don't rely on env var lookup
+                base_url=self.config_list[0].get('base_url') if self.config_list else None
+            ).get_model()
+            # Override with explicit API key from config_list
+            if self.config_list and self.config_list[0].get('api_key'):
+                function_calling_llm.api_key = self.config_list[0]['api_key']
```
The logic for configuring `llm` and `function_calling_llm` (both the `if` and `else` blocks for each) is highly repetitive. This duplication makes the code harder to maintain, as any changes to this logic would need to be applied in four similar places.

Could we consider refactoring this into a helper method? Such a method could take the model details (e.g. `details.get('llm')` or `details.get('function_calling_llm')`) and a default model name as input, and return the configured LLM instance. This would encapsulate the logic for:

- Determining the model name to use (from details, environment, or default).
- Extracting `base_url` and `api_key` from `self.config_list[0]` (if `self.config_list` is available).
- Instantiating `PraisonAIModel` with `api_key_var=None` (to explicitly manage API key sourcing) and the extracted `base_url`.
- Overriding `llm_instance.api_key` with the key from `self.config_list` if present.

This would significantly improve readability and maintainability. The current approach of setting `api_key_var=None` and then overriding the API key is a clear way to prioritize `config_list`, which is good, but abstracting the repeated steps would be beneficial.
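A sketch of the suggested helper, under the assumption that `PraisonAIModel` keeps the constructor shown in the diff. The helper name `build_llm` and the `model_cls` hook are hypothetical, and `_StubModel` is a stand-in for `PraisonAIModel`, which is not importable here.

```python
import os

class _StubModel:
    """Stand-in for PraisonAIModel, which is not importable in this sketch."""
    def __init__(self, model=None, api_key_var="OPENAI_API_KEY", base_url=None):
        self.model, self.api_key_var, self.base_url = model, api_key_var, base_url
        self.api_key = None

    def get_model(self):
        return self

def build_llm(config_list, model_details=None,
              default_model="openai/gpt-4o", model_cls=_StubModel):
    """Collapse the four duplicated branches into one configuration path."""
    first = config_list[0] if config_list else {}
    kwargs = {
        "api_key_var": None,               # don't rely on env var lookup
        "base_url": first.get("base_url"),
    }
    if model_details:                      # explicit model requested in YAML
        kwargs["model"] = (model_details.get("model")
                           or os.environ.get("MODEL_NAME")
                           or default_model)
    llm = model_cls(**kwargs).get_model()
    if first.get("api_key"):               # config_list overrides everything
        llm.api_key = first["api_key"]
    return llm
```

With something like this, `_run_crewai` could call `build_llm(self.config_list, details.get('llm'))` and `build_llm(self.config_list, details.get('function_calling_llm'))`, leaving a single place to change the configuration logic.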
Summary by CodeRabbit

- Bug Fixes
- Chores
- Documentation