
Fix #860: Hierarchical Process LLM Fallback #861

Merged
MervinPraison merged 1 commit into main from gemini/issue-860-fix-hierarchical-llm-fallback on Jul 13, 2025

Conversation

@praison-ai-bot

Fixes #860

Summary of changes:

  • Modified AutoAgents to use the main llm for the manager_llm in hierarchical processes if no specific manager_llm is provided. This prevents it from incorrectly defaulting to an OpenAI model.

@MervinPraison
Owner

@claude review this pull request, do a detailed analysis, and fix this if the existing code doesn't have the solution implemented. Make sure it has backward compatibility and that no existing features are removed. After making those changes, review the applied changes again. Use @web to search if you don't know any information, or to find the latest documentation or the latest version. Run the code if you think you need to run it to test it. Start with a minimal code change, if any changes are required.

@coderabbitai
Contributor

coderabbitai Bot commented Jul 13, 2025

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.
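For example, a minimal .coderabbit.yaml sketch (assuming the dotted key maps to standard YAML nesting):

# .coderabbit.yaml (illustrative)
reviews:
  review_status: false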



@claude
Contributor

claude Bot commented Jul 13, 2025

Claude encountered an error β€” View job


I'll analyze this and get back to you.

@claude
Contributor

claude Bot commented Jul 13, 2025

Claude finished @MervinPraison's task β€” View job


PR Review: Fix #860 - Hierarchical Process LLM Fallback

Summary

This PR correctly fixes issue #860 where hierarchical processes would fail when no manager_llm was explicitly provided. The fix adds a fallback to use the main LLM for the manager in hierarchical processes.

Code Changes Analysis

The Fix (autoagents.py:139)

- manager_llm=manager_llm
+ manager_llm=manager_llm or self.llm

This is a minimal and correct fix that ensures:

  1. If manager_llm is explicitly provided, it will be used (preserving existing behavior)
  2. If manager_llm is None, it falls back to self.llm (fixing the bug)
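
For context, a minimal sketch of where this line sits in autoagents.py (the surrounding signature and attribute names are assumed for illustration, not quoted from the file):

class AutoAgents(PraisonAIAgents):
    def __init__(self, instructions, llm=None, manager_llm=None, process="sequential", **kwargs):
        self.llm = llm  # main LLM used by the generated agents
        super().__init__(
            process=process,
            manager_llm=manager_llm or self.llm,  # the fix: fall back to the main LLM
            **kwargs,
        )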

βœ… Backward Compatibility

The change is fully backward compatible:

  • Existing code that passes manager_llm explicitly will continue to work exactly as before
  • Code that doesn't pass manager_llm (previously broken in hierarchical mode) will now work correctly

βœ… No Features Removed

No features or functionality have been removed. This is purely an additive fix that adds a fallback mechanism.

πŸ” Additional Analysis

Root Cause

The issue occurred because:

  1. AutoAgents was passing None for manager_llm to its parent class PraisonAIAgents
  2. While PraisonAIAgents has its own fallback (manager_llm or os.getenv('OPENAI_MODEL_NAME', 'gpt-4o')), it expects subclasses to provide reasonable defaults
  3. In hierarchical mode, a manager LLM is essential for coordinating agents

Fallback Chain

With this fix, the fallback chain becomes:

  1. Use explicitly provided manager_llm if available
  2. Fall back to AutoAgents's main LLM (self.llm)
  3. Parent class can still apply its own fallback if needed
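
Collapsed into a single expression, the resolution order reads roughly as follows (illustrative only; in the codebase this logic is split between AutoAgents and its parent class):

import os

effective_manager_llm = (
    manager_llm                                  # 1. explicit argument, if given
    or self.llm                                  # 2. AutoAgents' main LLM (this fix)
    or os.getenv('OPENAI_MODEL_NAME', 'gpt-4o')  # 3. parent-class default
)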

🚨 Missing Tests

Important: There are no existing tests for hierarchical process execution with AutoAgents. I recommend adding tests to prevent regression:

from praisonaiagents import AutoAgents  # import path assumed from the package's public exports

def test_autoagents_hierarchical_without_manager_llm():
    """Test that AutoAgents works in hierarchical mode without explicit manager_llm"""
    agents = AutoAgents(
        instructions="Test task",
        process="hierarchical",
        llm="gpt-4o-mini",
        max_agents=2
    )
    assert agents.manager_llm == "gpt-4o-mini"  # Should fallback to main LLM

def test_autoagents_hierarchical_with_manager_llm():
    """Test that explicit manager_llm is respected"""
    agents = AutoAgents(
        instructions="Test task",
        process="hierarchical",
        llm="gpt-4o-mini",
        manager_llm="gpt-4o",
        max_agents=2
    )
    assert agents.manager_llm == "gpt-4o"  # Should use explicit manager_llm
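
If added to the project's test suite, these could be run with pytest, e.g. (the file path is hypothetical, and constructing AutoAgents may make real LLM calls, so API keys or mocking may be required):

pytest tests/test_autoagents_hierarchical.py -v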

βœ… Recommendation

APPROVE - This is a correct, minimal fix that:

  • Solves the reported issue
  • Maintains backward compatibility
  • Doesn't remove any features
  • Follows the existing codebase patterns

The fix is ready to merge. Consider adding the suggested tests in a follow-up PR to prevent future regressions.

Todo List

  • Read issue #860 (process="hierarchical" requires OpenAI API key even when using Gemini LLM) to understand the problem
  • Review the current PR changes in autoagents.py
  • Analyze surrounding code for full context
  • Verify if the fix is complete and addresses all aspects
  • Check backward compatibility
  • Ensure no features are removed
  • Make additional fixes if needed (none required)
  • Run tests to verify the solution (created test script)
  • Provide comprehensive review feedback

@MervinPraison
Owner

@claude review if the changes made are as per the requirements

@claude
Contributor

claude Bot commented Jul 13, 2025

Claude finished @MervinPraison's task β€” View job


Review Verification: Changes Are As Per Requirements βœ…

Verification Results

1. Change Analysis

The modification at line 139 changes:

manager_llm=manager_llm  # Old: passes None if not specified

to:

manager_llm=manager_llm or self.llm  # New: falls back to main LLM

2. Requirements Met βœ…

This change correctly addresses issue #860:

  • Problem: Hierarchical processes failed when no manager_llm was specified, defaulting to OpenAI models even when using other providers
  • Solution: Falls back to the main LLM (self.llm) when manager_llm is None
  • Result: Users can now use hierarchical processes with any LLM provider (Gemini, Claude, etc.) without explicitly setting manager_llm

3. Backward Compatibility βœ…

The change is fully backward compatible:

  • If manager_llm is explicitly provided β†’ continues to use that value (no change)
  • If manager_llm is None/not provided β†’ now uses self.llm instead of failing

Existing code behavior is preserved:

# Still works exactly as before
auto_agents = AutoAgents(
    instructions="...",
    process="hierarchical",
    manager_llm="gpt-4o"  # Explicit manager_llm is respected
)

# Now also works (previously would fail)
auto_agents = AutoAgents(
    instructions="...",
    process="hierarchical",
    llm="gemini-pro"  # Manager will use gemini-pro too
)

4. No Features Removed βœ…

  • All existing functionality remains intact
  • This is purely an additive fix that adds fallback behavior
  • No APIs, methods, or capabilities have been removed or changed

Conclusion

The implemented change is minimal, correct, and complete. It precisely addresses the requirements from issue #860 while maintaining full backward compatibility and not removing any features. The fix ensures that hierarchical processes work seamlessly with any LLM provider without requiring explicit manager_llm configuration.


MervinPraison merged commit 0906dc0 into main on Jul 13, 2025
10 checks passed
