Gap #2: Provider-Specific Dispatch Scattered Across Core
Scope: src/praisonai-agents/praisonaiagents/llm/, agent/
Priority: High (depends on Gap #1)
Related: Part of issue #1302 architectural gaps
Problem Statement
The core contains inline if provider == X / elif provider == Y branching on model strings in dozens of hot-path locations instead of delegating to provider adapters. This creates feature-flag-style bloat that violates the protocol-driven architecture and forces core edits for every new provider.
Current Architecture Issues
Provider Branching Proliferation
In llm/llm.py alone:
262 case-insensitive occurrences of gemini|claude|ollama|anthropic|openai|gpt
40+ branching locations in chat loops
Key hotspots (line numbers from llm/llm.py):
479: if self._is_ollama_provider():
493: if self.model.startswith("ollama/"):
1152: if is_ollama:
1273: if not (self._is_ollama_provider() and iteration_count >= self.OLLAMA_SUMMARY_ITERATION_THRESHOLD):
1326: if self.model.startswith("claude-"):
1330: if any(self.model.startswith(prefix) for prefix in ["gemini-", "gemini/"]):
1383: if self.prompt_caching and self._supports_prompt_caching() and self._is_anthropic_model():
1551: if tool_name in gemini_internal_tools:
2031: if use_streaming and formatted_tools and self._is_gemini_model():
Sync/Async Duplication Compounds the Problem
Provider logic appears in both execution paths:
Sync path: lines 2382, 2486, 2540, 2564, 2583, 2618, 2667, 2689
Async path: lines 3235, 3690, 3712, 3725, 3746, 3761, 3782, 3904, 3924
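To make the duplication concrete, a minimal sketch, assuming hypothetical method and helper names (only _is_ollama_provider appears in the hotspots above):

```python
class LLM:
    # Illustrative only: every provider branch has to be written and
    # maintained twice, once per execution path.
    def get_response(self, messages):
        if self._is_ollama_provider():          # sync copy of the branch
            messages = self._fix_ollama_messages(messages)
        return self._completion(messages)

    async def get_response_async(self, messages):
        if self._is_ollama_provider():          # async copy of the same branch
            messages = self._fix_ollama_messages(messages)
        return await self._acompletion(messages)
```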
Beyond LLM Layer
Pattern repeats across core modules:
agent/deep_research_agent.py: if provider == "litellm" / elif provider == "gemini" at ~301-304, 1181-1190, 1249-1258, 1340, 1383, 1427
agent/chat_mixin.py: tool conversion checks with hasattr(tool, "to_openai_tool")
Architecture Violations
Provider knowledge is hard-coded across llm.py, deep_research_agent.py, and chat_mixin.py instead of being isolated behind a provider interface.
Proposed Solution
Introduce an LLMProviderAdapter protocol with per-provider hooks.
Architecture:
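A minimal sketch of what the protocol could look like; the hook names below are illustrative, derived from the branch conditions catalogued above, not a final API:

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class LLMProviderAdapter(Protocol):
    """Per-provider hooks that replace inline model-string checks."""

    def matches(self, model: str) -> bool:
        """Return True if this adapter handles the given model string."""
        ...

    def supports_prompt_caching(self) -> bool:
        """Whether the provider supports prompt caching (e.g. Anthropic)."""
        ...

    def supports_streaming_with_tools(self) -> bool:
        """Whether streaming can be combined with tool calls (Gemini quirk)."""
        ...

    def internal_tool_names(self) -> set[str]:
        """Provider-internal tools (e.g. Gemini's built-in tools)."""
        ...

    def prepare_messages(self, messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
        """Apply provider-specific message formatting before dispatch."""
        ...
```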
Implementation Strategy
Replace if provider == logic in hot paths with self._adapter.method() calls.
Example Transformation
Before:
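(A condensed illustration: the conditions are verbatim from the hotspots above, but the branch bodies are paraphrased and _apply_prompt_caching is a hypothetical helper name.)

```python
# llm/llm.py today: provider knowledge inlined in the chat hot path
if self.prompt_caching and self._supports_prompt_caching() and self._is_anthropic_model():
    messages = self._apply_prompt_caching(messages)

if use_streaming and formatted_tools and self._is_gemini_model():
    use_streaming = False  # hypothetical body: work around a Gemini streaming quirk
```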
After:
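(The same hot path with the decisions delegated; adapter method names follow the illustrative protocol sketched above.)

```python
# llm/llm.py after refactoring: the adapter owns the provider-specific decisions
if self.prompt_caching and self._adapter.supports_prompt_caching():
    messages = self._adapter.prepare_messages(messages)

if use_streaming and formatted_tools and not self._adapter.supports_streaming_with_tools():
    use_streaming = False
```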
Success Criteria
All provider branches in llm/llm.py replaced with adapter calls
Implementation Files
Core Protocol:
llm/protocols.py - Add LLMProviderAdapter protocol
llm/adapters/__init__.py - Adapter registry and base classes (resolution sketched after these lists)
Provider Adapters:
llm/adapters/ollama.py - Ollama-specific behaviors
llm/adapters/anthropic.py - Claude/prompt caching features
llm/adapters/gemini.py - Internal tools, streaming quirks
llm/adapters/openai.py - Baseline adapter
llm/adapters/litellm.py - Fallback for unknown providers
Core Refactoring:
llm/llm.py - Remove all provider branches, use adapter delegation
agent/deep_research_agent.py - Remove provider conditionals
agent/chat_mixin.py - Clean tool conversion logic
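A sketch of how the registry in llm/adapters/__init__.py could resolve a model string to an adapter; resolve_adapter and the try-in-order design are illustrative, with the litellm adapter as the fallback named in the file list above:

```python
# llm/adapters/__init__.py (illustrative): adapters are tried in order;
# LiteLLMAdapter catches providers no other adapter claims.
from ..protocols import LLMProviderAdapter
from .ollama import OllamaAdapter
from .anthropic import AnthropicAdapter
from .gemini import GeminiAdapter
from .openai import OpenAIAdapter
from .litellm import LiteLLMAdapter

_ADAPTERS = [OllamaAdapter(), AnthropicAdapter(), GeminiAdapter(), OpenAIAdapter()]
_FALLBACK = LiteLLMAdapter()

def resolve_adapter(model: str) -> LLMProviderAdapter:
    """Pick the first adapter whose matches() accepts the model string."""
    for adapter in _ADAPTERS:
        if adapter.matches(model):
            return adapter
    return _FALLBACK
```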
Dependencies
Requires Gap #1 completion: dual execution paths make this refactoring complex.
Enables Gap #3: clean provider abstraction helps with memory/knowledge adapter patterns.
This issue is part of the larger architectural refactoring outlined in #1302. The provider adapter protocol eliminates feature-flag bloat and enables third-party provider extensions.