9 changes: 9 additions & 0 deletions src/praisonai-agents/praisonaiagents/agent/agent.py
@@ -487,6 +487,7 @@ def __init__(
approval: Optional[Union[bool, str, Dict[str, Any], 'ApprovalConfig', 'ApprovalProtocol']] = None,
tool_timeout: Optional[int] = None, # P8/G11: Timeout in seconds for each tool call
learn: Optional[Union[bool, str, Dict[str, Any], 'LearnConfig']] = None, # Continuous learning (peer to memory)
backend: Optional[Any] = None, # External managed agent backend (e.g., ManagedAgentIntegration)
⚠️ Potential issue | 🔴 Critical

`backend` is only stored, never used.

These changes add a public `backend` option and document delegated execution, but none of the execution paths in this file consult `self.backend`. `Agent(backend=managed)` therefore still runs locally, so the feature is effectively unimplemented.

Also applies to: 578-582, 1807-1808

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-agents/praisonaiagents/agent/agent.py` at line 490: The new
backend parameter is stored but never consulted; update the agent execution
paths to delegate to the external backend when present. Concretely, in the Agent
class use the stored self.backend in the main run/execute flow (e.g., Agent.run
and the agent execution entrypoints such as Agent.execute or internal methods
like Agent._execute_action / Agent._execute) to call into the backend (e.g.,
backend.execute or a documented delegate method) and return its result, falling
back to the current local implementation when self.backend is None; make the
delegation consistent wherever execution is performed (the places flagged around
the constructor and the other execution entrypoints referenced) and ensure
errors from the backend are propagated or translated the same way as local
execution.
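
For illustration, a minimal sketch of the delegation the prompt describes; `_run_local` is a placeholder name for the existing local path, while `_delegate_to_backend` is the helper this PR adds in execution_mixin.py:

def run(self, prompt: str, **kwargs):
    # Delegate to the managed backend when one is configured.
    if getattr(self, "backend", None) is not None:
        return self._delegate_to_backend(prompt, **kwargs)
    # Otherwise fall back to the existing local execution path.
    return self._run_local(prompt, **kwargs)  # placeholder, not the real method name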

):
"""Initialize an Agent instance.

@@ -574,6 +575,11 @@ def __init__(
- LearnConfig: Custom configuration
Learning is a first-class citizen, peer to memory. It captures patterns,
preferences, and insights from interactions to improve future responses.
backend: External managed agent backend for hybrid execution. Accepts:
- ManagedAgentIntegration: External managed agent service
- None: Use local execution (default)
When provided, agent can delegate execution to managed infrastructure
for long-running tasks or when local resources are constrained.

Raises:
ValueError: If all of name, role, goal, backstory, and instructions are None.
@@ -1798,6 +1804,9 @@ def __init__(
self._output_file = output_file if _output_config else None
self._output_template = output_template if _output_config else None

# Backend - external managed agent backend for hybrid execution
self.backend = backend
Comment on lines +1807 to +1808
P1: `backend` is stored but never consulted during execution

The `backend` attribute is stored here, but there is no code in the agent's `start()`, `chat()`, or any other execution path that checks `self.backend` and delegates execution to it. As a result, even when a `ManagedAgentIntegration` is passed, the agent always executes locally. The usage example in the PR description (`agent.start("Create a FastAPI app")`) will silently ignore the backend.

The integration is incomplete until the execution path (e.g., `chat()` or `start()`) checks `if self.backend is not None: return await self.backend.execute(prompt)` (or similar) and delegates accordingly.

Comment on lines +1807 to +1808
Copilot AI Apr 10, 2026

The new `backend` parameter is stored on the Agent instance, but it is never referenced elsewhere in `agent.py` (a search shows only this assignment). As-is, passing a managed backend won’t change execution behavior, despite the docstring describing “hybrid execution”. Either wire `self.backend` into the execution/chat flow (delegating to the managed backend), or remove/soft-launch the parameter and docs until it has an effect.

# Telemetry - lazy initialized via property for performance
self.__telemetry = None
self.__telemetry_initialized = False
21 changes: 21 additions & 0 deletions src/praisonai-agents/praisonaiagents/agent/chat_mixin.py
@@ -1049,6 +1049,27 @@ def chat(self, prompt: str, temperature: float = 1.0, tools: Optional[List[Any]]
'required' forces the LLM to call a tool before responding.
...other args...
"""
# Check if external managed backend is configured
if hasattr(self, 'backend') and self.backend is not None:
# Extract kwargs for delegation, excluding 'self' and function locals
delegation_kwargs = {
'temperature': temperature,
'tools': tools,
'output_json': output_json,
'output_pydantic': output_pydantic,
'reasoning_steps': reasoning_steps,
'stream': stream,
'task_name': task_name,
'task_description': task_description,
'task_id': task_id,
'config': config,
'force_retrieval': force_retrieval,
'skip_retrieval': skip_retrieval,
'attachments': attachments,
'tool_choice': tool_choice
}
return self._delegate_to_backend(prompt, **delegation_kwargs)
Comment on lines +1052 to +1071

⚠️ Potential issue | 🟠 Major

Don't bypass the normal `chat()` pipeline on backend calls.

This early return skips the rate limiter, BEFORE/AFTER_AGENT hooks, run tracking, guardrails, and the `agent_start`/`agent_end` tracing in `_chat_impl()`. Backend-backed agents will therefore behave differently from local agents in ways callers already rely on. Please route delegation through the same wrapper logic, or emit the equivalent hooks/traces/validation around the backend call.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-agents/praisonaiagents/agent/chat_mixin.py` around lines 1052-1071: The early return that calls self._delegate_to_backend(prompt,
**delegation_kwargs) bypasses the normal chat pipeline (rate limiter,
BEFORE/AFTER_AGENT hooks, run tracking, guardrails, and agent_start/agent_end
tracing) implemented in _chat_impl/chat(), so remove the early return and route
backend delegation through the same wrapper logic: either call the existing
_chat_impl (or the public chat() entry) and have it detect and invoke
_delegate_to_backend internally, or explicitly invoke the same pre/post steps
(rate limiter, BEFORE/AFTER_AGENT hooks, run tracking, guardrails,
agent_start/agent_end tracing) around a call to _delegate_to_backend; keep the
same delegation_kwargs and still pass prompt, and ensure any flags needed to
indicate backend delegation are added so upstream tracing/validation behave
identically for backend-backed agents.
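
One possible shape for that fix, sketched with hypothetical names for the limiter, hook, and guardrail internals (the real `_chat_impl()` plumbing may differ):

def chat(self, prompt: str, **kwargs):
    # Single entry point: _chat_impl decides local vs. backend execution,
    # so the surrounding pipeline fires in both cases.
    return self._chat_impl(prompt, **kwargs)

def _chat_impl(self, prompt: str, **kwargs):
    self._rate_limiter.acquire()             # hypothetical rate limiter
    self._emit_hook("BEFORE_AGENT", prompt)  # hypothetical hook API
    try:
        if getattr(self, "backend", None) is not None:
            result = self._delegate_to_backend(prompt, **kwargs)
        else:
            result = self._chat_local(prompt, **kwargs)  # placeholder for the local path
        return self._apply_guardrails(result)  # hypothetical guardrail step
    finally:
        self._emit_hook("AFTER_AGENT", prompt)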


# Emit context trace event (zero overhead when not set)
from ..trace.context_events import get_context_emitter
_trace_emitter = get_context_emitter()
170 changes: 170 additions & 0 deletions src/praisonai-agents/praisonaiagents/agent/execution_mixin.py
@@ -251,6 +251,10 @@ def run(self, prompt: str, **kwargs: Any) -> Optional[str]:
- Background processing
- API endpoints
"""
# Check if external managed backend is configured
if hasattr(self, 'backend') and self.backend is not None:
return self._delegate_to_backend(prompt, **kwargs)

# Production defaults: no streaming, no display
if 'stream' not in kwargs:
kwargs['stream'] = False
@@ -274,6 +278,168 @@ def run(self, prompt: str, **kwargs: Any) -> Optional[str]:

return result

def _delegate_to_backend(self, prompt: str, **kwargs) -> Optional[str]:
"""Delegate execution to external managed backend (e.g., ManagedAgentIntegration)."""
import asyncio

# Check if backend has required methods
if not hasattr(self.backend, 'execute'):
raise RuntimeError(f"Backend {type(self.backend).__name__} does not support execute() method")

# Handle streaming vs non-streaming
stream_requested = kwargs.get('stream', False)

if stream_requested:
# For streaming, delegate to backend's stream method if available
if hasattr(self.backend, 'stream'):
return self._delegate_streaming_to_backend(prompt, **kwargs)
else:
# Fallback: execute non-streaming even if stream was requested
return self._execute_backend_sync(prompt, **kwargs)
else:
# Non-streaming execution
return self._execute_backend_sync(prompt, **kwargs)
Comment on lines +281 to +301

⚠️ Potential issue | 🟠 Major

Add an async backend path for `arun()`, `astart()`, and `achat()`.

This helper only serves the new synchronous delegation branches. Async callers still go through the local execution stack, so `backend=` currently works for `run()`/`start()`/`chat()` but not their async counterparts. Please add an async delegate and route the async public APIs through it too. Based on learnings: "Implement both sync and async entry points for user-facing APIs (run() for sync, start() for async); internal APIs should prefer async".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-agents/praisonaiagents/agent/execution_mixin.py` around lines 281-301: The current _delegate_to_backend only handles synchronous delegation,
so async callers (arun, astart, achat) bypass backend; add an async counterpart
(e.g., _delegate_to_backend_async) that mirrors the sync logic but awaits
backend coroutine methods when present: check for async-capable methods on
self.backend (await backend.execute(...) or await backend.astream/... if
streaming), fall back to calling the sync helpers via asyncio.to_thread or
run_in_executor if only sync methods exist, and reuse
_delegate_streaming_to_backend/_execute_backend_sync where appropriate; then
update public async entrypoints arun(), astart(), and achat() to call and await
_delegate_to_backend_async(prompt, **kwargs) so backend= works for both sync and
async paths.
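
A sketch of that async counterpart, assuming only that the backend exposes `execute` (sync or async); none of the names here are verified against the real `ManagedAgentIntegration` API:

import asyncio

async def _delegate_to_backend_async(self, prompt: str, **kwargs):
    execute = getattr(self.backend, "execute", None)
    if execute is None:
        raise RuntimeError(
            f"Backend {type(self.backend).__name__} does not support execute()"
        )
    if asyncio.iscoroutinefunction(execute):
        # Natively async backend: await it on the current loop.
        return await execute(prompt, **kwargs)
    # Sync-only backend: run it in a worker thread so the loop is not blocked.
    return await asyncio.to_thread(execute, prompt, **kwargs)

`arun()`, `astart()`, and `achat()` would then check `self.backend` and await this helper before falling through to their local paths.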


def _execute_backend_sync(self, prompt: str, **kwargs) -> Optional[str]:
"""Execute backend in sync mode, handling async backends."""
try:
# Try to run in existing event loop
loop = asyncio.get_running_loop()
# If we're already in an async context, we can't use asyncio.run()
# Create a new task instead
import concurrent.futures
import threading

def run_async():
new_loop = asyncio.new_event_loop()
asyncio.set_event_loop(new_loop)
try:
return new_loop.run_until_complete(self.backend.execute(prompt, **kwargs))
finally:
new_loop.close()

with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(run_async)
return future.result()

except RuntimeError:
# No event loop running, safe to use asyncio.run()
return asyncio.run(self.backend.execute(prompt, **kwargs))
Comment on lines +305 to +327

⚠️ Potential issue | 🔴 Critical

Narrow the `RuntimeError` fallback before issuing the backend call.

`future.result()` is inside the same `try`. If `self.backend.execute()` raises `RuntimeError`, this code treats it as "no running loop" and retries the request via `asyncio.run(...)`, which can duplicate non-idempotent managed-backend operations like session/message creation.

🛠️ Safer structure
 def _execute_backend_sync(self, prompt: str, **kwargs) -> Optional[str]:
-        try:
-            # Try to run in existing event loop
-            loop = asyncio.get_running_loop()
-            # If we're already in an async context, we can't use asyncio.run()
-            # Create a new task instead
-            import concurrent.futures
-            import threading
+        try:
+            asyncio.get_running_loop()
+        except RuntimeError:
+            return asyncio.run(self.backend.execute(prompt, **kwargs))
+
+        import concurrent.futures
 
-            def run_async():
-                new_loop = asyncio.new_event_loop()
-                asyncio.set_event_loop(new_loop)
-                try:
-                    return new_loop.run_until_complete(self.backend.execute(prompt, **kwargs))
-                finally:
-                    new_loop.close()
-            
-            with concurrent.futures.ThreadPoolExecutor() as executor:
-                future = executor.submit(run_async)
-                return future.result()
-                
-        except RuntimeError:
-            # No event loop running, safe to use asyncio.run()
-            return asyncio.run(self.backend.execute(prompt, **kwargs))
+        def run_async():
+            new_loop = asyncio.new_event_loop()
+            asyncio.set_event_loop(new_loop)
+            try:
+                return new_loop.run_until_complete(self.backend.execute(prompt, **kwargs))
+            finally:
+                new_loop.close()
+
+        with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
+            future = executor.submit(run_async)
+            return future.result()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-agents/praisonaiagents/agent/execution_mixin.py` around lines 305-327: The current try/except around asyncio.get_running_loop() also catches
RuntimeError thrown by self.backend.execute() (via future.result()) and
incorrectly retries the operation with asyncio.run; narrow the RuntimeError
handling so only the call to asyncio.get_running_loop() triggers the fallback.
Concretely, move the try/except to wrap only asyncio.get_running_loop() (or use
a separate check for an active loop), then run the
ThreadPoolExecutor/run_async/future.result() outside that except block so any
RuntimeError raised by self.backend.execute (called from
run_async/future.result()) is propagated instead of causing a duplicate
asyncio.run(self.backend.execute(...)) retry; keep references to run_async,
future.result(), asyncio.get_running_loop, asyncio.run, and self.backend.execute
when making the change.


def _delegate_streaming_to_backend(self, prompt: str, **kwargs):
"""Delegate to backend's streaming method."""
try:
# For streaming, we need to return an iterator/generator
# The backend's stream method is async, so we need to handle that
import asyncio

async def stream_wrapper():
async for chunk in self.backend.stream(prompt, **kwargs):
yield chunk

# Convert async generator to sync generator
def sync_stream():
try:
loop = asyncio.get_running_loop()
# Already in async context - need to handle differently
import concurrent.futures
import threading
import queue

result_queue = queue.Queue()
exception_holder = [None]

def run_in_thread():
new_loop = asyncio.new_event_loop()
asyncio.set_event_loop(new_loop)
try:
async def collect():
try:
async for item in self.backend.stream(prompt, **kwargs):
result_queue.put(('item', item))
result_queue.put(('done', None))
except Exception as e:
exception_holder[0] = e
result_queue.put(('error', e))

new_loop.run_until_complete(collect())
finally:
new_loop.close()

thread = threading.Thread(target=run_in_thread)
thread.start()

while True:
msg_type, data = result_queue.get()
if msg_type == 'item':
# For managed backends, we might get event objects
# Convert to string format expected by Agent
if isinstance(data, dict):
if data.get('type') == 'agent.message':
content = data.get('content', [])
if isinstance(content, list):
text_parts = []
for block in content:
if isinstance(block, dict) and block.get('type') == 'text':
text_parts.append(block.get('text', ''))
elif isinstance(block, str):
text_parts.append(block)
if text_parts:
yield ''.join(text_parts)
elif isinstance(content, str):
yield content
# Skip other event types (session.status_idle, etc.)
elif isinstance(data, str):
yield data
elif msg_type == 'done':
break
elif msg_type == 'error':
raise data

thread.join()

except RuntimeError:
# No event loop - can run directly
async def run_stream():
async for item in self.backend.stream(prompt, **kwargs):
yield item

# Use asyncio.run for each item (not ideal but works)
async_gen = run_stream()

async def collect_all():
results = []
async for item in async_gen:
results.append(item)
return results

results = asyncio.run(collect_all())
for item in results:
Comment on lines +401 to +417

⚠️ Potential issue | 🟠 Major

The sync streaming fallback buffers the whole stream.

When no loop is running (the normal `start(..., stream=True)` case), `collect_all()` waits for the entire backend stream to finish before yielding the first chunk. That turns long-running managed sessions back into non-streaming responses. Please iterate the async generator incrementally on a background loop/thread instead of materializing results first.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-agents/praisonaiagents/agent/execution_mixin.py` around lines 401-417: The current sync fallback creates collect_all() which buffers the
entire async generator (run_stream()/async_gen) via asyncio.run before yielding,
causing streaming to be lost; instead, spawn a background thread/loop that runs
the async generator from self.backend.stream(prompt, **kwargs) and pushes each
item into a thread-safe queue as they arrive, then have the synchronous caller
iterate by popping from that queue and yielding items incrementally; replace the
asyncio.run(collect_all()) + results loop with this producer-consumer pattern
(use the existing run_stream/async_gen concept but run it on a background event
loop and yield items from the queue without materializing all results).
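
A minimal producer-consumer sketch of that approach: the backend stream runs on a private event loop in a background thread, and chunks are yielded as they arrive rather than after the stream completes. The sentinel and error-forwarding details are assumptions, not part of the PR:

import asyncio
import queue
import threading

_DONE = object()  # sentinel marking end of stream

def _iter_backend_stream(self, prompt: str, **kwargs):
    """Yield items from the async backend stream as they are produced."""
    q = queue.Queue()

    def producer():
        loop = asyncio.new_event_loop()
        try:
            async def pump():
                async for item in self.backend.stream(prompt, **kwargs):
                    q.put(item)  # hand each chunk to the consumer immediately
            loop.run_until_complete(pump())
            q.put(_DONE)
        except Exception as exc:
            q.put(exc)  # forward failures to the consumer side
        finally:
            loop.close()

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is _DONE:
            return
        if isinstance(item, Exception):
            raise item
        yield item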

# Similar conversion logic
if isinstance(item, dict):
if item.get('type') == 'agent.message':
content = item.get('content', [])
if isinstance(content, list):
text_parts = []
for block in content:
if isinstance(block, dict) and block.get('type') == 'text':
text_parts.append(block.get('text', ''))
elif isinstance(block, str):
text_parts.append(block)
if text_parts:
yield ''.join(text_parts)
elif isinstance(content, str):
yield content
elif isinstance(item, str):
yield item

return sync_stream()

except Exception as e:
# Fallback to non-streaming
logger.warning(f"Backend streaming failed, falling back to non-streaming: {e}")
return self._execute_backend_sync(prompt, **kwargs)

def _get_planning_agent(self):
"""Lazy load PlanningAgent for planning mode."""
if self._planning_agent is None and self.planning:
@@ -453,6 +619,10 @@ def start(self, prompt: Optional[str] = None, **kwargs: Any) -> Union[str, Gener
from praisonaiagents.utils.variables import substitute_variables
prompt = substitute_variables(prompt, {})

# Check if external managed backend is configured
if hasattr(self, 'backend') and self.backend is not None:
return self._delegate_to_backend(prompt, **kwargs)

# ─────────────────────────────────────────────────────────────────────
# UNIFIED AUTONOMY API: If autonomy is enabled, route to run_autonomous
# This allows: Agent(autonomy=True) + agent.start("Task") to just work!
16 changes: 12 additions & 4 deletions src/praisonai/praisonai/integrations/__init__.py
@@ -1,20 +1,24 @@
"""
-PraisonAI Integrations - External CLI tool integrations.
+PraisonAI Integrations - External CLI tool and managed agent integrations.

-This module provides integrations with external AI coding CLI tools:
+This module provides integrations with external AI coding tools:
- Claude Code CLI
- Gemini CLI
- OpenAI Codex CLI
- Cursor CLI
- Managed Agent Backends (Anthropic Managed Agents API)

All integrations use lazy loading to avoid performance impact.

Usage:
-from praisonai.integrations import ClaudeCodeIntegration, GeminiCLIIntegration
+from praisonai.integrations import ClaudeCodeIntegration, ManagedAgentIntegration

-# Create integration
+# CLI tool integration
claude = ClaudeCodeIntegration(workspace="/path/to/project")

# Managed agent integration
managed = ManagedAgentIntegration(provider="anthropic", api_key="...")

# Use as agent tool
tool = claude.as_tool()

Expand All @@ -29,6 +33,7 @@
'GeminiCLIIntegration',
'CodexCLIIntegration',
'CursorCLIIntegration',
'ManagedAgentIntegration',
'get_available_integrations',
]

@@ -50,6 +55,9 @@ def __getattr__(name):
elif name == 'CursorCLIIntegration':
from .cursor_cli import CursorCLIIntegration
return CursorCLIIntegration
elif name == 'ManagedAgentIntegration':
from .managed_agents import ManagedAgentIntegration
return ManagedAgentIntegration
elif name == 'get_available_integrations':
from .base import get_available_integrations
return get_available_integrations