Description
When using an LLMGuardrail as the guardrail for a task, the workflow fails to execute as expected. Instead of validating the output, the process repeatedly throws the error 'function' object is not iterable and enters an infinite retry loop. This happens even when following the official documentation and using the correct parameters. As a result, the task never completes and no meaningful validation or feedback is provided.
This issue blocks the use of LLM-based guardrails for output validation and quality assurance.
Environment
- Provider (select one):
- PraisonAI version:
- Operating System:
Full Code
from praisonaiagents import Agent, Task, LLMGuardrail, PraisonAIAgents
import trafilatura

guardrail = LLMGuardrail(
    description="""
    Check if the output:
    1. is in bullet points
    2. is in English
    3. clearly states the main points
    4. has the name of the author
    5. has the title of the article
    """,
    llm="gemini/gemini-2.5-flash-lite-preview-06-17"
)

def get_url_context(url):
    downloaded = trafilatura.fetch_url(url)
    if not downloaded:
        return "Sorry, I couldn't fetch the content from that URL."
    extracted = trafilatura.extract(
        downloaded,
        include_comments=False,
        include_links=True,
        output_format='json',
        with_metadata=True,
        url=url
    )
    if not extracted:
        return "Sorry, I couldn't extract readable content from that page."
    return extracted  # returns a JSON string with 'text', 'title', 'author', 'date', etc.

agent = Agent(
    instructions="You are a helpful assistant",
    llm="gemini/gemini-2.5-flash-lite-preview-06-17",
    self_reflect=False,
    verbose=True,
    tools=get_url_context
)

task = Task(
    name="summarise article",
    description="get the context of this url: https://blog.google/technology/ai/dolphingemma/ and produce a summary below 500 characters",
    agent=agent,
    guardrail=guardrail,
    expected_output="summary of the article below 500 characters"
)

agents = PraisonAIAgents(
    agents=[agent],
    tasks=[task]
)

agents.start()
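For context on what the message means: a bare Python function object raises exactly this TypeError when something tries to iterate over it, which is what happens if a single callable ends up where the framework expects a list (note that `tools=get_url_context` above passes a bare function rather than `tools=[get_url_context]`; I have not confirmed which form PraisonAI requires, so this is only a hypothesis about the failure mechanism). A minimal, library-independent sketch:

```python
# Iterating over a function object (instead of a list of functions)
# raises the exact error text seen in the logs.
def my_tool(url):
    return url

try:
    list(my_tool)  # simulates code that iterates over the tools argument
except TypeError as e:
    print(e)  # prints: 'function' object is not iterable
```

If the hypothesis holds, wrapping the callable in a list (`tools=[get_url_context]`) would sidestep the error, though the framework should arguably raise a clearer message either way.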
Steps to Reproduce
1. Install the library
2. Copy the script above
3. Run the script
Expected Behavior
Using an LLMGuardrail as a task guardrail should validate the output according to the provided criteria, as described in the documentation.
The workflow should not throw a 'function' object is not iterable error.
The process should complete, either accepting or rejecting the output based on the guardrail, and provide a clear, actionable error message if validation fails.
Actual Behavior
When using an LLMGuardrail as the guardrail parameter in a Task, running the workflow results in repeated errors:
"Error in get_response: 'function' object is not iterable"
"Error in LLM chat: 'function' object is not iterable"
The process enters an infinite retry loop and never completes the task.
This occurs even when following the official documentation for LLM guardrails.
Additional Context
[01:57:40] INFO llm.py:593 Getting response from gemini/gemini-2.5-flash-lite-preview-06-17
╭───────────────────────── Error ──────────────────────────╮
│ Error in get_response: 'function' object is not iterable │
╰──────────────────────────────────────────────────────────╯
╭─────────────────────── Error ────────────────────────╮
│ Error in LLM chat: 'function' object is not iterable │
╰──────────────────────────────────────────────────────╯
[01:57:41] INFO llm.py:593 Getting response from gemini/gemini-2.5-flash-lite-preview-06-17
╭───────────────────────── Error ──────────────────────────╮
│ Error in get_response: 'function' object is not iterable │
╰──────────────────────────────────────────────────────────╯
╭─────────────────────── Error ────────────────────────╮
│ Error in LLM chat: 'function' object is not iterable │
╰──────────────────────────────────────────────────────╯
[01:57:42] INFO llm.py:593 Getting response from gemini/gemini-2.5-flash-lite-preview-06-17
╭───────────────────────── Error ──────────────────────────╮
│ Error in get_response: 'function' object is not iterable │
╰──────────────────────────────────────────────────────────╯
╭─────────────────────── Error ────────────────────────╮
│ Error in LLM chat: 'function' object is not iterable │
╰──────────────────────────────────────────────────────╯
^CTraceback (most recent call last):