fix al workflow #26
Changes from 3 commits: `3ac24a4`, `7c56ed7`, `e869bed`, `331555c`, `228c23b`
```diff
@@ -5,16 +5,13 @@
 from app.agents.nodes import (
     agent_host,
     context_builder,
-    fallback_final,
-    fallback_inicial,
+    fallback,
     generator,
     guard,
     parafraseo,
     retriever,
 )
-from app.agents.routing import route_after_fallback_final, route_after_guard
+from app.agents.routing import route_after_guard
 from app.agents.state import AgentState


 def create_agent_graph() -> StateGraph:
     """
```
```diff
@@ -23,13 +20,13 @@ def create_agent_graph() -> StateGraph:
     The graph implements the following flow:
     1. START -> agent_host (Nodo 1)
     2. agent_host -> guard (Nodo 2)
-    3. guard -> [conditional] -> fallback_inicial (Nodo 3) or END
-    4. fallback_inicial -> parafraseo (Nodo 4)
+    3. guard -> [conditional] -> fallback (Nodo 3) or END
+    4. fallback -> parafraseo (Nodo 4)
     5. parafraseo -> retriever (Nodo 5)
     6. retriever -> context_builder (Nodo 6)
     7. context_builder -> generator (Nodo 7)
-    8. generator -> fallback_final (Nodo 8)
-    9. fallback_final -> [conditional] -> END (with final_response) or END (with error)
+    8. generator -> fallback (Nodo 8)
+    9. fallback -> [conditional] -> END (with final_response) or END (with error)

     Returns:
         Configured StateGraph instance ready for execution
```
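The numbered flow in the docstring can be traced as a plain routing table before any LangGraph wiring. A hedged sketch: node names come from the docstring, but `route_after_guard` here is a stand-in predicate and the `EDGES` dict is an illustration, not the PR's actual code.

```python
def route_after_guard(state: dict) -> str:
    # Stand-in for the real routing function: malicious prompts divert to fallback.
    return "malicious" if state.get("is_malicious") else "continue"

# Static edges (steps 2 and 4-7 of the docstring flow).
EDGES = {
    "agent_host": "guard",
    "parafraseo": "retriever",
    "retriever": "context_builder",
    "context_builder": "generator",
}

# Conditional branches out of "guard" (step 3).
GUARD_BRANCHES = {"malicious": "fallback", "continue": "parafraseo"}

def walk(state: dict) -> list:
    """Trace the node sequence a request would take through the sketched graph."""
    path, node = [], "agent_host"
    while node not in ("fallback", "generator"):
        path.append(node)
        node = GUARD_BRANCHES[route_after_guard(state)] if node == "guard" else EDGES[node]
    path.append(node)
    return path

print(walk({"is_malicious": True}))   # diverts to fallback right after guard
print(walk({"is_malicious": False}))  # full pipeline down to generator
```

Tracing both branches this way makes the reviewers' later point concrete: the routing out of `guard` is a single table, so a second definition for the same node cannot coexist with the first.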
```diff
@@ -40,12 +37,10 @@ def create_agent_graph() -> StateGraph:
     # Add nodes
     workflow.add_node("agent_host", agent_host)
     workflow.add_node("guard", guard)
-    workflow.add_node("fallback_inicial", fallback_inicial)
+    workflow.add_node("fallback", fallback)
     workflow.add_node("parafraseo", parafraseo)
     workflow.add_node("retriever", retriever)
     workflow.add_node("context_builder", context_builder)
     workflow.add_node("generator", generator)
-    workflow.add_node("fallback_final", fallback_final)

     # Define edges
     # Start -> agent_host
```
```diff
@@ -59,37 +54,29 @@ def create_agent_graph() -> StateGraph:
         "guard",
         route_after_guard,
         {
-            "malicious": END,  # End with error if malicious
-            "continue": "fallback_inicial",  # Continue to fallback_inicial if valid
+            "malicious": "fallback",  # go to fallback if malicious
+            "continue": "parafraseo",  # Continue to parafraseo if valid
         },
     )

-    # fallback_inicial -> parafraseo
-    workflow.add_edge("fallback_inicial", "parafraseo")
-
     # parafraseo -> retriever
     workflow.add_edge("parafraseo", "retriever")

     # retriever -> context_builder
     workflow.add_edge("retriever", "context_builder")

-    # context_builder -> generator
-    # Note: Primary LLM is called within context_builder node
-    workflow.add_edge("context_builder", "generator")
+    # context_builder -> guard
+    workflow.add_edge("context_builder", "guard")

-    # generator -> fallback_final
-    workflow.add_edge("generator", "fallback_final")
-
-    # fallback_final -> conditional routing
+    # guard -> conditional routing
     workflow.add_conditional_edges(
-        "fallback_final",
-        route_after_fallback_final,
+        "guard",
+        route_after_guard,
         {
-            "risky": END,  # End with error if risky
-            "continue": END,  # End with final_response if valid
-            # Note: Final LLM is called within fallback_final node
+            "malicious": "fallback",  # go to fallback if malicious
+            "continue": END,  # if there's no error ends
```

Suggested change:

```diff
-            "continue": END,  # if there's no error ends
+            "continue": END,  # End if no error is detected
```
**Copilot AI** (Dec 13, 2025): This is a duplicate conditional edge definition for the "guard" node. Lines 58-65 already define conditional edges for "guard" with route_after_guard. In LangGraph, adding a second conditional edge to the same node will overwrite the first one, meaning the routing defined at lines 58-65 will be ignored and only this second definition will be active. This creates a logical error where the workflow cannot reach the parafraseo node at all, since the first guard check is effectively removed.
**Copilot AI** (Dec 14, 2025): The graph has two conditional edges from the 'guard' node (lines 53-60 and 72-79), but the workflow uses the same guard node for both inicial and final validation. This creates conflicting routing logic. The second guard should be a separate node called 'guard_final' to validate the generated response for PII, as indicated by the guard_final.py file that was created but never integrated into the graph.
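The reviewers' proposal of a separate `guard_final` node can be sketched as one branch table per node, so neither conditional definition collides with the other. A pure-Python simulation under stated assumptions: the `route_after_guard_final` predicate and the `"END"` sentinel are illustrative, not the PR's actual code.

```python
def route_after_guard(state: dict) -> str:
    # Input-side check: flags malicious prompts.
    return "malicious" if state.get("is_malicious") else "continue"

def route_after_guard_final(state: dict) -> str:
    # Output-side check: flags risky/PII content in the generated response.
    return "risky" if state.get("is_risky") else "continue"

# One (router, branch-table) pair per node -- no second definition
# overwrites the first, unlike two add_conditional_edges on "guard".
BRANCHES = {
    "guard": (route_after_guard, {"malicious": "fallback", "continue": "parafraseo"}),
    "guard_final": (route_after_guard_final, {"risky": "fallback", "continue": "END"}),
}

def next_node(current: str, state: dict) -> str:
    router, table = BRANCHES[current]
    return table[router(state)]

print(next_node("guard", {"is_malicious": False}))  # proceeds to parafraseo
print(next_node("guard_final", {"is_risky": True}))  # diverts to fallback
```

Keeping the two checks on distinctly named nodes means each `add_conditional_edges` call has its own source, which is the fix both review comments converge on.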
```diff
@@ -2,20 +2,16 @@
 from app.agents.nodes.agent_host import agent_host
 from app.agents.nodes.context_builder import context_builder
-from app.agents.nodes.fallback_final import fallback_final
-from app.agents.nodes.fallback_inicial import fallback_inicial
 from app.agents.nodes.generator import generator
+from app.agents.nodes.fallback import fallback
 from app.agents.nodes.guard import guard
 from app.agents.nodes.parafraseo import parafraseo
 from app.agents.nodes.retriever import retriever

 __all__ = [
     "agent_host",
     "guard",
-    "fallback_inicial",
+    "fallback",
     "parafraseo",
     "retriever",
     "context_builder",
     "generator",
-    "fallback_final",
 ]
```
```diff
@@ -26,6 +26,9 @@ def agent_host(state: AgentState) -> AgentState:
     # Placeholder: For now, we'll just store the prompt as initial context
     updated_state = state.copy()
-    updated_state["initial_context"] = state.get("prompt", "")
+    initial_message = state["messages"][-1]
```

Suggested change:

```diff
-    initial_message = state["messages"][-1]
+    initial_message = state["messages"][-1] if state["messages"] else None
```
**Copilot AI** (Dec 14, 2025): The function returns updated_state which is a copy of the entire state. However, most other nodes in the workflow return partial state updates (just a dict with changed fields). For consistency with MessagesState patterns and other nodes like context_builder and fallback, this should return a dict with just the updated fields: {'initial_context': initial_message.content if initial_message else ''}.
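The partial-update pattern this comment recommends can be shown without LangGraph at all: a node returns only the fields it changed, and the framework merges them into the shared state. A hedged sketch with dict-style messages and a simplified `merge` (LangGraph performs the merge per channel with its own reducers):

```python
def agent_host(state: dict) -> dict:
    # Tolerate a missing or empty "messages" list.
    messages = state.get("messages", [])
    initial_message = messages[-1] if messages else None
    # Return only the changed field instead of copying the whole state.
    return {"initial_context": initial_message["content"] if initial_message else ""}

def merge(state: dict, update: dict) -> dict:
    # Simplified stand-in for the framework's state merge.
    return {**state, **update}

state = {"messages": [{"role": "user", "content": "hello"}]}
state = merge(state, agent_host(state))
print(state["initial_context"])  # hello
```

Returning partial updates keeps each node's contract narrow, which is why the reviewer flags the full `state.copy()` as inconsistent with the other nodes.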
```diff
@@ -1,6 +1,10 @@
 """Nodo 6: Context Builder - Enriches query with retrieved context."""

 from app.agents.state import AgentState
+from langchain_core.messages import SystemMessage
+from langchain_openai import ChatOpenAI
+
+llm = ChatOpenAI(model="gpt-5-nano")


 def context_builder(state: AgentState) -> AgentState:
```

*NicoWagner2005 marked this conversation as resolved.*

*NicoWagner2005 marked this conversation as resolved.*
```diff
@@ -31,13 +35,18 @@ def context_builder(state: AgentState) -> AgentState:
     paraphrased = state.get("paraphrased_text", "")
     chunks = state.get("relevant_chunks", [])

-    # Build enriched query
-    context_section = "\n\n".join(chunks) if chunks else ""
-    enriched_query = f"{paraphrased}\n\nContext:\n{context_section}" if context_section else paraphrased
-    updated_state["enriched_query"] = enriched_query
+    # Build enriched query with context
+    context_section = "\n\n".join(chunks) if chunks else "No relevant context found."
+
+    system_content = f"""You are a helpful assistant. Use the following context to answer the user's question.
+If the answer is not in the context, say you don't know.
+
+Context:
+{context_section}"""
+
+    messages = [SystemMessage(content=system_content)] + state["messages"]
```
**Contributor** (CodeRabbit): Use defensive access for `state["messages"]`. Direct dictionary access will raise a `KeyError` if the key is missing.

```diff
-    messages = [SystemMessage(content=system_content)] + state["messages"]
+    messages = [SystemMessage(content=system_content)] + state.get("messages", [])
```
```diff
-    # TODO: Call Primary LLM here
-    # updated_state["primary_response"] = call_primary_llm(enriched_query)
-    updated_state["primary_response"] = None
+    # Call Primary LLM
+    response = llm.invoke(messages)

-    return updated_state
+    return {"messages": [response]}
```
New file: `RAGManager/app/agents/nodes/fallback.py`

```diff
@@ -0,0 +1,42 @@
+"""Nodo 3: Fallback Inicial - Initial fallback processing."""
```
Suggested change:

```diff
-"""Nodo 3: Fallback Inicial - Initial fallback processing."""
+"""Nodo 3: Fallback General - Handles fallback processing for both initial and final validation failures."""
```
**Copilot AI** (Dec 14, 2025): The module docstring incorrectly states "Nodo 3: Fallback Inicial" when this node is now a unified fallback that can be called from different points in the workflow (both after initial guard and after context_builder). The docstring should be updated to reflect the new unified purpose. Suggested change:

```diff
-"""Nodo 3: Fallback Inicial - Initial fallback processing."""
+"""Unified fallback node - Handles fallback processing from multiple workflow points.
+
+This node can be invoked after the initial guard or after the context builder,
+providing a consistent fallback mechanism across the workflow.
+"""
```
**Contributor** (CodeRabbit): Invalid model name `gpt-5-nano`. Same issue as in parafraseo.py - this model doesn't exist and will cause a runtime error.

```diff
 llm = ChatOpenAI(
-    model="gpt-5-nano",
+    model="gpt-4o-mini",
 )
```

🤖 Prompt for AI Agents: RAGManager/app/agents/nodes/fallback.py around lines 11-13: the ChatOpenAI instantiation uses an invalid model name "gpt-5-nano" which will raise at runtime; change it to a valid model (e.g., "gpt-4o-mini" or your project's configured default/ENV model variable) and make it consistent with parafraseo.py (use the same valid model or centralize the model name into a config/env var and reference that here).
**Copilot AI** (Dec 14, 2025): The LLM is initialized at module level, which means it will be created immediately when the module is imported, even if the fallback function is never called. This could waste resources during startup or testing. Consider using lazy initialization with a function like _get_llm() similar to the pattern used in guard_inicial.py and guard_final.py.
**Contributor** (CodeRabbit, 🧹 Nitpick, 🔵 Trivial): Track the TODO for class-based refactor. This comment indicates planned work to refactor into a class with proper initialization. Would you like me to open an issue to track this refactor, or help implement a class-based node structure?

🤖 Prompt for AI Agents: RAGManager/app/agents/nodes/fallback.py around line 15: there's a TODO noting to implement a fallback node as a class and initialize the LLM in __init__; replace the inline/TODO approach by creating a FallbackNode class that accepts configuration (e.g., llm settings, logger) in its constructor, initialize the LLM client/adapter in __init__ (handling errors and retries), expose a clear public method (e.g., handle(input) or run(context)) to perform fallback logic, and update any call sites to instantiate and use this class instead of procedural code; include docstring and simple unit tests to validate initialization and fallback behavior.
**Copilot AI** (Dec 14, 2025): Comment contains "TO DO" which should be written as "TODO" (one word, no space) for consistency with standard TODO comment conventions. Suggested change:

```diff
-# TO DO: implementar clase nodo fallback y inicializar el llm en el init
+# TODO: implementar clase nodo fallback y inicializar el llm en el init
```
**Contributor** (CodeRabbit): Docstring and return value inconsistency. The docstring states it returns error_message, but the function returns {"messages": [error_message]}. Additionally, the AgentState schema has an error_message field that isn't being set. Consider updating the return to set error_message for downstream nodes that may rely on it.

```diff
     error_message = llm.invoke(messages)
-    return {"messages": [error_message]}
+    return {"messages": [error_message], "error_message": error_message.content}
```

Also applies to: 41-41

🤖 Prompt for AI Agents: In RAGManager/app/agents/nodes/fallback.py around lines 27-28 (and also at line 41), the docstring claims the function returns error_message but the function actually returns {"messages": [error_message]} and doesn't set AgentState.error_message; update the function to (1) set the AgentState.error_message field to the error string before returning, (2) return a structure that matches the docstring (or update the docstring to reflect returning both error_message and messages), and (3) ensure downstream callers expect AgentState.error_message — i.e., assign state.error_message = error_message and return either {"error_message": error_message, "messages": [error_message]} or change the docstring to match the current return shape consistently at both locations.
**Copilot AI** (Dec 14, 2025): The docstring states the function returns "error_message", but the actual return statement returns a dictionary with a "messages" key containing the LLM response. The docstring should accurately describe the return value format to match the implementation. Suggested change:

```diff
-    2. Generates an error_message from llm to show the user
-
-    Args:
-        state: Agent state containing the prompt or initial context
-
-    Returns:
-        error_message
+    2. Generates an error message from llm to show the user
+
+    Args:
+        state: Agent state containing the prompt or initial context
+
+    Returns:
+        dict: A dictionary with a "messages" key containing a list with the generated error message from the LLM.
```
**Copilot AI** (Dec 14, 2025): The message says 'Defensive check triggered' but this is now a fallback handler, not a defensive check. Also, since this node handles both malicious prompts and risky responses, the message should be more generic. Suggested change:

```diff
-        "Defensive check triggered: Malicious prompt detected"
+        "Fallback handler triggered: Malicious prompt or risky response detected."
```
**Copilot AI** (Dec 14, 2025): The error message prompt says 'explaining the database doesn't have the information' which is inaccurate. This fallback is triggered by malicious/risky content detection, not by missing information in the database. The prompt should instruct the LLM to generate a message explaining that the request cannot be processed due to security or safety concerns. Suggested change:

```diff
-            content="Your job is to generate an error message in user's language for the user explaining the database doesn't have the information to respond what the user asked"
+            content="Your job is to generate an error message in the user's language for the user, explaining that their request cannot be processed due to security or safety concerns."
```
**Contributor** (CodeRabbit): Use defensive access for state["messages"]. Direct dictionary access state["messages"] will raise KeyError if the key is missing. Use .get() with a default for consistency with other state access patterns.

```diff
     messages = [
         SystemMessage(content=system_message_content)
-    ] + state["messages"]
+    ] + state.get("messages", [])
```

🤖 Prompt for AI Agents: In RAGManager/app/agents/nodes/fallback.py around lines 70-72, the code accesses state["messages"] directly which can raise KeyError; replace that direct access with defensive access using state.get("messages", []) (or wrap with list(...) / ensure it's a list) so the messages concatenation becomes stable and consistent with other state access patterns.
**Contributor** (CodeRabbit): System prompt is misleading for malicious input handling. The prompt instructs the LLM to say "the database doesn't have the information," but this node handles malicious prompts. This misleads users about why their request failed. Consider a prompt that politely declines without revealing detection logic.

```diff
     messages = [
         SystemMessage(
-            content="Your job is to generate an error message in user's language for the user explaining the database doesn't have the information to respond what the user asked"
+            content="Your job is to generate a polite error message in the user's language explaining that you cannot process this request. Do not reveal that it was flagged as malicious."
         )
     ] + state["messages"]
```

🤖 Prompt for AI Agents: In RAGManager/app/agents/nodes/fallback.py around lines 35-40, the SystemMessage currently instructs the LLM to claim "the database doesn't have the information" which is misleading for malicious or disallowed inputs; change the system prompt to instead instruct the model to politely decline disallowed requests without revealing detection logic (e.g., apologize, state it cannot assist with that request, offer safe alternatives or resources), ensure the refusal is phrased in the user's language, and keep the rest of the message flow intact so the node returns a polite, non-revealing decline for malicious prompts.
**Contributor** (CodeRabbit, 🛠️ Refactor suggestion, 🟠 Major): Add error handling for LLM invocation in fallback node. The llm.invoke() call has no error handling. Since this is the fallback node (a last-resort error handler), if the LLM call fails due to network issues, API errors, or the invalid model name, the entire fallback will crash. Consider wrapping the invocation in try/except and returning a hardcoded fallback message on failure.

```diff
     messages = [
         SystemMessage(content=system_message_content)
     ] + state["messages"]
-    error_message = llm.invoke(messages)
+    try:
+        error_message = llm.invoke(messages)
+    except Exception as e:
+        logger.error(f"LLM invocation failed in fallback node: {e}")
+        from langchain_core.messages import AIMessage
+        error_message = AIMessage(
+            content="I'm sorry, but I cannot process your request at this time. Please try again later."
+        )
+
     return {"messages": [error_message]}
```

🤖 Prompt for AI Agents: In RAGManager/app/agents/nodes/fallback.py around lines 62-67, the llm.invoke(messages) call is unprotected and can raise (network/API/model) errors causing the fallback node to crash; wrap the llm.invoke call in a try/except that catches Exception, log the exception (or attach a minimal safe error string), and return a deterministic fallback response (e.g. a single assistant/SystemMessage with a hardcoded apology/temporary-fallback text) in the same {"messages": [...]} shape so the pipeline continues even if the LLM call fails.
**Copilot AI** (Dec 14, 2025): The TODO comment suggests implementing a class for the fallback node and initializing the LLM in init. This is a valid concern since the current module-level LLM initialization is inconsistent with the lazy initialization pattern used in guard_inicial.py and guard_final.py. Consider addressing this TODO before merging, as it impacts the maintainability and consistency of the codebase. Suggested change (replace the module-level `llm` and the `fallback` function with a class):

```python
class FallbackNode:
    """
    Fallback node - Performs fallback processing.

    This node:
    1. Alerts about malicious prompt or PII detection
    2. Generates an error_message from llm to show the user
    """

    def __init__(self):
        self.llm = ChatOpenAI(
            model="gpt-5-nano",
        )

    def __call__(self, state: AgentState) -> AgentState:
        """
        Args:
            state: Agent state containing the prompt or initial context

        Returns:
            error_message
        """
        # Check for PII/Risky content (from guard_final)
        if state.get("is_risky"):
            logger.warning(
                "Defensive check triggered: PII/Risky content detected in response"
            )
            system_message_content = (
                "Your job is to generate an error message in user's language explaining "
                "that the response cannot be provided because it contains sensitive or private information."
            )
        # Check for Malicious prompt (from guard_inicial)
        elif state.get("is_malicious"):
            logger.warning(
                "Defensive check triggered: Malicious prompt detected"
            )
            system_message_content = (
                "Your job is to generate an error message in user's language for the user "
                "explaining the database doesn't have the information to answer the user's question"
            )
        # Generic Fallback (neither risky nor malicious)
        else:
            logger.info(
                "Fallback triggered: Generic fallback (no risky/malicious flag)"
            )
            system_message_content = (
                "Your job is to generate an error message in user's language for the user "
                "explaining the database doesn't have the information to answer the user's question"
            )

        messages = [
            SystemMessage(content=system_message_content)
        ] + state["messages"]
        error_message = self.llm.invoke(messages)
        return {"messages": [error_message]}
```
This file was deleted.
This file was deleted.
This file was deleted.
```diff
@@ -37,7 +37,8 @@ def guard(state: AgentState) -> AgentState:
         Updated state with is_malicious and error_message set
     """
     updated_state = state.copy()
-    prompt = state.get("prompt", "")
+    last_message = state["messages"][-1]
```

Suggested change:

```diff
-    last_message = state["messages"][-1]
+    messages = state.get("messages")
+    last_message = messages[-1] if isinstance(messages, list) and messages else None
```
**Contributor** (CodeRabbit, 🧹 Nitpick, 🔵 Trivial): Good shift to messages-based prompt + length-only logging; harden messages type and exception logging. If state["messages"] can be non-list/None, treat it as empty. Also log exceptions with stack trace for ops.

```diff
     updated_state = state.copy()
-    messages = state.get("messages", [])
+    messages = state.get("messages")
+    if not isinstance(messages, list):
+        messages = []
     last_message = messages[-1] if messages else None
     prompt = last_message.content if last_message else ""
@@
-    logger.error(f"Error during jailbreak detection: {e}")
+    logger.exception("Error during jailbreak detection")
```

Also applies to: 67-75
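The same defensive-access pattern is requested in guard, agent_host, context_builder, and fallback; it could be consolidated into one helper. A hedged sketch (the helper name and dict-style messages are assumptions, not code from the PR):

```python
def last_message_content(state: dict) -> str:
    """Return the content of the last message, tolerating a missing,
    None, or empty "messages" key."""
    messages = state.get("messages")
    if not isinstance(messages, list) or not messages:
        return ""
    last = messages[-1]
    # Works for dict-style messages and for objects with a .content attribute.
    return last.get("content", "") if isinstance(last, dict) else getattr(last, "content", "")

print(last_message_content({}))                                 # empty string
print(last_message_content({"messages": [{"content": "hi"}]}))  # hi
```

Centralizing the check means each node body stays focused on its own logic, and the `KeyError`/`IndexError` hardening is tested in one place.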
```diff
@@ -1,6 +1,10 @@
 """Nodo 4: Parafraseo - Paraphrases user input."""

 from app.agents.state import AgentState
+from langchain_core.messages import SystemMessage
+from langchain_openai import ChatOpenAI
+
+llm = ChatOpenAI(model="gpt-5-nano")
```
**Contributor** (CodeRabbit): Invalid model name `gpt-5-nano`.

```diff
-llm = ChatOpenAI(model="gpt-5-nano")
+llm = ChatOpenAI(model="gpt-4o-mini")
```
```diff
@@ -24,9 +28,16 @@ def parafraseo(state: AgentState) -> AgentState:
     # 2. Improve clarity, adjust tone, or format as needed
     # 3. Set paraphrased_text with the result

-    # Placeholder: For now, we'll use the adjusted_text as-is
-    updated_state = state.copy()
-    text_to_paraphrase = state.get("adjusted_text") or state.get("prompt", "")
-    updated_state["paraphrased_text"] = text_to_paraphrase
+    # Paraphrase the last message using history
+    system_instruction = """You are an expert at paraphrasing user questions to be standalone and clear, given the conversation history.
+Reformulate the last user message to be a self-contained query that includes necessary context from previous messages.
+Do not answer the question, just rewrite it."""
+
+    messages = [SystemMessage(content=system_instruction)] + state["messages"]
+
+    response = llm.invoke(messages)
+    updated_state = state.copy()  # Create a copy of the state to update
+    updated_state["paraphrased_text"] = response.content
```
**Contributor** (CodeRabbit, 🧹 Nitpick, 🔵 Trivial, on lines +39 to +41): Add error handling for LLM invocation.

```diff
     return updated_state
```

Suggested change:

```diff
-    return updated_state
+    return {"paraphrased_text": response.content}
```