Describe the bug
When using the Multi-Agent / Supervisor node setup, sending a simple "Hello" causes the Supervisor to call the worker agents repeatedly in a loop until the Max Iterations limit is hit (e.g., 5 times). Even though the first response is sufficient, the Flowise worker keeps executing, consuming excessive API credits.
To Reproduce
1. Set up a Supervisor node with at least one worker agent (e.g., an SQL Agent).
2. Send a prompt like "Hello".
3. The Supervisor correctly identifies the intent but never signals a FINISH or terminal state.
4. The flow logs show the agent being called N times consecutively before the process stops.
Expected behavior
The Supervisor should recognize that a greeting does not require tool usage or further agent collaboration, and should return a Final Answer immediately to end the run.

Actual behavior
The flow enters a recursion loop. The Supervisor appears to treat the task as incomplete because no tool was called, or it fails to output the stop sequence the Flowise multi-agent framework requires to terminate the loop.
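For context on the expected termination logic, here is a minimal sketch of a supervisor loop that checks for a FINISH route before dispatching a worker — the check the buggy flow appears to skip. All names (`routeNext`, `runSupervisor`, `FINISH`) are illustrative and not taken from Flowise's source:

```typescript
// Hypothetical sketch of the termination check a supervisor loop should
// perform. Names are illustrative, not Flowise's actual API.

const MAX_ITERATIONS = 5;
const FINISH = "FINISH";

// Stand-in for the LLM routing call: a greeting, or a turn where a worker
// has already answered, should route to the terminal FINISH state.
function routeNext(input: string, lastWorkerOutput: string | null): string {
  const isGreeting = /^(hello|hi|good (morning|afternoon|evening))\b/i.test(input.trim());
  if (isGreeting || lastWorkerOutput !== null) return FINISH;
  return "sqlAgent";
}

function runSupervisor(input: string): { answer: string; iterations: number } {
  let lastOutput: string | null = null;
  let iterations = 0;
  while (iterations < MAX_ITERATIONS) {
    const next = routeNext(input, lastOutput);
    if (next === FINISH) break; // terminal state: stop calling workers
    iterations++;
    lastOutput = `worker(${next}) response`; // stand-in for a real worker call
  }
  return { answer: lastOutput ?? "Hello! How can I help?", iterations };
}
```

With this check in place, a greeting dispatches zero worker calls; without it (the behavior reported above), the loop only stops when MAX_ITERATIONS is reached.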
Screenshots
Flow
{
"nodes": [
{
"id": "startAgentflow_0",
"type": "agentFlow",
"position": {
"x": -49,
"y": 103
},
"data": {
"id": "startAgentflow_0",
"label": "Start",
"version": 1.1,
"name": "startAgentflow",
"type": "Start",
"color": "#7EE787",
"hideInput": true,
"baseClasses": [
"Start"
],
"category": "Agent Flows",
"description": "Starting point of the agentflow",
"inputParams": [
{
"label": "Input Type",
"name": "startInputType",
"type": "options",
"options": [
{
"label": "Chat Input",
"name": "chatInput",
"description": "Start the conversation with chat input"
},
{
"label": "Form Input",
"name": "formInput",
"description": "Start the workflow with form inputs"
}
],
"default": "chatInput",
"id": "startAgentflow_0-input-startInputType-options",
"display": true
},
{
"label": "Form Title",
"name": "formTitle",
"type": "string",
"placeholder": "Please Fill Out The Form",
"show": {
"startInputType": "formInput"
},
"id": "startAgentflow_0-input-formTitle-string",
"display": false
},
{
"label": "Form Description",
"name": "formDescription",
"type": "string",
"placeholder": "Complete all fields below to continue",
"show": {
"startInputType": "formInput"
},
"id": "startAgentflow_0-input-formDescription-string",
"display": false
},
{
"label": "Form Input Types",
"name": "formInputTypes",
"description": "Specify the type of form input",
"type": "array",
"show": {
"startInputType": "formInput"
},
"array": [
{
"label": "Type",
"name": "type",
"type": "options",
"options": [
{
"label": "String",
"name": "string"
},
{
"label": "Number",
"name": "number"
},
{
"label": "Boolean",
"name": "boolean"
},
{
"label": "Options",
"name": "options"
}
],
"default": "string"
},
{
"label": "Label",
"name": "label",
"type": "string",
"placeholder": "Label for the input"
},
{
"label": "Variable Name",
"name": "name",
"type": "string",
"placeholder": "Variable name for the input (must be camel case)",
"description": "Variable name must be camel case. For example: firstName, lastName, etc."
},
{
"label": "Add Options",
"name": "addOptions",
"type": "array",
"show": {
"formInputTypes[$index].type": "options"
},
"array": [
{
"label": "Option",
"name": "option",
"type": "string"
}
]
}
],
"id": "startAgentflow_0-input-formInputTypes-array",
"display": false
},
{
"label": "Ephemeral Memory",
"name": "startEphemeralMemory",
"type": "boolean",
"description": "Start fresh for every execution without past chat history",
"optional": true,
"id": "startAgentflow_0-input-startEphemeralMemory-boolean",
"display": true
},
{
"label": "Flow State",
"name": "startState",
"description": "Runtime state during the execution of the workflow",
"type": "array",
"optional": true,
"array": [
{
"label": "Key",
"name": "key",
"type": "string",
"placeholder": "Foo"
},
{
"label": "Value",
"name": "value",
"type": "string",
"placeholder": "Bar",
"optional": true
}
],
"id": "startAgentflow_0-input-startState-array",
"display": true
},
{
"label": "Persist State",
"name": "startPersistState",
"type": "boolean",
"description": "Persist the state in the same session",
"optional": true,
"id": "startAgentflow_0-input-startPersistState-boolean",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"startInputType": "chatInput",
"formTitle": "",
"formDescription": "",
"formInputTypes": "",
"startEphemeralMemory": "",
"startState": "",
"startPersistState": ""
},
"outputAnchors": [
{
"id": "startAgentflow_0-output-startAgentflow",
"label": "Start",
"name": "startAgentflow"
}
],
"outputs": {},
"selected": false
},
"width": 103,
"height": 66,
"selected": false,
"positionAbsolute": {
"x": -49,
"y": 103
},
"dragging": false
},
{
"id": "conditionAgentAgentflow_0",
"position": {
"x": 125.25,
"y": 87.75
},
"data": {
"id": "conditionAgentAgentflow_0",
"label": "Router",
"version": 1.1,
"name": "conditionAgentAgentflow",
"type": "ConditionAgent",
"color": "#ff8fab",
"baseClasses": [
"ConditionAgent"
],
"category": "Agent Flows",
"description": "Utilize an agent to split flows based on dynamic conditions",
"inputParams": [
{
"label": "Model",
"name": "conditionAgentModel",
"type": "asyncOptions",
"loadMethod": "listModels",
"loadConfig": true,
"id": "conditionAgentAgentflow_0-input-conditionAgentModel-asyncOptions",
"display": true
},
{
"label": "Instructions",
"name": "conditionAgentInstructions",
"type": "string",
"description": "A general instructions of what the condition agent should do",
"rows": 4,
"acceptVariable": true,
"placeholder": "Determine if the user is interested in learning about AI",
"id": "conditionAgentAgentflow_0-input-conditionAgentInstructions-string",
"display": true
},
{
"label": "Input",
"name": "conditionAgentInput",
"type": "string",
"description": "Input to be used for the condition agent",
"rows": 4,
"acceptVariable": true,
"default": "
<span class="variable" data-type="mention" data-id="question" data-label="question">{{ question }}
","id": "conditionAgentAgentflow_0-input-conditionAgentInput-string",
"display": true
},
{
"label": "Scenarios",
"name": "conditionAgentScenarios",
"description": "Define the scenarios that will be used as the conditions to split the flow",
"type": "array",
"array": [
{
"label": "Scenario",
"name": "scenario",
"type": "string",
"placeholder": "User is asking for a pizza"
}
],
"default": [
{
"scenario": "GREETING: The user is initiating a conversation (e.g., "Hello," "Hi," "Good morning") without asking a specific question."
},
{
"scenario": "FTAP_INQUIRY: The user is asking about our infrastructure, tower counts (~5,000), BTS sites (800+), our parent company (Pinnacle/KKR), DICT licensing, or our role in the Philippine telecom economy."
}
],
"id": "conditionAgentAgentflow_0-input-conditionAgentScenarios-array",
"display": true
},
{
"label": "Override System Prompt",
"name": "conditionAgentOverrideSystemPrompt",
"type": "boolean",
"description": "Override initial system prompt for Condition Agent",
"optional": true,
"id": "conditionAgentAgentflow_0-input-conditionAgentOverrideSystemPrompt-boolean",
"display": true
},
{
"label": "Node System Prompt",
"name": "conditionAgentSystemPrompt",
"type": "string",
"rows": 4,
"optional": true,
"acceptVariable": true,
"default": "
You are part of a multi-agent system designed to make agent coordination and execution easy. Your task is to analyze the given input and select one matching scenario from a provided set of scenarios.
\n- \n
- Input: A string representing the user's query, message or data. \n
- Scenarios: A list of predefined scenarios that relate to the input. \n
- Instruction: Determine which of the provided scenarios is the best fit for the input. \n
Steps
\n- \n
- Read the input string and the list of scenarios. \n
- Analyze the content of the input to identify its main topic or intention. \n
- Compare the input with each scenario: Evaluate how well the input's topic or intention aligns with each of the provided scenarios and select the one that is the best fit. \n
- Output the result: Return the selected scenario in the specified JSON format. \n
Output Format
\nOutput should be a JSON object that names the selected scenario, like this: {"output": "<selected_scenario_name>"}. No explanation is needed.
Examples
\n- \n
- \n
Input:
\n{"input": "Hello", "scenarios": ["user is asking about AI", "user is not asking about AI"], "instruction": "Your task is to check if the user is asking about AI."}Output:
\n{"output": "user is not asking about AI"}\n - \n
Input:
\n{"input": "What is AIGC?", "scenarios": ["user is asking about AI", "user is asking about the weather"], "instruction": "Your task is to check and see if the user is asking a topic about AI."}Output:
\n{"output": "user is asking about AI"}\n - \n
Input:
\n{"input": "Can you explain deep learning?", "scenarios": ["user is interested in AI topics", "user wants to order food"], "instruction": "Determine if the user is interested in learning about AI."}Output:
\n{"output": "user is interested in AI topics"}\n
Note
\n- \n
- Ensure that the input scenarios align well with potential user queries for accurate matching. \n
- DO NOT include anything other than the JSON in your response. \n
"description": "Expert use only. Modifying this can significantly alter agent behavior. Leave default if unsure",
"show": {
"conditionAgentOverrideSystemPrompt": true
},
"id": "conditionAgentAgentflow_0-input-conditionAgentSystemPrompt-string",
"display": false
}
],
"inputAnchors": [],
"inputs": {
"conditionAgentModel": "chatOpenAI",
"conditionAgentInstructions": "
Role: You are the Lead Dispatcher for the Frontier Tower Associates Philippines (FTAP) AI Assistant. Your goal is to classify user intent into one of three categories to ensure the user receives the correct response.
Context: FTAP is the Philippines’ largest cell tower infrastructure company (~5,000 towers). We are a subsidiary of Pinnacle Towers (backed by KKR) and licensed by the DICT to build and operate telecom infrastructure, including built-to-suit (BTS) sites.
","conditionAgentInput": "
<span class="variable" data-type="mention" data-id="question" data-label="question">{{ question }}
","conditionAgentScenarios": [
{
"scenario": "GREETING: The user is initiating a conversation (e.g., "Hello," "Hi," "Good morning") without asking a specific question."
},
{
"scenario": "FTAP_INQUIRY: The user is asking about our infrastructure, tower counts (~5,000), BTS sites (800+), our parent company (Pinnacle/KKR), DICT licensing, or our role in the Philippine telecom economy."
},
{
"scenario": "OUT_OF_SCOPE: The user is asking about topics unrelated to FTAP, such as personal tech support, general Philippine news, or unrelated industries."
}
],
"conditionAgentOverrideSystemPrompt": "",
"conditionAgentModelConfig": {
"cache": "",
"modelName": "gpt-4o-mini",
"temperature": "0.2",
"streaming": false,
"maxTokens": "500",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"strictToolCalling": "",
"stopSequence": "",
"basepath": "",
"proxyUrl": "",
"baseOptions": "",
"allowImageUploads": "",
"reasoning": "",
"conditionAgentModel": "chatOpenAI"
}
},
"outputAnchors": [
{
"id": "conditionAgentAgentflow_0-output-0",
"label": 0,
"name": 0,
"description": "Condition 0"
},
{
"id": "conditionAgentAgentflow_0-output-1",
"label": 1,
"name": 1,
"description": "Condition 1"
},
{
"id": "conditionAgentAgentflow_0-output-2",
"label": 2,
"name": 2,
"description": "Condition 2"
}
],
"outputs": {
"conditionAgentAgentflow": ""
},
"selected": false
},
"type": "agentFlow",
"width": 175,
"height": 100,
"selected": false,
"dragging": false,
"positionAbsolute": {
"x": 125.25,
"y": 87.75
}
},
{
"id": "directReplyAgentflow_0",
"position": {
"x": 375.313348426668,
"y": 2.763441650205607
},
"data": {
"id": "directReplyAgentflow_0",
"label": "Greeting",
"version": 1,
"name": "directReplyAgentflow",
"type": "DirectReply",
"color": "#4DDBBB",
"hideOutput": true,
"baseClasses": [
"DirectReply"
],
"category": "Agent Flows",
"description": "Directly reply to the user with a message",
"inputParams": [
{
"label": "Message",
"name": "directReplyMessage",
"type": "string",
"rows": 4,
"acceptVariable": true,
"id": "directReplyAgentflow_0-input-directReplyMessage-string",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"directReplyMessage": "
Hello! Welcome to Frontier Tower Associates Philippines (FTAP). As the leading provider of telecommunications infrastructure in the country, we’re here to help with questions about our tower network and connectivity solutions. How can we assist you today?
","undefined": ""
},
"outputAnchors": [],
"outputs": {},
"selected": false
},
"type": "agentFlow",
"width": 128,
"height": 66,
"selected": false,
"positionAbsolute": {
"x": 375.313348426668,
"y": 2.763441650205607
},
"dragging": false
},
{
"id": "directReplyAgentflow_1",
"position": {
"x": 376.38667820862844,
"y": 210.34661961097626
},
"data": {
"id": "directReplyAgentflow_1",
"label": "Out of scope",
"version": 1,
"name": "directReplyAgentflow",
"type": "DirectReply",
"color": "#4DDBBB",
"hideOutput": true,
"baseClasses": [
"DirectReply"
],
"category": "Agent Flows",
"description": "Directly reply to the user with a message",
"inputParams": [
{
"label": "Message",
"name": "directReplyMessage",
"type": "string",
"rows": 4,
"acceptVariable": true,
"id": "directReplyAgentflow_1-input-directReplyMessage-string",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"directReplyMessage": "
Thank you for reaching out. Currently, I am only able to assist with inquiries related to Frontier Tower Associates Philippines (FTAP), such as our tower infrastructure, BTS sites, or our role in the Philippine telecom industry.
Unfortunately, I cannot provide information on your question. Is there anything regarding our 5,000+ tower network I can help you with instead?
"},
"outputAnchors": [],
"outputs": {},
"selected": false
},
"type": "agentFlow",
"width": 155,
"height": 66,
"selected": false,
"positionAbsolute": {
"x": 376.38667820862844,
"y": 210.34661961097626
},
"dragging": false
},
{
"id": "conditionAgentAgentflow_1",
"position": {
"x": 376.75039952944684,
"y": 95.12330314666401
},
"data": {
"id": "conditionAgentAgentflow_1",
"label": "Data Retrieval Classifier",
"version": 1.1,
"name": "conditionAgentAgentflow",
"type": "ConditionAgent",
"color": "#ff8fab",
"baseClasses": [
"ConditionAgent"
],
"category": "Agent Flows",
"description": "Utilize an agent to split flows based on dynamic conditions",
"inputParams": [
{
"label": "Model",
"name": "conditionAgentModel",
"type": "asyncOptions",
"loadMethod": "listModels",
"loadConfig": true,
"id": "conditionAgentAgentflow_1-input-conditionAgentModel-asyncOptions",
"display": true
},
{
"label": "Instructions",
"name": "conditionAgentInstructions",
"type": "string",
"description": "A general instructions of what the condition agent should do",
"rows": 4,
"acceptVariable": true,
"placeholder": "Determine if the user is interested in learning about AI",
"id": "conditionAgentAgentflow_1-input-conditionAgentInstructions-string",
"display": true
},
{
"label": "Input",
"name": "conditionAgentInput",
"type": "string",
"description": "Input to be used for the condition agent",
"rows": 4,
"acceptVariable": true,
"default": "
<span class="variable" data-type="mention" data-id="question" data-label="question">{{ question }}
","id": "conditionAgentAgentflow_1-input-conditionAgentInput-string",
"display": true
},
{
"label": "Scenarios",
"name": "conditionAgentScenarios",
"description": "Define the scenarios that will be used as the conditions to split the flow",
"type": "array",
"array": [
{
"label": "Scenario",
"name": "scenario",
"type": "string",
"placeholder": "User is asking for a pizza"
}
],
"default": [
{
"scenario": "DB_QUERY: Requires specific, granular, or real-time data."
},
{
"scenario": "STATIC_INFO: General company background or industry facts already in the context."
}
],
"id": "conditionAgentAgentflow_1-input-conditionAgentScenarios-array",
"display": true
},
{
"label": "Override System Prompt",
"name": "conditionAgentOverrideSystemPrompt",
"type": "boolean",
"description": "Override initial system prompt for Condition Agent",
"optional": true,
"id": "conditionAgentAgentflow_1-input-conditionAgentOverrideSystemPrompt-boolean",
"display": true
},
{
"label": "Node System Prompt",
"name": "conditionAgentSystemPrompt",
"type": "string",
"rows": 4,
"optional": true,
"acceptVariable": true,
"default": "
You are part of a multi-agent system designed to make agent coordination and execution easy. Your task is to analyze the given input and select one matching scenario from a provided set of scenarios.
\n- \n
- Input: A string representing the user's query, message or data. \n
- Scenarios: A list of predefined scenarios that relate to the input. \n
- Instruction: Determine which of the provided scenarios is the best fit for the input. \n
Steps
\n- \n
- Read the input string and the list of scenarios. \n
- Analyze the content of the input to identify its main topic or intention. \n
- Compare the input with each scenario: Evaluate how well the input's topic or intention aligns with each of the provided scenarios and select the one that is the best fit. \n
- Output the result: Return the selected scenario in the specified JSON format. \n
Output Format
\nOutput should be a JSON object that names the selected scenario, like this: {"output": "<selected_scenario_name>"}. No explanation is needed.
Examples
\n- \n
- \n
Input:
\n{"input": "Hello", "scenarios": ["user is asking about AI", "user is not asking about AI"], "instruction": "Your task is to check if the user is asking about AI."}Output:
\n{"output": "user is not asking about AI"}\n - \n
Input:
\n{"input": "What is AIGC?", "scenarios": ["user is asking about AI", "user is asking about the weather"], "instruction": "Your task is to check and see if the user is asking a topic about AI."}Output:
\n{"output": "user is asking about AI"}\n - \n
Input:
\n{"input": "Can you explain deep learning?", "scenarios": ["user is interested in AI topics", "user wants to order food"], "instruction": "Determine if the user is interested in learning about AI."}Output:
\n{"output": "user is interested in AI topics"}\n
Note
\n- \n
- Ensure that the input scenarios align well with potential user queries for accurate matching. \n
- DO NOT include anything other than the JSON in your response. \n
"description": "Expert use only. Modifying this can significantly alter agent behavior. Leave default if unsure",
"show": {
"conditionAgentOverrideSystemPrompt": true
},
"id": "conditionAgentAgentflow_1-input-conditionAgentSystemPrompt-string",
"display": false
}
],
"inputAnchors": [],
"inputs": {
"conditionAgentModel": "chatOpenAI",
"conditionAgentInstructions": "
Determine if user inquiry requires a Database (DB) Query or a Static Response.
","conditionAgentInput": "
<span class="variable" data-type="mention" data-id="question" data-label="question">{{ question }}
","conditionAgentScenarios": [
{
"scenario": "DB_QUERY: Requires specific, granular, or real-time data."
},
{
"scenario": "STATIC_INFO: General company background or industry facts already in the context."
}
],
"conditionAgentOverrideSystemPrompt": false,
"undefined": "",
"conditionAgentModelConfig": {
"cache": "",
"modelName": "gpt-4o-mini",
"temperature": "0.1",
"streaming": true,
"maxTokens": "500",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"strictToolCalling": "",
"stopSequence": "",
"basepath": "",
"proxyUrl": "",
"baseOptions": "",
"allowImageUploads": "",
"imageResolution": "low",
"reasoning": "",
"reasoningEffort": "",
"reasoningSummary": "",
"conditionAgentModel": "chatOpenAI"
}
},
"outputAnchors": [
{
"id": "conditionAgentAgentflow_1-output-0",
"label": 0,
"name": 0,
"description": "Condition 0"
},
{
"id": "conditionAgentAgentflow_1-output-1",
"label": 1,
"name": 1,
"description": "Condition 1"
}
],
"outputs": {
"conditionAgentAgentflow": ""
},
"selected": false
},
"type": "agentFlow",
"width": 227,
"height": 80,
"positionAbsolute": {
"x": 376.75039952944684,
"y": 95.12330314666401
},
"selected": false,
"dragging": false
},
{
"id": "agentAgentflow_0",
"position": {
"x": 685.5499733647034,
"y": 44.9433723984348
},
"data": {
"id": "agentAgentflow_0",
"label": "SQL Agent",
"version": 2.2,
"name": "agentAgentflow",
"type": "Agent",
"color": "#4DD0E1",
"baseClasses": [
"Agent"
],
"category": "Agent Flows",
"description": "Dynamically choose and utilize tools during runtime, enabling multi-step reasoning",
"inputParams": [
{
"label": "Model",
"name": "agentModel",
"type": "asyncOptions",
"loadMethod": "listModels",
"loadConfig": true,
"id": "agentAgentflow_0-input-agentModel-asyncOptions",
"display": true
},
{
"label": "Messages",
"name": "agentMessages",
"type": "array",
"optional": true,
"acceptVariable": true,
"array": [
{
"label": "Role",
"name": "role",
"type": "options",
"options": [
{
"label": "System",
"name": "system"
},
{
"label": "Assistant",
"name": "assistant"
},
{
"label": "Developer",
"name": "developer"
},
{
"label": "User",
"name": "user"
}
]
},
{
"label": "Content",
"name": "content",
"type": "string",
"acceptVariable": true,
"generateInstruction": true,
"rows": 4
}
],
"id": "agentAgentflow_0-input-agentMessages-array",
"display": true
},
{
"label": "OpenAI Built-in Tools",
"name": "agentToolsBuiltInOpenAI",
"type": "multiOptions",
"optional": true,
"options": [
{
"label": "Web Search",
"name": "web_search_preview",
"description": "Search the web for the latest information"
},
{
"label": "Code Interpreter",
"name": "code_interpreter",
"description": "Write and run Python code in a sandboxed environment"
},
{
"label": "Image Generation",
"name": "image_generation",
"description": "Generate images based on a text prompt"
}
],
"show": {
"agentModel": "chatOpenAI"
},
"id": "agentAgentflow_0-input-agentToolsBuiltInOpenAI-multiOptions",
"display": true
},
{
"label": "Gemini Built-in Tools",
"name": "agentToolsBuiltInGemini",
"type": "multiOptions",
"optional": true,
"options": [
{
"label": "URL Context",
"name": "urlContext",
"description": "Extract content from given URLs"
},
{
"label": "Google Search",
"name": "googleSearch",
"description": "Search real-time web content"
}
],
"show": {
"agentModel": "chatGoogleGenerativeAI"
},
"id": "agentAgentflow_0-input-agentToolsBuiltInGemini-multiOptions",
"display": false
},
{
"label": "Anthropic Built-in Tools",
"name": "agentToolsBuiltInAnthropic",
"type": "multiOptions",
"optional": true,
"options": [
{
"label": "Web Search",
"name": "web_search_20250305",
"description": "Search the web for the latest information"
},
{
"label": "Web Fetch",
"name": "web_fetch_20250910",
"description": "Retrieve full content from specified web pages"
}
],
"show": {
"agentModel": "chatAnthropic"
},
"id": "agentAgentflow_0-input-agentToolsBuiltInAnthropic-multiOptions",
"display": false
},
{
"label": "Tools",
"name": "agentTools",
"type": "array",
"optional": true,
"array": [
{
"label": "Tool",
"name": "agentSelectedTool",
"type": "asyncOptions",
"loadMethod": "listTools",
"loadConfig": true
},
{
"label": "Require Human Input",
"name": "agentSelectedToolRequiresHumanInput",
"type": "boolean",
"optional": true
}
],
"id": "agentAgentflow_0-input-agentTools-array",
"display": true
},
{
"label": "Knowledge (Document Stores)",
"name": "agentKnowledgeDocumentStores",
"type": "array",
"description": "Give your agent context about different document sources. Document stores must be upserted in advance.",
"array": [
{
"label": "Document Store",
"name": "documentStore",
"type": "asyncOptions",
"loadMethod": "listStores"
},
{
"label": "Describe Knowledge",
"name": "docStoreDescription",
"type": "string",
"generateDocStoreDescription": true,
"placeholder": "Describe what the knowledge base is about, this is useful for the AI to know when and how to search for correct information",
"rows": 4
},
{
"label": "Return Source Documents",
"name": "returnSourceDocuments",
"type": "boolean",
"optional": true
}
],
"optional": true,
"id": "agentAgentflow_0-input-agentKnowledgeDocumentStores-array",
"display": true
},
{
"label": "Knowledge (Vector Embeddings)",
"name": "agentKnowledgeVSEmbeddings",
"type": "array",
"description": "Give your agent context about different document sources from existing vector stores and embeddings",
"array": [
{
"label": "Vector Store",
"name": "vectorStore",
"type": "asyncOptions",
"loadMethod": "listVectorStores",
"loadConfig": true
},
{
"label": "Embedding Model",
"name": "embeddingModel",
"type": "asyncOptions",
"loadMethod": "listEmbeddings",
"loadConfig": true
},
{
"label": "Knowledge Name",
"name": "knowledgeName",
"type": "string",
"placeholder": "A short name for the knowledge base, this is useful for the AI to know when and how to search for correct information"
},
{
"label": "Describe Knowledge",
"name": "knowledgeDescription",
"type": "string",
"placeholder": "Describe what the knowledge base is about, this is useful for the AI to know when and how to search for correct information",
"rows": 4
},
{
"label": "Return Source Documents",
"name": "returnSourceDocuments",
"type": "boolean",
"optional": true
}
],
"optional": true,
"id": "agentAgentflow_0-input-agentKnowledgeVSEmbeddings-array",
"display": true
},
{
"label": "Enable Memory",
"name": "agentEnableMemory",
"type": "boolean",
"description": "Enable memory for the conversation thread",
"default": true,
"optional": true,
"id": "agentAgentflow_0-input-agentEnableMemory-boolean",
"display": true
},
{
"label": "Memory Type",
"name": "agentMemoryType",
"type": "options",
"options": [
{
"label": "All Messages",
"name": "allMessages",
"description": "Retrieve all messages from the conversation"
},
{
"label": "Window Size",
"name": "windowSize",
"description": "Uses a fixed window size to surface the last N messages"
},
{
"label": "Conversation Summary",
"name": "conversationSummary",
"description": "Summarizes the whole conversation"
},
{
"label": "Conversation Summary Buffer",
"name": "conversationSummaryBuffer",
"description": "Summarize conversations once token limit is reached. Default to 2000"
}
],
"optional": true,
"default": "allMessages",
"show": {
"agentEnableMemory": true
},
"id": "agentAgentflow_0-input-agentMemoryType-options",
"display": true
},
{
"label": "Window Size",
"name": "agentMemoryWindowSize",
"type": "number",
"default": "20",
"description": "Uses a fixed window size to surface the last N messages",
"show": {
"agentMemoryType": "windowSize"
},
"id": "agentAgentflow_0-input-agentMemoryWindowSize-number",
"display": false
},
{
"label": "Max Token Limit",
"name": "agentMemoryMaxTokenLimit",
"type": "number",
"default": "2000",
"description": "Summarize conversations once token limit is reached. Default to 2000",
"show": {
"agentMemoryType": "conversationSummaryBuffer"
},
"id": "agentAgentflow_0-input-agentMemoryMaxTokenLimit-number",
"display": false
},
{
"label": "Input Message",
"name": "agentUserMessage",
"type": "string",
"description": "Add an input message as user message at the end of the conversation",
"rows": 4,
"optional": true,
"acceptVariable": true,
"show": {
"agentEnableMemory": true
},
"id": "agentAgentflow_0-input-agentUserMessage-string",
"display": true
},
{
"label": "Return Response As",
"name": "agentReturnResponseAs",
"type": "options",
"options": [
{
"label": "User Message",
"name": "userMessage"
},
{
"label": "Assistant Message",
"name": "assistantMessage"
}
],
"default": "userMessage",
"id": "agentAgentflow_0-input-agentReturnResponseAs-options",
"display": true
},
{
"label": "Update Flow State",
"name": "agentUpdateState",
"description": "Update runtime state during the execution of the workflow",
"type": "array",
"optional": true,
"acceptVariable": true,
"array": [
{
"label": "Key",
"name": "key",
"type": "asyncOptions",
"loadMethod": "listRuntimeStateKeys",
"freeSolo": true
},
{
"label": "Value",
"name": "value",
"type": "string",
"acceptVariable": true,
"acceptNodeOutputAsVariable": true
}
],
"id": "agentAgentflow_0-input-agentUpdateState-array",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"agentModel": "chatOpenAI",
"agentMessages": [
{
"role": "system",
"content": "
Role: SQL Generator for Frontier Tower Associates Philippines (FTAP). Task: Convert user questions into SQL for FTAP_Tower_DB.
Schema: Table towers: tower_id, site_name, region, province, city, latitude, longitude, height_m, status (Active, Under Construction, Planned). Table site_types: tower_id, is_bts (Boolean), activation_date, tenant_count.
Rules:
Use specific column names. Do not use SELECT ALL.
Return only the SQL code.
Only SELECT operations allowed.
For BTS or Built-to-Suit, filter by is_bts = TRUE.
Use LIKE %text% for city or region searches.
Example: User: How many active towers in Cebu? SQL: SELECT COUNT(tower_id) FROM towers WHERE city LIKE '%Cebu%' AND status = 'Active';
"}
],
"agentToolsBuiltInOpenAI": "",
"agentTools": "",
"agentKnowledgeDocumentStores": "",
"agentKnowledgeVSEmbeddings": "",
"agentEnableMemory": true,
"agentMemoryType": "allMessages",
"agentUserMessage": "",
"agentReturnResponseAs": "userMessage",
"agentUpdateState": "",
"agentModelConfig": {
"cache": "",
"modelName": "gpt-4o-mini",
"temperature": "0.4",
"streaming": true,
"maxTokens": "1000",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"strictToolCalling": "",
"stopSequence": "",
"basepath": "",
"proxyUrl": "",
"baseOptions": "",
"allowImageUploads": "",
"imageResolution": "low",
"reasoning": "",
"reasoningEffort": "",
"reasoningSummary": "",
"agentModel": "chatOpenAI"
}
},
"outputAnchors": [
{
"id": "agentAgentflow_0-output-agentAgentflow",
"label": "Agent",
"name": "agentAgentflow"
}
],
"outputs": {},
"selected": false
},
"type": "agentFlow",
"width": 175,
"height": 72,
"positionAbsolute": {
"x": 685.5499733647034,
"y": 44.9433723984348
},
"selected": false,
"dragging": false
},
{
"id": "llmAgentflow_0",
"position": {
"x": 688.1233031466639,
"y": 142.08657166744263
},
"data": {
"id": "llmAgentflow_0",
"label": "LLM Assistant",
"version": 1,
"name": "llmAgentflow",
"type": "LLM",
"color": "#64B5F6",
"baseClasses": [
"LLM"
],
"category": "Agent Flows",
"description": "Large language models to analyze user-provided inputs and generate responses",
"inputParams": [
{
"label": "Model",
"name": "llmModel",
"type": "asyncOptions",
"loadMethod": "listModels",
"loadConfig": true,
"id": "llmAgentflow_0-input-llmModel-asyncOptions",
"display": true
},
{
"label": "Messages",
"name": "llmMessages",
"type": "array",
"optional": true,
"acceptVariable": true,
"array": [
{
"label": "Role",
"name": "role",
"type": "options",
"options": [
{
"label": "System",
"name": "system"
},
{
"label": "Assistant",
"name": "assistant"
},
{
"label": "Developer",
"name": "developer"
},
{
"label": "User",
"name": "user"
}
]
},
{
"label": "Content",
"name": "content",
"type": "string",
"acceptVariable": true,
"generateInstruction": true,
"rows": 4
}
],
"id": "llmAgentflow_0-input-llmMessages-array",
"display": true
},
{
"label": "Enable Memory",
"name": "llmEnableMemory",
"type": "boolean",
"description": "Enable memory for the conversation thread",
"default": true,
"optional": true,
"id": "llmAgentflow_0-input-llmEnableMemory-boolean",
"display": true
},
{
"label": "Memory Type",
"name": "llmMemoryType",
"type": "options",
"options": [
{
"label": "All Messages",
"name": "allMessages",
"description": "Retrieve all messages from the conversation"
},
{
"label": "Window Size",
"name": "windowSize",
"description": "Uses a fixed window size to surface the last N messages"
},
{
"label": "Conversation Summary",
"name": "conversationSummary",
"description": "Summarizes the whole conversation"
},
{
"label": "Conversation Summary Buffer",
"name": "conversationSummaryBuffer",
"description": "Summarize conversations once token limit is reached. Default to 2000"
}
],
"optional": true,
"default": "allMessages",
"show": {
"llmEnableMemory": true
},
"id": "llmAgentflow_0-input-llmMemoryType-options",
"display": true
},
{
"label": "Window Size",
"name": "llmMemoryWindowSize",
"type": "number",
"default": "20",
"description": "Uses a fixed window size to surface the last N messages",
"show": {
"llmMemoryType": "windowSize"
},
"id": "llmAgentflow_0-input-llmMemoryWindowSize-number",
"display": false
},
{
"label": "Max Token Limit",
"name": "llmMemoryMaxTokenLimit",
"type": "number",
"default": "2000",
"description": "Summarize conversations once token limit is reached. Default to 2000",
"show": {
"llmMemoryType": "conversationSummaryBuffer"
},
"id": "llmAgentflow_0-input-llmMemoryMaxTokenLimit-number",
"display": false
},
{
"label": "Input Message",
"name": "llmUserMessage",
"type": "string",
"description": "Add an input message as user message at the end of the conversation",
"rows": 4,
"optional": true,
"acceptVariable": true,
"show": {
"llmEnableMemory": true
},
"id": "llmAgentflow_0-input-llmUserMessage-string",
"display": true
},
{
"label": "Return Response As",
"name": "llmReturnResponseAs",
"type": "options",
"options": [
{
"label": "User Message",
"name": "userMessage"
},
{
"label": "Assistant Message",
"name": "assistantMessage"
}
],
"default": "userMessage",
"id": "llmAgentflow_0-input-llmReturnResponseAs-options",
"display": true
},
{
"label": "JSON Structured Output",
"name": "llmStructuredOutput",
"description": "Instruct the LLM to give output in a JSON structured schema",
"type": "array",
"optional": true,
"acceptVariable": true,
"array": [
{
"label": "Key",
"name": "key",
"type": "string"
},
{
"label": "Type",
"name": "type",
"type": "options",
"options": [
{
"label": "String",
"name": "string"
},
{
"label": "String Array",
"name": "stringArray"
},
{
"label": "Number",
"name": "number"
},
{
"label": "Boolean",
"name": "boolean"
},
{
"label": "Enum",
"name": "enum"
},
{
"label": "JSON Array",
"name": "jsonArray"
}
]
},
{
"label": "Enum Values",
"name": "enumValues",
"type": "string",
"placeholder": "value1, value2, value3",
"description": "Enum values. Separated by comma",
"optional": true,
"show": {
"llmStructuredOutput[$index].type": "enum"
}
},
{
"label": "JSON Schema",
"name": "jsonSchema",
"type": "code",
"placeholder": "{\n "answer": {\n "type": "string",\n "description": "Value of the answer"\n },\n "reason": {\n "type": "string",\n "description": "Reason for the answer"\n },\n "optional": {\n "type": "boolean"\n },\n "count": {\n "type": "number"\n },\n "children": {\n "type": "array",\n "items": {\n "type": "object",\n "properties": {\n "value": {\n "type": "string",\n "description": "Value of the children's answer"\n }\n }\n }\n }\n}",
"description": "JSON schema for the structured output",
"optional": true,
"hideCodeExecute": true,
"show": {
"llmStructuredOutput[$index].type": "jsonArray"
}
},
{
"label": "Description",
"name": "description",
"type": "string",
"placeholder": "Description of the key"
}
],
"id": "llmAgentflow_0-input-llmStructuredOutput-array",
"display": true
},
{
"label": "Update Flow State",
"name": "llmUpdateState",
"description": "Update runtime state during the execution of the workflow",
"type": "array",
"optional": true,
"acceptVariable": true,
"array": [
{
"label": "Key",
"name": "key",
"type": "asyncOptions",
"loadMethod": "listRuntimeStateKeys",
"freeSolo": true
},
{
"label": "Value",
"name": "value",
"type": "string",
"acceptVariable": true,
"acceptNodeOutputAsVariable": true
}
],
"id": "llmAgentflow_0-input-llmUpdateState-array",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"llmModel": "chatOpenAI",
"llmMessages": "",
"llmEnableMemory": true,
"llmMemoryType": "allMessages",
"llmUserMessage": "
Answer the <span class="variable" data-type="mention" data-id="question" data-label="question">{{ question }}
","llmReturnResponseAs": "userMessage",
"llmStructuredOutput": "",
"llmUpdateState": "",
"llmModelConfig": {
"cache": "",
"modelName": "gpt-4o-mini",
"temperature": "0.2",
"streaming": true,
"maxTokens": "1000",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"strictToolCalling": "",
"stopSequence": "",
"basepath": "",
"proxyUrl": "",
"baseOptions": "",
"allowImageUploads": "",
"imageResolution": "low",
"reasoning": "",
"reasoningEffort": "",
"reasoningSummary": "",
"llmModel": "chatOpenAI"
}
},
"outputAnchors": [
{
"id": "llmAgentflow_0-output-llmAgentflow",
"label": "LLM",
"name": "llmAgentflow"
}
],
"outputs": {},
"selected": false
},
"type": "agentFlow",
"width": 175,
"height": 72,
"positionAbsolute": {
"x": 688.1233031466639,
"y": 142.08657166744263
},
"selected": false,
"dragging": false
},
{
"id": "directReplyAgentflow_2",
"position": {
"x": 916.779152469303,
"y": 135.81767726812754
},
"data": {
"id": "directReplyAgentflow_2",
"label": "Answer",
"version": 1,
"name": "directReplyAgentflow",
"type": "DirectReply",
"color": "#4DDBBB",
"hideOutput": true,
"baseClasses": [
"DirectReply"
],
"category": "Agent Flows",
"description": "Directly reply to the user with a message",
"inputParams": [
{
"label": "Message",
"name": "directReplyMessage",
"type": "string",
"rows": 4,
"acceptVariable": true,
"id": "directReplyAgentflow_2-input-directReplyMessage-string",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"directReplyMessage": "
<span class="variable" data-type="mention" data-id="llmAgentflow_0" data-label="llmAgentflow_0">{{ llmAgentflow_0 }}
Runtime: <span class="variable" data-type="mention" data-id="runtime_messages_length" data-label="runtime_messages_length">{{ runtime_messages_length }}
"},
"outputAnchors": [],
"outputs": {},
"selected": false
},
"type": "agentFlow",
"width": 121,
"height": 66,
"positionAbsolute": {
"x": 916.779152469303,
"y": 135.81767726812754
},
"selected": false,
"dragging": false
},
{
"id": "directReplyAgentflow_3",
"position": {
"x": 938.0533177633255,
"y": 13.347482467403543
},
"data": {
"id": "directReplyAgentflow_3",
"label": "SQL Query",
"version": 1,
"name": "directReplyAgentflow",
"type": "DirectReply",
"color": "#4DDBBB",
"hideOutput": true,
"baseClasses": [
"DirectReply"
],
"category": "Agent Flows",
"description": "Directly reply to the user with a message",
"inputParams": [
{
"label": "Message",
"name": "directReplyMessage",
"type": "string",
"rows": 4,
"acceptVariable": true,
"id": "directReplyAgentflow_3-input-directReplyMessage-string",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"directReplyMessage": "
<span class="variable" data-type="mention" data-id="agentAgentflow_0" data-label="agentAgentflow_0">{{ agentAgentflow_0 }}
Runtime:
<span class="variable" data-type="mention" data-id="runtime_messages_length" data-label="runtime_messages_length">{{ runtime_messages_length }}
Loop Count:
<span class="variable" data-type="mention" data-id="loop_count" data-label="loop_count">{{ loop_count }}
},
"outputAnchors": [],
"outputs": {},
"selected": false
},
"type": "agentFlow",
"width": 142,
"height": 66,
"selected": false,
"dragging": false,
"positionAbsolute": {
"x": 938.0533177633255,
"y": 13.347482467403543
}
}
],
"edges": [
{
"source": "startAgentflow_0",
"sourceHandle": "startAgentflow_0-output-startAgentflow",
"target": "conditionAgentAgentflow_0",
"targetHandle": "conditionAgentAgentflow_0",
"data": {
"sourceColor": "#7EE787",
"targetColor": "#ff8fab",
"isHumanInput": false
},
"type": "agentFlow",
"id": "startAgentflow_0-startAgentflow_0-output-startAgentflow-conditionAgentAgentflow_0-conditionAgentAgentflow_0"
},
{
"source": "conditionAgentAgentflow_0",
"sourceHandle": "conditionAgentAgentflow_0-output-2",
"target": "directReplyAgentflow_1",
"targetHandle": "directReplyAgentflow_1",
"data": {
"sourceColor": "#ff8fab",
"targetColor": "#4DDBBB",
"edgeLabel": "2",
"isHumanInput": false
},
"type": "agentFlow",
"id": "conditionAgentAgentflow_0-conditionAgentAgentflow_0-output-2-directReplyAgentflow_1-directReplyAgentflow_1"
},
{
"source": "conditionAgentAgentflow_0",
"sourceHandle": "conditionAgentAgentflow_0-output-1",
"target": "conditionAgentAgentflow_1",
"targetHandle": "conditionAgentAgentflow_1",
"data": {
"sourceColor": "#ff8fab",
"targetColor": "#ff8fab",
"edgeLabel": "1",
"isHumanInput": false
},
"type": "agentFlow",
"id": "conditionAgentAgentflow_0-conditionAgentAgentflow_0-output-1-conditionAgentAgentflow_1-conditionAgentAgentflow_1"
},
{
"source": "conditionAgentAgentflow_0",
"sourceHandle": "conditionAgentAgentflow_0-output-0",
"target": "directReplyAgentflow_0",
"targetHandle": "directReplyAgentflow_0",
"data": {
"sourceColor": "#ff8fab",
"targetColor": "#4DDBBB",
"edgeLabel": "0",
"isHumanInput": false
},
"type": "agentFlow",
"id": "conditionAgentAgentflow_0-conditionAgentAgentflow_0-output-0-directReplyAgentflow_0-directReplyAgentflow_0"
},
{
"source": "conditionAgentAgentflow_1",
"sourceHandle": "conditionAgentAgentflow_1-output-0",
"target": "agentAgentflow_0",
"targetHandle": "agentAgentflow_0",
"data": {
"sourceColor": "#ff8fab",
"targetColor": "#4DD0E1",
"edgeLabel": "0",
"isHumanInput": false
},
"type": "agentFlow",
"id": "conditionAgentAgentflow_1-conditionAgentAgentflow_1-output-0-agentAgentflow_0-agentAgentflow_0"
},
{
"source": "conditionAgentAgentflow_1",
"sourceHandle": "conditionAgentAgentflow_1-output-1",
"target": "llmAgentflow_0",
"targetHandle": "llmAgentflow_0",
"data": {
"sourceColor": "#ff8fab",
"targetColor": "#64B5F6",
"edgeLabel": "1",
"isHumanInput": false
},
"type": "agentFlow",
"id": "conditionAgentAgentflow_1-conditionAgentAgentflow_1-output-1-llmAgentflow_0-llmAgentflow_0"
},
{
"source": "agentAgentflow_0",
"sourceHandle": "agentAgentflow_0-output-agentAgentflow",
"target": "directReplyAgentflow_3",
"targetHandle": "directReplyAgentflow_3",
"data": {
"sourceColor": "#4DD0E1",
"targetColor": "#4DDBBB",
"isHumanInput": false
},
"type": "agentFlow",
"id": "agentAgentflow_0-agentAgentflow_0-output-agentAgentflow-directReplyAgentflow_3-directReplyAgentflow_3"
},
{
"source": "llmAgentflow_0",
"sourceHandle": "llmAgentflow_0-output-llmAgentflow",
"target": "directReplyAgentflow_2",
"targetHandle": "directReplyAgentflow_2",
"data": {
"sourceColor": "#64B5F6",
"targetColor": "#4DDBBB",
"isHumanInput": false
},
"type": "agentFlow",
"id": "llmAgentflow_0-llmAgentflow_0-output-llmAgentflow-directReplyAgentflow_2-directReplyAgentflow_2"
}
]
}
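Note that a few string fields in the paste above (e.g. `llmUserMessage` and the `directReplyMessage` values) lost their JSON escaping when copied out of the UI, so the export as shown will not parse. A minimal sketch of a pre-import sanity check, assuming the export is saved to a file (a small inline fragment stands in for the real export here):

```python
import json

# Round-trip the exported agentflow through json.loads to confirm it is
# intact before re-importing: corrupted escaping (raw newlines, unescaped
# quotes inside string values) raises a JSONDecodeError immediately.
# "flow_fragment" is a hypothetical, trimmed-down stand-in for the export.
flow_fragment = """
{
  "nodes": [
    {"id": "llmAgentflow_0", "data": {"type": "LLM"}},
    {"id": "directReplyAgentflow_2", "data": {"type": "DirectReply"}}
  ],
  "edges": [
    {"source": "llmAgentflow_0", "target": "directReplyAgentflow_2"}
  ]
}
"""

flow = json.loads(flow_fragment)  # raises json.JSONDecodeError if corrupted
node_ids = {n["id"] for n in flow["nodes"]}

# Every edge should reference node ids that actually exist in the export;
# dangling edges are another common symptom of a hand-edited flow file.
dangling = [e for e in flow["edges"]
            if e["source"] not in node_ids or e["target"] not in node_ids]

print(len(flow["nodes"]), len(flow["edges"]), len(dangling))  # → 2 1 0
```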
Use Method: Docker
Flowise Version: @flowise 3.0.12
Operating System: Windows
Browser: Chrome
Additional context: No response