Step 1: The user sends a request to the AI application, and the request traffic enters the traffic gateway (cloud-native API gateway).
Step 2: The cloud-native API gateway maintains the APIs and routing rules for the different types of AI Agents and forwards the user request to the corresponding AI Agent.
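As a minimal sketch of what the step-2 routing amounts to (the route table and names below are hypothetical, not any particular gateway's API):

```python
# The gateway keeps per-agent routing rules and picks a backend by path prefix.
# AGENT_ROUTES and the backend URLs are made up for illustration.

AGENT_ROUTES = {
    "/agents/travel":  "http://travel-agent.internal:8080",
    "/agents/finance": "http://finance-agent.internal:8080",
}

def route_request(path: str) -> str:
    """Return the AI Agent backend that should handle this request path."""
    for prefix, backend in AGENT_ROUTES.items():
        if path.startswith(prefix):
            return backend
    raise LookupError(f"no AI Agent route matches {path!r}")

print(route_request("/agents/travel/plan"))  # -> http://travel-agent.internal:8080
```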
Step 3: However the AI Agent is implemented, whenever one of its nodes needs to fetch data, it asks the MCP gateway (cloud-native API gateway) for the available MCP Server and MCP Tool information.
Step 4: Because the MCP gateway may maintain a large amount of MCP information, the LLM can be used to narrow the MCP candidate set and reduce token consumption, so the gateway sends a request through the AI gateway (cloud-native API gateway) to interact with the LLM. (This step is optional.)
Step 5: The MCP gateway returns the scoped list of MCP Server and MCP Tool information to the AI Agent.
Step 6: The AI Agent sends the user's request together with all of the MCP information obtained from the MCP gateway to the LLM through the AI gateway.
Step 7: After reasoning, the LLM returns the one or more MCP Servers and MCP Tools that can solve the problem.
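Steps 6-7 together could look like the following, assuming the AI gateway exposes an OpenAI-compatible chat-completions endpoint (a common setup, but an assumption here); the model answers with the tool call(s) it wants made:

```python
# Steps 6-7 sketch: pass the user request plus the MCP tool list to the LLM
# and read back its chosen tool calls. The base_url, api_key, and model name
# are placeholders for whatever actually sits behind the AI gateway.
from openai import OpenAI

client = OpenAI(base_url="http://ai-gateway.internal/v1",  # hypothetical URL
                api_key="...")

def pick_tools(user_query: str, mcp_tools: list[dict]):
    resp = client.chat.completions.create(
        model="qwen-plus",  # whichever model the AI gateway routes to
        messages=[{"role": "user", "content": user_query}],
        tools=[{
            "type": "function",
            "function": {
                "name": t["name"],
                "description": t["description"],
                "parameters": t["inputSchema"],  # MCP tools carry a JSON Schema
            },
        } for t in mcp_tools],
    )
    # The model's chosen MCP Tool invocations, if any, come back as tool_calls.
    return resp.choices[0].message.tool_calls
```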
Step 8: With the selected MCP Server and MCP Tool information in hand, the AI Agent invokes that MCP Tool through the MCP gateway.
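Step 8 is then an MCP `tools/call` JSON-RPC request through the MCP gateway, again with a placeholder gateway URL:

```python
# Step 8 sketch: executing the chosen MCP Tool. MCP invokes tools with a
# "tools/call" request carrying the tool name and its arguments.
import json
import urllib.request

def call_mcp_tool(gateway_url: str, name: str, arguments: dict) -> dict:
    payload = {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    req = urllib.request.Request(
        gateway_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]
```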
Question: Are steps 3-5 in the attached diagram redundant? The AI Agent itself usually already has the MCP Server and Tool information configured, so why not send it straight from the Agent to the LLM to resolve the matching MCP Server and Tool? As the steps stand, the flow has to pass through both the MCP gateway and the LLM gateway; is that necessary?