
[Question]: Timeout not working for workflow & MCP-usage #20272

@liebki


Question Validation

  • I have searched both the documentation and Discord for an answer.

Question

Hi again,

it's the same problem as a few hours ago.

No matter where or what values I set for the timeouts, the MCP client inside the llama-index agents still kills the task after 30 seconds. Regardless of the configured value, after 30 seconds the client sends a DELETE to the MCP server; because of that the framework eventually retries, and sometimes the agent recovers, sometimes it doesn't.

The question is simply: why does the framework kill the session mid-run (in my case), even though I set the timeouts much higher?

I also have a reduced version of my code showing what I am doing / trying, cut down to the problematic parts.

Code:

import asyncio
import os

import dotenv
from colorama import Fore

from llama_index.core.agent.workflow import AgentWorkflow, ReActAgent
from llama_index.core.llms import LLM
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.workflow import Context
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

# CustomOpenAiLike, get_new_token and BASE_SYSTEM_PROMPT are my own helpers
# (token refresh wrapper and base prompt) and are not shown here.


async def create_gitlab_mcp_tools():
    """Create an MCP client, wrap it in a ToolSpec and return it for use in an agent."""
    personal_token_gitlab = os.getenv("GITLAB_PERSONAL_TOKEN")
    mcp_client = BasicMCPClient(
        "http://127.0.0.1:8000/mcp",
        headers={"Authorization": f"Bearer {personal_token_gitlab}"},
        timeout=600,
    )
    
    mcp_tool = McpToolSpec(client=mcp_client)
    return mcp_tool

async def start_workflow(model: LLM):
    mcp_tool_gitlab = await create_gitlab_mcp_tools()
    tool_list_gitlab = await mcp_tool_gitlab.to_tool_list_async()
    
    gitlab_agent = ReActAgent(
        name="GitlabAgent",
        description="Uses the GitLab tools to do several things in GitLab.",
        llm=model,
        tools=tool_list_gitlab,
        verbose=True,
        timeout=600
    )

    orchestrator_agent = ReActAgent(
        name="OrchestratorAgent",
        description="Orchestrates the user requests to the agents to do.",
        llm=model,
        can_handoff_to=["GitlabAgent"],
        verbose=True,
        timeout=600
    )

    agent_workflow = AgentWorkflow(
        agents=[gitlab_agent, orchestrator_agent],
        root_agent=orchestrator_agent.name,
        timeout=600
    )
    
    return agent_workflow


async def main():
    dotenv.load_dotenv()
    
    model = CustomOpenAiLike(
        model="gpt-oss-120b",
        is_chat_model=True,
        timeout=600,
        is_function_calling_model=True,
        max_tokens=128000,
        context_window=128000,
        api_base="https://base-url/v1",
        api_key="x",
        token_refresh_fn=get_new_token,
        system_prompt=BASE_SYSTEM_PROMPT,
    )
    
    memory = ChatMemoryBuffer.from_defaults(llm=model)
    workflow = await start_workflow(model=model)
    context = Context(workflow)
    
    while True:
        print(Fore.BLUE + "\nPlease input your query.")
        query = input(">> ")
        
        if query in ["q", "quit", "exit"]:
            exit(0)

        resp = await workflow.run(user_msg=query, ctx=context, memory=memory)
        print("Answer of the agent:\n", str(resp))

if __name__ == "__main__":
    asyncio.run(main())
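
To narrow it down, I am also timing one of the GitLab tools called directly through the same BasicMCPClient, completely outside of the agents and the workflow, to see whether the 30-second cutoff already happens at the client level. This is only a rough sketch; the tool name and arguments are placeholders:

import time

async def probe_direct_tool_call(tool_name: str, arguments: dict):
    """Call one GitLab tool directly via BasicMCPClient and time how long it runs."""
    personal_token_gitlab = os.getenv("GITLAB_PERSONAL_TOKEN")
    mcp_client = BasicMCPClient(
        "http://127.0.0.1:8000/mcp",
        headers={"Authorization": f"Bearer {personal_token_gitlab}"},
        timeout=600,
    )

    start = time.monotonic()
    try:
        # If this also stops after ~30 seconds, the cutoff is not coming from the workflow.
        result = await mcp_client.call_tool(tool_name, arguments)
        print(f"Tool finished after {time.monotonic() - start:.1f}s:", result)
    except Exception as exc:
        print(f"Tool failed after {time.monotonic() - start:.1f}s: {exc!r}")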

Additionally:

  • The prompt works fine and the model runs tools correctly; I have no problems in that regard.
  • The model is GPT-OSS 120B; everything works fine on that end as well.
  • The custom "CustomOpenAiLike" also has nothing to do with it; it only handles authentication renewal, and I have switched back and forth between it and the vanilla version.
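
In case it helps, I am also going to try calling the same tool straight through the mcp SDK with explicit transport timeouts. As far as I can tell, streamablehttp_client defaults to a 30-second HTTP timeout and tears the session down with a DELETE when it closes, which looks a lot like what I am seeing, so this sketch is only meant to rule that in or out (tool name and arguments are again placeholders):

from datetime import timedelta

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def call_tool_via_sdk(tool_name: str, arguments: dict):
    """Call one tool through the raw mcp SDK with explicit transport timeouts."""
    headers = {"Authorization": f"Bearer {os.getenv('GITLAB_PERSONAL_TOKEN')}"}
    async with streamablehttp_client(
        "http://127.0.0.1:8000/mcp",
        headers=headers,
        timeout=timedelta(seconds=600),
        sse_read_timeout=timedelta(seconds=600),
    ) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            return await session.call_tool(tool_name, arguments)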
