
[General] When using an Ollama model deployed locally, why doesn't the message output stream? #365

@835479131

Description

When using an Ollama model deployed locally, why doesn't the message output stream? Calling Ollama's API directly produces normal streaming output.
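For comparison, the direct call referred to above can be reproduced with the official `ollama` Python client. The sketch below is illustrative only: the host and model name are taken from the configuration in this issue, while the prompt content is a placeholder.

```python
# Minimal sketch: streaming directly from the local Ollama server with the
# `ollama` Python client. The prompt below is a placeholder, not from the issue.
from ollama import Client

client = Client(host="http://localhost:11434")

# With stream=True the client returns an iterator of partial responses,
# so each token can be printed as soon as it arrives.
stream = client.chat(
    model="qwen3:30b-a3b-instruct-2507-q4_K_M",
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    stream=True,
)

for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
```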

Additional Context

OLLAMA_CHAT_MODEL = OllamaChatModel(
    model_name="qwen3:30b-a3b-instruct-2507-q4_K_M",
    host="http://localhost:11434",
    stream=True,
)

# Create the agent instance
agent = ReActAgent(
    name="friday",
    sys_prompt="you are a helpful assistant",
    model=OLLAMA_CHAT_MODEL,
    formatter=OllamaChatFormatter(),
    toolkit=toolkit,
    memory=AgentScopeSessionHistoryMemory(
        service=self.session_service,
        session_id=session_id,
        user_id=user_id,
    ),
)

agent.set_console_output_enabled(enabled=False)

# Restore the saved state (if any)
if state:
    agent.load_state_dict(state)

# Stream messages from the agent
async for msg, last in stream_printing_messages(
    agents=[agent],
    coroutine_task=agent(msgs),
):
    yield msg, last

Labels: bug (Something isn't working), question (Further information is requested)
