
On Windows, _make_subprocess_transport raises NotImplementedError when running MagenticOneGroupChat #5069

Open
@zxh9813

Description

What happened?

Can anyone help with the issue below? I just copied and pasted the example, but got the following error:

File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\asyncio\base_events.py", line 498, in _make_subprocess_transport
raise NotImplementedError
NotImplementedError
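A quick diagnostic sketch (not from the original report) that can help confirm the likely cause: on Windows, Jupyter's kernel often installs the selector-based event loop, whose base `_make_subprocess_transport` raises `NotImplementedError`, so any component that spawns a subprocess (such as the Playwright browser used by `MultimodalWebSurfer`) fails. Printing the loop class the current policy creates shows which loop you are on:

```python
import asyncio

# Check which event loop class the active policy creates. On Windows under
# Jupyter this frequently reports a SelectorEventLoop, which does not
# implement subprocess transports; a ProactorEventLoop does.
loop = asyncio.new_event_loop()
print(type(loop).__name__)
loop.close()
```

If this prints a `Selector`-based loop class on Windows, the `NotImplementedError` above is expected whenever a subprocess is launched.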

What did you expect to happen?

There should be no error, right?

How can we reproduce it (as minimally and precisely as possible)?

Run the code below in a Jupyter notebook:

import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.teams import MagenticOneGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.agents.web_surfer import MultimodalWebSurfer

model_client = OpenAIChatCompletionClient(
    model="llama3.2",
    base_url="http://localhost:11434/v1",
    api_key="ollama",
    model_capabilities={
        "vision": True,
        "function_calling": True,
        "json_output": True,
    }
)

async def main() -> None:
    # model_client = OpenAIChatCompletionClient()

    surfer = MultimodalWebSurfer(
        "WebSurfer",
        model_client=model_client,
    )
    team = MagenticOneGroupChat([surfer], model_client=model_client)
    await Console(team.run_stream(task="What is the UV index in Melbourne today?"))


await main()
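A possible workaround (my own suggestion, not confirmed by the maintainers): switch to the proactor event loop policy on Windows before any event loop is created, since the proactor loop implements subprocess transports while the selector loop does not. Note that inside a Jupyter notebook this may conflict with the already-running kernel loop, so running the example as a plain `.py` script with `asyncio.run(main())` is the safer variant:

```python
import asyncio
import sys

# Force the proactor event loop policy on Windows. This must run before
# any event loop is created; the proactor loop supports subprocesses,
# which MultimodalWebSurfer needs to launch its browser.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
```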

AutoGen version

0.4

Which package was this bug in

Core

Model used

llama3.2

Python version

3.10.16

Operating system

Windows

Any additional info you think would be helpful for fixing this bug

No response
