
Releases: microsoft/autogen

python-v0.5.2

15 Apr 03:24

Python Related Changes

New Contributors

Full Changelog: python-v0.5.1...python-v0.5.2

python-v0.5.1

03 Apr 23:37

What's New

AgentChat Message Types (Type Hint Changes)

Important

TL;DR: If you are not using custom agents or custom termination conditions, you don't need to change anything.
Otherwise, update AgentEvent to BaseAgentEvent and ChatMessage to BaseChatMessage in your type hints.

This is a breaking change on type hinting only, not on usage.

We updated the message types in AgentChat in this release.
The purpose of this change is to support custom message types defined by applications.

Previously, the message types were fixed, and we used the union types ChatMessage and AgentEvent to refer to all the concrete built-in message types.

Now the message types are organized into a hierarchy: each existing built-in concrete message type subclasses either BaseChatMessage or BaseAgentEvent, depending on whether it was part of the ChatMessage or AgentEvent union. We refactored all message handlers on_messages, on_messages_stream, run, run_stream and TerminationCondition to use the base classes in their type hints.

If you are subclassing BaseChatAgent to create custom agents, or subclassing TerminationCondition to create custom termination conditions, you need to update the method signatures to use BaseChatMessage and BaseAgentEvent.
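
For illustration, here is a minimal sketch of a custom agent whose method signatures use the new base classes; the agent itself is hypothetical, and only the type hints are the point:

from typing import Sequence

from autogen_agentchat.agents import BaseChatAgent
from autogen_agentchat.base import Response
from autogen_agentchat.messages import BaseChatMessage, TextMessage
from autogen_core import CancellationToken


class EchoAgent(BaseChatAgent):
    """Toy agent that echoes the last message back to the sender."""

    @property
    def produced_message_types(self) -> Sequence[type[BaseChatMessage]]:
        return (TextMessage,)

    async def on_messages(
        self,
        messages: Sequence[BaseChatMessage],  # previously Sequence[ChatMessage]
        cancellation_token: CancellationToken,
    ) -> Response:
        content = messages[-1].to_text() if messages else "(no input)"
        return Response(chat_message=TextMessage(content=content, source=self.name))

    async def on_reset(self, cancellation_token: CancellationToken) -> None:
        pass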

If you are using the union types in your existing data structures for serialization and deserialization, then you can keep using those union types to ensure the messages are being handled as concrete types. However, this will not work with custom message types.

Otherwise, your code should just work, as the refactor only makes type hint changes.

This change allows us to support custom message types. For example, we introduced a new generic message type StructuredMessage[T], which can be used to create new message types with a BaseModel content. Ongoing work will enable AssistantAgent to respond with StructuredMessage[T], where T is the structured output type for the model.

See the API doc on AgentChat message types: https://microsoft.github.io/autogen/stable/reference/python/autogen_agentchat.messages.html

  • Use class hierarchy to organize AgentChat message types and introduce StructuredMessage type by @ekzhu in #5998
  • Rename to use BaseChatMessage and BaseAgentEvent. Bring back union types. by @ekzhu in #6144

Structured Output

We enhanced support for structured output in model clients and agents.

For model clients, use json_output parameter to specify the structured output type
as a Pydantic model. The model client will then return a JSON string
that can be deserialized into the specified Pydantic model.

from typing import Literal

from autogen_core.models import SystemMessage, UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel

# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]

model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

# Generate a structured response.
response = await model_client.create(
    messages=[
        SystemMessage(content="Analyze the sentiment of the input text."),
        UserMessage(content="I am happy.", source="user"),
    ],
    json_output=AgentResponse,
)

print(response.content)
# Should be a structured output.
# {"thoughts": "The user is happy.", "response": "happy"}

For AssistantAgent, you can set output_content_type to the structured output type. The agent will automatically reflect on the tool call result and generate a StructuredMessage with the output content type.

import asyncio
from typing import Literal

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.ui import Console
from autogen_core import CancellationToken
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel

# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# Define the function to be called as a tool.
def sentiment_analysis(text: str) -> str:
    """Given a text, return the sentiment."""
    return "happy" if "happy" in text else "sad" if "sad" in text else "neutral"


# Create a FunctionTool instance with `strict=True`,
# which is required for structured output mode.
tool = FunctionTool(sentiment_analysis, description="Sentiment Analysis", strict=True)

# Create an OpenAIChatCompletionClient instance that supports structured output.
model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
)

# Create an AssistantAgent instance that uses the tool and model client.
agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    tools=[tool],
    system_message="Use the tool to analyze sentiment.",
    output_content_type=AgentResponse,
)

stream = agent.on_messages_stream([TextMessage(content="I am happy today!", source="user")], CancellationToken())
await Console(stream)
---------- assistant ----------
[FunctionCall(id='call_tIZjAVyKEDuijbBwLY6RHV2p', arguments='{"text":"I am happy today!"}', name='sentiment_analysis')]
---------- assistant ----------
[FunctionExecutionResult(content='happy', call_id='call_tIZjAVyKEDuijbBwLY6RHV2p', is_error=False)]
---------- assistant ----------
{"thoughts":"The user expresses a clear positive emotion by stating they are happy today, suggesting an upbeat mood.","response":"happy"}

You can also pass a StructuredMessage to the run and run_stream methods of agents and teams as a task message. Agents will automatically convert the message content to a string and place it in their model context. A StructuredMessage generated by an agent will also be passed to other agents in the team and emitted as a message in the output stream.
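
For example, a minimal sketch that reuses the AgentResponse model and agent from the example above; the exact constructor form is shown as an assumption, so see the API doc for details:

from autogen_agentchat.messages import StructuredMessage

task = StructuredMessage[AgentResponse](
    content=AgentResponse(thoughts="Initial analysis requested.", response="neutral"),
    source="user",
)
result = await agent.run(task=task)  # run_stream and team runs accept it the same way
print(result.messages[-1])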

  • Add structured output to model clients by @ekzhu in #5936
  • Support json schema for response format type in OpenAIChatCompletionClient by @ekzhu in #5988
  • Add output_format to AssistantAgent for structured output by @ekzhu in #6071

Azure AI Search Tool

Added a new tool for agents to perform search using Azure AI Search.

See the documentation for more details.

SelectorGroupChat Improvements

  • Implement 'candidate_func' parameter to filter down the pool of candidates for selection by @Ethan0456 in #5954
  • Add async support for selector_func and candidate_func in SelectorGroupChat by @Ethan0456 in #6068

Code Executors Improvements

  • Add cancellation support to docker executor by @ekzhu in #6027
  • Move start() and stop() as interface methods for CodeExecutor by @ekzhu in #6040
  • Changed Code Executors default directory to temporary directory by @federicovilla55 in #6143

Model Client Improvements

  • Improve documentation around model client and tool and how it works under the hood by @ekzhu in #6050
  • Add support for thought field in AzureAIChatCompletionClient by @jay-thakur in #6062
  • Add a thought process analysis, and add a reasoning field in the ModelClientStreamingChunkEvent to distinguish the thought tokens. by @y26s4824k264 in #5989
  • Add thought field support and fix LLM control parameters for OllamaChatCompletionClient by @jay-thakur in #6126
  • Modular Transformer Pipeline and Fix Gemini/Anthropic Empty Content Handling by @SongChiYoung in #6063
  • Doc/moudulor transform oai by @SongChiYoung in #6149
  • Model family resolution to support non-prefixed names like Mistral by @SongChiYoung in #6158

TokenLimitedChatCompletionContext

Introduce TokenLimitedChatCompletionContext to limit the number of tokens in the context
sent to the model.
This is useful for long-running agents whose message history would otherwise exceed the model's context window.
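
A minimal sketch of how this might be wired into an AssistantAgent; the import path and constructor arguments are assumptions, so check the API docs for the exact signature:

from autogen_agentchat.agents import AssistantAgent
from autogen_core.model_context import TokenLimitedChatCompletionContext
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    # Keep only as many recent messages as fit within roughly 16k tokens.
    model_context=TokenLimitedChatCompletionContext(model_client=model_client, token_limit=16000),
)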

Bug Fixes


python-v0.4.9.3

29 Mar 04:40

Patch Release

This release addresses a bug in the MCP Server Tool that caused an error when unset tool arguments were set to None and passed on to the server. It also improves the error message from the server and adds a default timeout. #6080, #6125

Full Changelog: python-v0.4.9.2...python-v0.4.9.3

autogenstudio-v0.4.2

17 Mar 18:06

What's New

This release makes improvements to AutoGen Studio across multiple areas.

Component Validation and Testing

  • Support Component Validation API in AGS in #5503
  • Test components - #5963

In the team builder, all component schemas are automatically validated on save. This way configuration errors (e.g., incorrect provider names) are highlighted early.

In addition, there is a test button for model clients where you can verify the correctness of your model configuration. The LLM is given a simple query and the results are shown.

Gallery Improvements

  • Improved editing UI for tools in AGS in #5539
  • Anthropic support in AGS #5695

You can now modify teams, agents, models, tools, and termination conditions independently in the UI, and only review JSON when needed. The same UI panel for updating components in the team builder is also reused in the Gallery. The Gallery in AGS is now persisted in a database rather than local storage. Anthropic models are now supported in AGS.

Observability - LLMCallEvents

  • Enable LLM Call Observability in AGS #5457

You can now view all LLMCallEvents in AGS. Go to settings (cog icon on lower left) to enable this feature.

Token Streaming

  • Add Token Streaming in AGS in #5659

For better developer experience, the AGS UI will stream tokens as they are generated by an LLM for any agent where stream_model_client is set to true.

UX Improvements - Session Comparison

  • AGS - Test Model Component in UI, Compare Sessions in #5963

It is often valuable, even critical, to have a side-by-side comparison of multiple agent configurations (e.g., using a team of web agents that solve tasks using a browser or agents with web search API tools). You can now do this using the compare button in the playground, which lets you select multiple sessions and interact with them to compare outputs.

Experimental Features

There are a few interesting but early features that ship with this release:

  • Authentication in AGS: You can pass in an authentication configuration YAML file to enable user authentication for AGS. Currently, only GitHub authentication is supported. This lays the foundation for a multi-user environment (#5928) where multiple users can log in and view only their own sessions. More work needs to be done to clarify isolation of resources (e.g., environment variables) and other security considerations.
    See the documentation for more details.
  • Local Python Code Execution Tool: AGS now has early support for a local Python code execution tool. More work is needed to test the underlying AgentChat implementation.

Other Fixes

  • Fixed issue with using AzureSQL DB as the database engine for AGS
  • Fixed cascading delete issue in AGS (ensure runs are deleted when sessions are deleted) #5804 by @victordibia
  • Fixed termination UI bug #5888
  • Fixed DockerFile for AGS by @gunt3001 #5932

Thanks to @ekzhu, @jackgerrits, @gagb, @usag1e, @dominiclachance, @EItanya, and many others for testing and feedback.

python-v0.4.9.2

14 Mar 19:40

Patch Fixes

  • Fix logging error in SKChatCompletionAdapter #5893
  • Fix missing system message in the model client call during reflect step when reflect_on_tool_use=True #5926 (Bug introduced in v0.4.8)
  • Fixing listing directory error in FileSurfer #5938

Security Fixes

  • Use SecretStr type for model clients' API keys. This ensures the secret is not exported when calling model_client.dump_component().model_dump_json(). #5939 and #5947. This affects OpenAIChatCompletionClient, AzureOpenAIChatCompletionClient, and AnthropicChatCompletionClient: the API keys will no longer be exported when you serialize the model clients, as sketched below. It is recommended to use environment-based or token-based authentication rather than passing API keys around as data in configs.
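
For illustration, a minimal sketch of the new behavior; the model name and key are placeholders:

from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(model="gpt-4o-mini", api_key="sk-example-not-a-real-key")
config_json = client.dump_component().model_dump_json()
print("sk-example-not-a-real-key" in config_json)  # False: the key is no longer exported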

Full Changelog: python-v0.4.9...python-v0.4.9.2

python-v0.4.9

12 Mar 07:21

What's New

Anthropic Model Client

Native support for Anthropic models. Get your update:
 

pip install -U "autogen-ext[anthropic]"

The new client follows the same interface as OpenAIChatCompletionClient so you can use it directly in your agents and teams.

import asyncio
from autogen_ext.models.anthropic import AnthropicChatCompletionClient
from autogen_core.models import UserMessage


async def main():
    anthropic_client = AnthropicChatCompletionClient(
        model="claude-3-sonnet-20240229",
        api_key="your-api-key",  # Optional if ANTHROPIC_API_KEY is set in environment
    )

    result = await anthropic_client.create([UserMessage(content="What is the capital of France?", source="user")])  # type: ignore
    print(result)


if __name__ == "__main__":
    asyncio.run(main())

You can also load the model client directly from a configuration dictionary:

from autogen_core.models import ChatCompletionClient

config = {
    "provider": "AnthropicChatCompletionClient",
    "config": {"model": "claude-3-sonnet-20240229"},
}

client = ChatCompletionClient.load_component(config)

To use it with AssistantAgent and run the agent in a loop, matching the behavior of Claude agents, you can use a single-agent team.
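
For illustration, a minimal sketch of such a loop; the tool and the termination condition here are illustrative and not part of this release:

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.anthropic import AnthropicChatCompletionClient


def count_words(text: str) -> str:
    """Toy tool: count the words in a text."""
    return f"{len(text.split())} words"


async def main():
    model_client = AnthropicChatCompletionClient(model="claude-3-sonnet-20240229")
    agent = AssistantAgent(
        "assistant",
        model_client=model_client,
        tools=[count_words],
        system_message="Use the tool, then report the result. Say TERMINATE when done.",
    )
    # A single-agent team keeps calling the agent until the termination condition fires.
    team = RoundRobinGroupChat([agent], termination_condition=TextMentionTermination("TERMINATE"))
    await Console(team.run_stream(task="How many words are in this sentence?"))


asyncio.run(main())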

LlamaCpp Model Client

LlamaCpp is a great project for working with local models. Now we have native support via its official SDK.

pip install -U "autogen-ext[llama-cpp]"

To use a local model file:

import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.llama_cpp import LlamaCppChatCompletionClient


async def main():
    llama_client = LlamaCppChatCompletionClient(model_path="/path/to/your/model.gguf")
    result = await llama_client.create([UserMessage(content="What is the capital of France?", source="user")])
    print(result)


asyncio.run(main())

To use it with a Hugging Face model:

import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.llama_cpp import LlamaCppChatCompletionClient


async def main():
    llama_client = LlamaCppChatCompletionClient(
        repo_id="unsloth/phi-4-GGUF", filename="phi-4-Q2_K_L.gguf", n_gpu_layers=-1, seed=1337, n_ctx=5000
    )
    result = await llama_client.create([UserMessage(content="What is the capital of France?", source="user")])
    print(result)


asyncio.run(main())

Task-Centric Memory (Experimental)

Task-Centric Memory is an experimental module that can give agents the ability to:

  • Accomplish general tasks more effectively by learning quickly and continually, beyond context-window limitations.
  • Remember guidance, corrections, plans, and demonstrations provided by users (teachability).
  • Learn through the agent's own experience and adapt quickly to changing circumstances (self-improvement).
  • Avoid repeating mistakes on tasks that are similar to those previously encountered.

For example, you can use Teachability as a memory for AssistantAgent so your agent can learn from user teaching.

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.experimental.task_centric_memory import MemoryController
from autogen_ext.experimental.task_centric_memory.utils import Teachability


async def main():
    # Create a client
    client = OpenAIChatCompletionClient(model="gpt-4o-2024-08-06", )

    # Create an instance of Task-Centric Memory, passing minimal parameters for this simple example
    memory_controller = MemoryController(reset=False, client=client)

    # Wrap the memory controller in a Teachability instance
    teachability = Teachability(memory_controller=memory_controller)

    # Create an AssistantAgent, and attach teachability as its memory
    assistant_agent = AssistantAgent(
        name="teachable_agent",
        system_message = "You are a helpful AI assistant, with the special ability to remember user teachings from prior conversations.",
        model_client=client,
        memory=[teachability],
    )

    # Enter a loop to chat with the teachable agent
    print("Now chatting with a teachable agent. Please enter your first message. Type 'exit' or 'quit' to quit.")
    while True:
        user_input = input("\nYou: ")
        if user_input.lower() in ["exit", "quit"]:
            break
        await Console(assistant_agent.run_stream(task=user_input))

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Head over to its README for details, and the samples for runnable examples.

New Sample: Gitty (Experimental)

Gitty is an experimental application built to help ease the burden on open-source project maintainers. Currently, it can generate automatic replies to issues.

To use:

gitty --repo microsoft/autogen issue 5212

Head over to Gitty to see details.

Improved Tracing and Logging

In this version, we made a number of improvements to tracing and logging.

  • add LLMStreamStartEvent and LLMStreamEndEvent by @EItanya in #5890
  • Allow for tracing via context provider by @EItanya in #5889
  • Fix span structure for tracing by @ekzhu in #5853
  • Add ToolCallEvent and log it from all builtin tools by @ekzhu in #5859

Powershell Support for LocalCommandLineCodeExecutor

  • feat: update local code executor to support powershell by @lspinheiro in #5884

Website Accessibility Improvements

@peterychang has made huge improvements to the accessibility of our documentation website. Thank you @peterychang!

Bug Fixes

  • fix: save_state should not require the team to be stopped. by @ekzhu in #5885
  • fix: remove max_tokens from az ai client create call when stream=True by @ekzhu in #5860
  • fix: add plugin to kernel by @lspinheiro in #5830
  • fix: warn when using reflection on tool use with Claude models by @ekzhu in #5829

Other Python Related Changes

  • doc: update termination tutorial to include FunctionCallTermination condition and fix formatting by @ekzhu in #5813
  • docs: Add note recommending PythonCodeExecutionTool as an alternative to CodeExecutorAgent by @ekzhu in #5809
  • Update quickstart.ipynb by @taswar in #5815
  • Fix warning in selector gorup chat guide by @ekzhu in #5849
  • Support for external agent runtime in AgentChat by @ekzhu in #5843
  • update ollama usage docs by @ekzhu in #5854
  • Update markitdown requirements to >= 0.0.1, while still in the 0.0.x range by @afourney in #5864
  • Add client close by @afourney in #5871
  • Update README to clarify Web Browsing Agent Team usage, and use animated Chromium browser by @ekzhu in #5861
  • Add author name before their message in Chainlit team sample by @DavidYu00 in #5878
  • Bump axios from 1.7.9 to 1.8.2 in /python/packages/autogen-studio/frontend by @dependabot in #5874
  • Add an optional base path to FileSurfer by @husseinmozannar in #5886
  • feat: Pause and Resume for AgentChat Teams and Agents by @ekzhu in #5887
  • update version to v0.4.9 by @ekzhu in #5903

New Contributors

**Full Chang...


python-v0.4.8.2

07 Mar 23:50

Patch Fixes

  • fix: Remove max_tokens=20 from AzureAIChatCompletionClient.create_stream's create call when stream=True #5860
  • fix: Add close() method to built-in model clients to ensure the async event loop is closed when program exits. This should fix the "ResourceWarning: unclosed transport when importing web_surfer" errors. #5871

Full Changelog: python-v0.4.8.1...python-v0.4.8.2

python-v0.4.8.1

05 Mar 17:17

Patch fixes to v0.4.8:

  • Fixing SKChatCompletionAdapter bug that disabled tool use #5830

Full Changelog: python-v0.4.8...python-v0.4.8.1

python-v0.4.8

04 Mar 06:47

What's New

Ollama Chat Completion Client

To use the new Ollama Client:

pip install -U "autogen-ext[ollama]"
from autogen_ext.models.ollama import OllamaChatCompletionClient
from autogen_core.models import UserMessage

ollama_client = OllamaChatCompletionClient(
    model="llama3",
)

result = await ollama_client.create([UserMessage(content="What is the capital of France?", source="user")])  # type: ignore
print(result)

To load a client from configuration:

from autogen_core.models import ChatCompletionClient

config = {
    "provider": "OllamaChatCompletionClient",
    "config": {"model": "llama3"},
}

client = ChatCompletionClient.load_component(config)

It also supports structured output:

from autogen_ext.models.ollama import OllamaChatCompletionClient
from autogen_core.models import UserMessage
from pydantic import BaseModel


class StructuredOutput(BaseModel):
    first_name: str
    last_name: str


ollama_client = OllamaChatCompletionClient(
    model="llama3",
    response_format=StructuredOutput,
)
result = await ollama_client.create([UserMessage(content="Who was the first man on the moon?", source="user")])  # type: ignore
print(result)

New Required name Field in FunctionExecutionResult

The name field is now required in FunctionExecutionResult:

exec_result = FunctionExecutionResult(call_id="...", content="...", name="...", is_error=False)

  • fix: Update SKChatCompletionAdapter message conversion by @lspinheiro in #5749

Using thought Field in CreateResult and ThoughtEvent

Now CreateResult includes an optional thought field for the extra text content the model generates as part of a tool call. It is currently supported by OpenAIChatCompletionClient.

When available, the thought content will be emitted by AssistantAgent as a ThoughtEvent message.
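
For illustration, a minimal sketch of reading the thought field from a tool-call result; the tool and model here are illustrative:

import asyncio

from autogen_core.models import UserMessage
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient


def get_weather(city: str) -> str:
    """Toy tool: return a canned weather report."""
    return f"It is sunny in {city}."


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    weather_tool = FunctionTool(get_weather, description="Get the weather for a city.")
    result = await model_client.create(
        messages=[UserMessage(content="What is the weather in Paris?", source="user")],
        tools=[weather_tool],
    )
    print(result.content)  # typically a list of FunctionCall objects
    print(result.thought)  # extra text emitted alongside the tool call, or None


asyncio.run(main())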

  • feat: Add thought process handling in tool calls and expose ThoughtEvent through stream in AgentChat by @ekzhu in #5500

New metadata Field in AgentChat Message Types

Added a metadata field to AgentChat message types for custom metadata set by applications.
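
For illustration, a minimal sketch, assuming the metadata field is a simple string-to-string mapping:

from autogen_agentchat.messages import TextMessage

message = TextMessage(
    content="Summarize the quarterly report.",
    source="user",
    metadata={"request_id": "abc-123"},  # application-defined metadata
)
print(message.metadata["request_id"])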

Exceptions in AgentChat agents are now fatal

Now, if an exception is raised within an AgentChat agent such as AssistantAgent, it will propagate to the caller instead of silently stopping the team.

New Termination Conditions

New termination conditions for better control of agents.

See how to use TextMessageTerminationCondition to control a single-agent team running in a loop: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/teams.html#single-agent-team.

FunctionCallTermination is also discussed as an example for custom termination condition: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/termination.html#custom-termination-condition

  • TextMessageTerminationCondition for agentchat by @EItanya in #5742
  • FunctionCallTermination condition by @ekzhu in #5808

Docs Update

The ChainLit sample contains UserProxyAgent in a team, and shows you how to use it to get user input from the UI. See: https://github.com/microsoft/autogen/tree/main/python/samples/agentchat_chainlit

  • doc & sample: Update documentation for human-in-the-loop and UserProxyAgent; Add UserProxyAgent to ChainLit sample; by @ekzhu in #5656
  • docs: Add logging instructions for AgentChat and enhance core logging guide by @ekzhu in #5655
  • doc: Enrich AssistantAgent API documentation with usage examples. by @ekzhu in #5653
  • doc: Update SelectorGroupChat doc on how to use O3-mini model. by @ekzhu in #5657
  • update human in the loop docs for agentchat by @victordibia in #5720
  • doc: update guide for termination condition and tool usage by @ekzhu in #5807
  • Add examples for custom model context in AssistantAgent and ChatCompletionContext by @ekzhu in #5810

Bug Fixes

  • Initialize BaseGroupChat before reset by @gagb in #5608
  • fix: Remove R1 model family from is_openai function by @ekzhu in #5652
  • fix: Crash in argument parsing when using Openrouter by @philippHorn in #5667
  • Fix: Add support for custom headers in HTTP tool requests by @linznin in #5660
  • fix: Structured output with tool calls for OpenAIChatCompletionClient by @ekzhu in #5671
  • fix: Allow background exceptions to be fatal by @jackgerrits in #5716
  • Fix: Auto-Convert Pydantic and Dataclass Arguments in AutoGen Tool Calls by @mjunaidca in #5737

Other Python Related Changes

N...


python-v0.4.7

17 Feb 06:41

Overview

This release contains various bug fixes and feature improvements for the Python API.

Related news: our .NET API website is up and running: https://microsoft.github.io/autogen/dotnet/dev/. Our .NET Core API now has dev releases. Check it out!  

Important

Starting from v0.4.7, ModelInfo's required fields are enforced, so please include all required fields in model_info when creating model clients. For example,

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="llama3.2:latest",
    base_url="http://localhost:11434/v1",
    api_key="placeholder",
    model_info={
        "vision": False,
        "function_calling": True,
        "json_output": False,
        "family": "unknown",
    },
)

response = await model_client.create([UserMessage(content="What is the capital of France?", source="user")])
print(response)

See ModelInfo for more details.
 

New Features

  • DockerCommandLineCodeExecutor support for additional volume mounts, exposed host ports by @andrejpk in #5383
  • Remove and get subscription APIs for Python GrpcWorkerAgentRuntime by @jackgerrits in #5365
  • Add strict mode support to BaseTool, ToolSchema and FunctionTool to allow tool calls to be used together with structured output mode by @ekzhu in #5507
  • Make CodeExecutor components serializable by @victordibia in #5527

Bug Fixes

  • fix: Address tool call execution scenario when model produces empty tool call ids by @ekzhu in #5509
  • doc & fix: Enhance AgentInstantiationContext with detailed documentation and examples for agent instantiation; Fix a but that caused value error when the expected class is not provided in register_factory by @ekzhu in #5555
  • fix: Add model info validation and improve error messaging by @ekzhu in #5556
  • fix: Add warning and doc for Windows event loop policy to avoid subprocess issues in web surfer and local executor by @ekzhu in #5557

Doc Updates

  • doc: Update API doc for MCP tool to include installation instructions by @ekzhu in #5482
  • doc: Update AgentChat quickstart guide to enhance clarity and installation instructions by @ekzhu in #5499
  • doc: API doc example for langchain database tool kit by @ekzhu in #5498
  • Update Model Client Docs to Mention API Key from Environment Variables by @victordibia in #5515
  • doc: improve tool guide in Core API doc by @ekzhu in #5546

Other Python Related Changes

  • Update website version v0.4.6 by @ekzhu in #5481
  • Reduce number of doc jobs for old releases by @jackgerrits in #5375
  • Fix class name style in document by @weijen in #5516
  • Update custom-agents.ipynb by @yosuaw in #5531
  • fix: update 0.2 deployment workflow to use tag input instead of branch by @ekzhu in #5536
  • fix: update help text for model configuration argument by @gagb in #5533
  • Update python version to v0.4.7 by @ekzhu in #5558

New Contributors

Full Changelog: python-v0.4.6...python-v0.4.7