forked from microsoft/autogen
latest autogen #19
Merged
Conversation
Agentchat canvas (microsoft#6215)

## Why are these changes needed?

This is an initial exploration of a possible solution for microsoft#6214. It implements a simple text canvas using difflib, along with a memory component and a tool component for interacting with the canvas. Still in early testing, but feedback on the design is welcome.

## Related issue number

microsoft#6214

## Checks

- [ ] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [ ] I've made sure all auto checks have passed.

Co-authored-by: Leonardo Pinheiro <[email protected]>
Co-authored-by: Eric Zhu <[email protected]>
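The canvas idea above can be pictured with a short, self-contained sketch built on the standard library's `difflib`. The `TextCanvas` class and its method names here are illustrative stand-ins, not the API actually implemented in the PR:

```python
import difflib


class TextCanvas:
    """Illustrative text canvas: named files with full revision history."""

    def __init__(self):
        self._files = {}  # name -> list of revision strings

    def update(self, name, content):
        # Append a new revision rather than overwriting the old one.
        self._files.setdefault(name, []).append(content)

    def get(self, name):
        revisions = self._files.get(name)
        return revisions[-1] if revisions else ""

    def diff(self, name):
        # Unified diff between the two most recent revisions.
        revisions = self._files.get(name, [])
        if len(revisions) < 2:
            return ""
        return "".join(difflib.unified_diff(
            revisions[-2].splitlines(keepends=True),
            revisions[-1].splitlines(keepends=True),
            fromfile=f"{name} (rev {len(revisions) - 1})",
            tofile=f"{name} (rev {len(revisions)})",
        ))
```

Keeping every revision makes the diff tool trivial to expose to an agent: each edit produces a reviewable patch instead of a silent overwrite.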
Generalize Continuous SystemMessage merging via model_info[“multiple_system_messages”] instead of `startswith("gemini-")` (microsoft#6345)

The current implementation of consecutive `SystemMessage` merging applies only to models whose `model_info.family` starts with `"gemini-"`. Since PR microsoft#6327 introduced the `multiple_system_messages` field in `model_info`, this logic can now be generalized by checking whether the field is explicitly set to `False`.

This change replaces the hardcoded family check with a conditional that merges consecutive `SystemMessage` blocks whenever `multiple_system_messages` is set to `False`. Test cases that previously depended on the `"gemini"` model family have been updated to reflect this configuration flag, and renamed accordingly for clarity.

In addition, for consistency across conditional logic, a follow-up PR is planned to refactor the Claude-specific transformation condition (currently implemented via `create_args.get("model", "unknown").startswith("claude-")`) to use the existing `is_claude()` helper instead.

Co-authored-by: Eric Zhu <[email protected]>
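A minimal sketch of the generalized merging rule, assuming plain `(role, content)` tuples in place of autogen's message objects (the function name and signature are hypothetical):

```python
def merge_system_messages(messages, model_info):
    """Merge runs of consecutive system messages, but only when the
    model explicitly declares multiple_system_messages=False."""
    if model_info.get("multiple_system_messages", True):
        return list(messages)  # model tolerates multiple system messages
    merged = []
    for role, content in messages:
        if role == "system" and merged and merged[-1][0] == "system":
            # Fold this system message into the previous one.
            merged[-1] = ("system", merged[-1][1] + "\n" + content)
        else:
            merged.append((role, content))
    return merged
```

Note the default of `True` when the field is absent: merging happens only when the capability flag is explicitly `False`, matching the described behavior.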
fix: ensure serialized messages are passed to LLMStreamStartEvent (microsoft#6344)

## Why are these changes needed?

I was getting the following exception when doing tool calls with Anthropic; the exception was coming from the `__str__` in `LLMStreamStartEvent`:

```
('Object of type ToolUseBlock is not JSON serializable',)
```

The issue is that when creating the `LLMStreamStartEvent` in `create_stream`, the messages weren't being serialized first.

## Related issue number

Signed-off-by: Peter Jausovec <[email protected]>
Co-authored-by: Eric Zhu <[email protected]>
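One way to picture the fix is a recursive pass that makes payloads JSON-safe before the event is constructed. The helper below is an assumed stand-in, not autogen's actual serialization code:

```python
import json


def to_jsonable(obj):
    """Recursively convert obj into JSON-serializable structures,
    falling back to the object's attributes (e.g. an SDK block type)
    or its string form when json can't handle it directly."""
    if isinstance(obj, dict):
        return {k: to_jsonable(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_jsonable(v) for v in obj]
    try:
        json.dumps(obj)
        return obj  # already serializable as-is
    except TypeError:
        if hasattr(obj, "__dict__"):
            return {k: to_jsonable(v) for k, v in vars(obj).items()}
        return str(obj)
```

Running messages through a pass like this before building the event means `__str__` can always `json.dumps` its contents safely.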
DOC: add extensions - autogen-oaiapi and autogen-contextplus (microsoft#6338)

autogen-contextplus provides user-defined autogen `model_context` implementations. It was discussed in microsoft#6217 and microsoft#6160.

Co-authored-by: Eric Zhu <[email protected]>
FEAT: SelectorGroupChat can stream the inner select_speaker prompt (microsoft#6286)

## Why are these changes needed?

This PR updates `SelectorGroupChat` to support streaming mode for `select_speaker`. It introduces a `streaming` argument; when set to `True`, `select_speaker` uses `create_stream()` instead of `create()`.

## Additional context

Some models (e.g., QwQ) only work properly in streaming mode. To support them, the prompt selection step in `SelectorGroupChat` must also run with `streaming=True`.

## Related issue number

Closes microsoft#6145

## Checks

- [x] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.

Co-authored-by: Eric Zhu <[email protected]>
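The selection flow can be sketched as follows, with `FakeClient` standing in for a real model client; the names and signatures here are assumptions for illustration, not the PR's code:

```python
import asyncio


class FakeClient:
    """Stand-in for a chat-completion client offering both call styles."""

    async def create(self, prompt):
        return "agent_a"

    async def create_stream(self, prompt):
        for chunk in ["agent", "_a"]:
            yield chunk


async def select_speaker(prompt, client, streaming=False):
    # When streaming=True, accumulate chunks from the streaming call;
    # otherwise make a single non-streaming call.
    if streaming:
        parts = []
        async for chunk in client.create_stream(prompt):
            parts.append(chunk)
        return "".join(parts)
    return await client.create(prompt)
```

Both paths return the same final selection string; the streaming path simply lets models that only behave well under streaming participate in speaker selection.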
Add an example using autogen-core and FastAPI to create streaming responses (microsoft#6335)

## Why are these changes needed?

This PR adds an example that demonstrates how to build a streaming chat API with multi-turn conversation history using `autogen-core` and FastAPI.

## Related issue number

## Checks

- [x] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.

Co-authored-by: Eric Zhu <[email protected]>
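The core of such an endpoint can be sketched as a generator that frames output chunks as Server-Sent Events, which a FastAPI `StreamingResponse` could wrap; the function below is illustrative, not the example's actual code:

```python
def sse_events(chunks):
    """Frame model output deltas as Server-Sent Events lines, the
    wire format a streaming chat endpoint commonly emits."""
    for chunk in chunks:
        yield f"data: {chunk}\n\n"
    yield "data: [DONE]\n\n"  # conventional end-of-stream sentinel
```

In FastAPI this generator would typically be passed to `StreamingResponse(..., media_type="text/event-stream")`, with the multi-turn history kept server-side keyed by a conversation id.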
fix: ollama fails when tools use optional args (microsoft#6343)

## Why are these changes needed?

`convert_tools` failed if `Optional` args were used in tools (the `type` field doesn't exist in that case and `anyOf` is used instead). This change uses the `anyOf` field to pick the first non-null type.

## Related issue number

Fixes microsoft#6323

## Checks

- [ ] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.

Signed-off-by: Peter Jausovec <[email protected]>
Co-authored-by: Eric Zhu <[email protected]>
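The `anyOf` handling can be sketched roughly like this; the function name and the string fallback are assumptions for illustration:

```python
def json_schema_type(prop: dict) -> str:
    """Resolve a tool parameter's JSON Schema type: prefer `type`,
    otherwise take the first non-null alternative from `anyOf`,
    which is how Optional[...] parameters are emitted."""
    if "type" in prop:
        return prop["type"]
    for alternative in prop.get("anyOf", []):
        if alternative.get("type") != "null":
            return alternative["type"]
    return "string"  # assumed fallback when nothing concrete is found
```

For example, `Optional[int]` produces `{"anyOf": [{"type": "integer"}, {"type": "null"}]}`, which resolves to `"integer"` here instead of raising a `KeyError` on the missing `type` field.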
Added support for exposing GPUs to docker code executor (microsoft#6339)

## Why are these changes needed?

The `DockerCommandLineCodeExecutor` doesn't currently offer GPU support. By simply using `DeviceRequest` from the docker Python API, these changes expose GPUs to the docker container and provide the ability to execute CUDA-accelerated code within autogen.

## Related issue number

Closes: microsoft#6302

## Checks

- [x] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.

Co-authored-by: Eric Zhu <[email protected]>
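For illustration, this is roughly the Docker Engine API payload that a `docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])` produces; the field values here (in particular the driver name) are assumptions based on common NVIDIA setups, not copied from the PR:

```python
def gpu_device_requests(count=-1):
    """Build a sketch of the Engine API `DeviceRequests` host-config
    entry used to expose GPUs to a container; count=-1 conventionally
    means 'all available GPUs'."""
    return [{
        "Driver": "nvidia",  # assumed; docker-py may leave this empty
        "Count": count,
        "Capabilities": [["gpu"]],
    }]
```

Passing such a request when creating the container is what lets CUDA code inside it see the host's GPUs, the equivalent of `docker run --gpus all`.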
Avoid re-registering a message type already registered (microsoft#6354)

This change avoids re-registering a structured message that a previous agent in the team has already registered. The issue occurs when agents share Pydantic models as their output format.

## Related issue number

Closes microsoft#6353

Co-authored-by: Eric Zhu <[email protected]>
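The guard can be sketched as a registry check before registration; the dict-based registry below is a stand-in for the runtime's real one:

```python
def register_once(registry: dict, message_type: type) -> bool:
    """Register message_type only if it is not already present,
    so agents sharing a Pydantic output model don't collide.
    Returns True if a registration actually happened."""
    key = message_type.__name__
    if key in registry:
        return False  # already registered by an earlier agent
    registry[key] = message_type
    return True
```

With this check in place, two agents configured with the same structured-output model register the type exactly once instead of raising a duplicate-registration error.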
Add azure ai agent (microsoft#6191)

- Added support for the Azure AI Agent. The new agent is named `AzureAIAgent`.
- The agent supports Bing search, file search, and Azure search tools.
- Added a Jupyter notebook to demonstrate the usage of the `AzureAIAgent`.

## What's missing?

- `AzureAIAgent` supports only text message responses.
- Parallel execution for the custom functions.

## Related issue number

[5545](microsoft#5545 (comment))

Co-authored-by: Eric Zhu <[email protected]>
Fix: deserialize model_context in AssistantAgent and SocietyOfMindAgent and CodeExecutorAgent (microsoft#6337)

This PR fixes a bug where `model_context` was either ignored or explicitly set to `None` during agent deserialization (`_from_config`) in:

- `AssistantAgent`: `model_context` was serialized but not restored.
- `SocietyOfMindAgent`: `model_context` was neither serialized nor restored.
- `CodeExecutorAgent`: `model_context` was serialized but not restored.

As a result, restoring an agent from its config silently dropped runtime context settings, potentially affecting agent behavior.

This patch:

- Adds proper serialization/deserialization of `model_context` using `.dump_component()` and `load_component(...)`.
- Ensures round-trip consistency when using declarative agent configs.

## Related issue number

Closes microsoft#6336

## Checks

- [ ] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.

Co-authored-by: Eric Zhu <[email protected]>
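The round-trip can be pictured with a stand-in component; the class and config shape below are hypothetical, not autogen's actual component model:

```python
class BufferedChatContext:
    """Stand-in model context supporting component dump/load."""

    def __init__(self, buffer_size=10):
        self.buffer_size = buffer_size

    def dump_component(self):
        return {"provider": "BufferedChatContext",
                "config": {"buffer_size": self.buffer_size}}

    @classmethod
    def load_component(cls, spec):
        return cls(**spec["config"])


def agent_to_config(name, model_context):
    # Serialize model_context instead of dropping it.
    return {"name": name, "model_context": model_context.dump_component()}


def agent_from_config(config):
    # Restore model_context instead of leaving it None (the bug).
    context = BufferedChatContext.load_component(config["model_context"])
    return config["name"], context
```

The property being fixed is exactly this round trip: dumping an agent to config and loading it back must preserve the context settings, not silently reset them.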
Add self-debugging loop to `CodeExecutionAgent` (microsoft#6306)

## Why are these changes needed?

This PR introduces a baseline self-debugging loop to the `CodeExecutionAgent`. The loop automatically retries code generation and execution up to a configurable number of attempts until the execution succeeds or the retry limit is reached.

This enables the agent to recover from transient failures (e.g., syntax errors, runtime errors) by using its own reasoning to iteratively improve the generated code, laying the foundation for more robust autonomous behavior.

## Related issue number

Closes microsoft#6207

## Checks

- [x] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.

Signed-off-by: Abhijeetsingh Meena <[email protected]>
Co-authored-by: Eric Zhu <[email protected]>
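The loop's shape can be sketched as follows, with `generate` and `execute` standing in for the agent's model call and code executor (names and signatures are assumptions):

```python
def run_with_retries(generate, execute, max_retries=3):
    """Retry code generation and execution, feeding the previous
    error output back into the generator on each round: a stand-in
    for the agent's self-debugging loop."""
    feedback = None
    output = None
    for _ in range(max_retries):
        code = generate(feedback)      # model sees the last error, if any
        ok, output = execute(code)     # run in the code executor
        if ok:
            return output
        feedback = output              # carry the error into the next round
    raise RuntimeError(f"still failing after {max_retries} attempts: {output}")
```

The key design point is that the error text becomes part of the next prompt, so the model can reason about its own failure rather than regenerate blindly.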
## Why are these changes needed?

| Package | Test time, original (sec) | Test time, edited (sec) |
|---|---|---|
| autogen-studio | 1.64 | 1.64 |
| autogen-core | 6.03 | 6.17 |
| autogen-ext | 387.15 | 373.40 |
| autogen-agentchat | 54.20 | 20.67 |

## Related issue number

Related microsoft#6361

## Checks

- [ ] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Introduce workbench (microsoft#6340)

This PR introduces `Workbench`. A workbench provides a group of tools that share the same resource and state. For example, `McpWorkbench` provides the underlying tools on an MCP server. A workbench lets tools be managed together and abstracts away the lifecycle of individual tools under a single entity. This makes it possible to create agents with stateful tools from serializable configuration (component configs), and it also supports dynamic tools, whose set may change after each execution.

Here is how a workbench may be used with `AssistantAgent` (not included in this PR):

```python
workbench = McpWorkbench(server_params)
agent = AssistantAgent("assistant", tools=workbench)
result = await agent.run(task="do task...")
```

TODOs:

1. In a subsequent PR, update `AssistantAgent` to accept a workbench as an alternative in the `tools` parameter, using `StaticWorkbench` to manage individual tools.
2. In another PR, add documentation on workbench.

Co-authored-by: EeS <[email protected]>
Co-authored-by: Minh Đăng <[email protected]>
…e0424

* upstream/main:
  - Remove `name` field from OpenAI Assistant Message (microsoft#6388)
  - Introduce workbench (microsoft#6340)
  - TEST/change gpt4, gpt4o serise to gpt4.1nano (microsoft#6375)
  - update website version (microsoft#6364)
  - Add self-debugging loop to `CodeExecutionAgent` (microsoft#6306)
  - Fix: deserialize model_context in AssistantAgent and SocietyOfMindAgent and CodeExecutorAgent (microsoft#6337)
  - Add azure ai agent (microsoft#6191)
  - Avoid re-registering a message type already registered (microsoft#6354)
  - Added support for exposing GPUs to docker code executor (microsoft#6339)
  - fix: ollama fails when tools use optional args (microsoft#6343)
  - Add an example using autogen-core and FastAPI to create streaming responses (microsoft#6335)
  - FEAT: SelectorGroupChat could using stream inner select_prompt (microsoft#6286)
  - Add experimental notice to canvas (microsoft#6349)
  - DOC: add extentions - autogen-oaiapi and autogen-contextplus (microsoft#6338)
  - fix: ensure serialized messages are passed to LLMStreamStartEvent (microsoft#6344)
  - Generalize Continuous SystemMessage merging via model_info[“multiple_system_messages”] instead of `startswith("gemini-")` (microsoft#6345)
  - Agentchat canvas (microsoft#6215)

Signed-off-by: Peter Jausovec <[email protected]>
Bringing in latest autogen