Agents: Local code execution. #5676
Replies: 1 comment
-
Have you read the documentation of the v0.4 version? https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/index.html
For group chat: https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/selector-group-chat.html
For the code executor agent: https://microsoft.github.io/autogen/dev/reference/python/autogen_agentchat.agents.html#autogen_agentchat.agents.CodeExecutorAgent
Also take a look at the PythonCodeExecutionTool, in case you want agents to execute their own code without sending code as a message to a group chat: https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.tools.code_execution.html#autogen_ext.tools.code_execution.PythonCodeExecutionTool
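To make that concrete for the use case below (a Python function that runs locally and feeds context to a test-writing agent), here is a minimal, untested sketch of the tool route in v0.4. Assumptions: it uses an OpenAI model name for brevity (a custom OpenAI-compatible endpoint can be configured via `base_url`/`api_key`, though non-OpenAI models may need additional model info), and `retrieve_relevant_info` is stubbed to mirror the question:

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


def retrieve_relevant_info(query_user: str) -> str:
    """Runs locally, in-process: query the local index/SQLite DB and return snippets."""
    return "some code snippets"  # stub, mirroring the question below


async def main() -> None:
    # Swap in your own model/base_url/api_key here.
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # A plain Python function passed via `tools` is executed locally by the
    # agent itself; no executor agent or round trip through the chat is needed.
    context_creator = AssistantAgent(
        name="context_creator",
        model_client=model_client,
        tools=[retrieve_relevant_info],
        system_message="Use the retrieve_relevant_info tool to fetch code context.",
    )
    unittest_writer = AssistantAgent(
        name="unittest_writer",
        model_client=model_client,
        system_message="Write unit tests from the provided context. Reply TERMINATE when done.",
    )
    team = RoundRobinGroupChat(
        [context_creator, unittest_writer],
        termination_condition=TextMentionTermination("TERMINATE"),
    )
    await Console(team.run_stream(task="Write me a unit test case for MongoDB."))


asyncio.run(main())
```

If you instead want model-generated code blocks from the chat to be executed on your machine (the v0.2 `code_execution_config={"use_docker": False}` behavior), the CodeExecutorAgent linked above can be constructed with a local executor, along these lines:

```python
from autogen_agentchat.agents import CodeExecutorAgent
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor

executor = CodeExecutorAgent("executor", code_executor=LocalCommandLineCodeExecutor(work_dir="coding"))
```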
-
I am using autogen-agentchat version 0.4.2. I need to execute a Python function locally that uses files in a local folder and responds with details. I was able to do this with the earlier pyautogen packages, but I am not sure how to get it done with autogen-agentchat.
Code:
```python
import json

import httpx
import numpy as np
from autogen import AssistantAgent, GroupChat, GroupChatManager, UserProxyAgent
from openai import OpenAI

llm_config = {
    "config_list": [
        {"model": "xxxxxxxxxxxxx", "api_key": "xxxxxxxxxxxxxxx", "base_url": "xxxxxxxxxxxxxxxxx"}
    ],
}
```
The retrieval function below actually goes into a locally stored SQLite DB file, searches it, and pulls some snippets out. In other words, some Python code gets executed locally in the background to pull the snippets; it is not a simple return:
```python
def retrieve_relevant_info(query_user):
    print(query_user)
    context = "some code snippets"
    return context
```
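For illustration only, a minimal sketch of what such a local lookup might look like, assuming a hypothetical `snippets.db` file with a `snippets` table and a `code` column; the real index and query logic live in the asker's project:

```python
import sqlite3


def retrieve_relevant_info(query_user: str) -> str:
    # Hypothetical schema: a `snippets` table with a `code` TEXT column.
    conn = sqlite3.connect("snippets.db")
    try:
        rows = conn.execute(
            "SELECT code FROM snippets WHERE code LIKE ?",
            (f"%{query_user}%",),
        ).fetchall()
    finally:
        conn.close()
    return "\n\n".join(row[0] for row in rows)
```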
```python
task = "Write me a unit test case for MongoDB."

user_proxy = UserProxyAgent(
    name="user_proxy",
    system_message="Never call context_creator twice in a row.",
    human_input_mode="ALWAYS",
    max_consecutive_auto_reply=1,
    # code_execution_config={"work_dir": "coding", "use_docker": False},
    code_execution_config=False,
)

executor = UserProxyAgent(
    name="executor",
    system_message="Never call context_creator twice in a row.",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
    code_execution_config={"work_dir": "coding", "use_docker": False},  # was {"./": "coding", ...}, which is not a valid key
)

context_creator = AssistantAgent(
    name="context_creator",
    system_message="For coding tasks, only use the functions you have been provided with. Reply TERMINATE when the task is done.",
    llm_config=llm_config,
    max_consecutive_auto_reply=1,
    description=(
        "Unit test writer. "
        "You are an expert in writing unit test cases for the functions provided as inputs. "
        "1. Go through each code snippet provided to you by the previous agent in the chat. "
        "2. Identify finer details like connection parameters, URLs, and environment files that can be crucial for writing test functions. "
        "3. Identify which files should be imported and referenced in unit test cases, which can also increase code coverage. "
        "4. If needed, create dummy test data and output as well. "
        "5. Cover both success and failure scenarios. "
        "6. Identify the best framework available for the scenario and the programming language."
    ),
)

from typing import List

from pydantic import BaseModel
from typing_extensions import Annotated


# Define a Pydantic model for the context response
class ContextResponse(BaseModel):
    context: str


@executor.register_for_execution()
@context_creator.register_for_llm(description="Code context creator using index and SQLite DB")
def Contex_Create(
    query_user: Annotated[str, "User query"],
) -> ContextResponse:
    context = retrieve_relevant_info(query_user)
    return ContextResponse(context=context)


# Instantiate the unit test writer agent
unittest_writer = AssistantAgent(
    name="unittest_writer",
    system_message="Use the output given by context_creator to return the relevant result.",
    llm_config=llm_config,  # was llm_config1, which is never defined; ensure this is the appropriate configuration
    description=(
        "Unit test writer. "
        "You are an expert in writing unit test cases for the functions provided as inputs. "
        "1. Go through each code snippet provided to you by the previous agent in the chat. "
        "2. Identify finer details like connection parameters, URLs, and environment files that can be crucial for writing test functions. "
        "3. Identify which files should be imported and referenced in unit test cases, which can also increase code coverage. "
        "4. If needed, create dummy test data and output as well. "
        "5. Cover both success and failure scenarios. "
        "6. Identify the best framework available for the scenario and the programming language."
    ),
)

transition_rules = {
    user_proxy: [context_creator],
    context_creator: [executor],
    executor: [unittest_writer],
    unittest_writer: [user_proxy],
}

# Custom speaker selection
from typing import Dict, List

from autogen import Agent

# Set up the GroupChat
groupchat = GroupChat(
    agents=[user_proxy, context_creator, unittest_writer, executor],
    messages=[],
    max_round=6,
    allowed_or_disallowed_speaker_transitions=transition_rules,
    speaker_transitions_type="allowed",
    # speaker_selection_method=custom_speaker_selection_func,
)

# Set up the GroupChatManager
manager = GroupChatManager(
    groupchat=groupchat,
    llm_config=llm_config,  # was llm_config1
    code_execution_config=False,
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
)

groupchat_result = user_proxy.initiate_chat(
    recipient=manager,
    message=task,
)
```
Output:
```text
user_proxy (to chat_manager):
Write me a unit test case for MongoDB.
Next speaker: context_creator
context_creator (to chat_manager):
***** Suggested tool call (chatcmpl-tool-8fa4d5bc87144ac08236e20ae38949f0): Contex_Create *****
Arguments:
{"query_user": "Write a unit test case for MongoDB in Python using pytest"}
Next speaker: executor
***** Response from calling tool (chatcmpl-tool-8fa4d5bc87144ac08236e20ae38949f0) *****
{"context":"def init(\n self,\n host: str,\n host1: str,\n host2: str,\n port: int,\n username: str,\n password: str,\n replica_set: Optional[str] = None,\n ):\n #self._mongo_client = AsyncIOMotorClient(\n # host=host,\n # host1= host1,\n # host2= host2,\n # port=port,\n # username=username,\n # password=password,\n # replicaSet=replica_set,\n #)\n\n logger.info(\n 'MongoDB Service |'\n f' host: {host}'\n f' host1: {host1}'\n f' host2: {host2}'\n f' port: {port}'\n f' replicaSet: {replica_set}'\n )\n self._mongo_client = AsyncIOMotorClient('mongodb://' + username + ':' + password + '@' + host + ':' + str(port) + ','+host1 + ':' + str(port) + ','+ host2 + ':' + str(port) + '/'+"?replicaSet="+replica_set) \n self._db = self._mongo_client[Environ.MONGODB_DATABASE.get()]\n self._collection = self._db[Environ.MONGODB_COLLECTION.get()]\n \n logger.info("Database: ")\n logger.info(self._db)\n logger.info("Collection: ")\n logger.info(self._collection)\nfrom enum import Enum\nimport os\nfrom typing import Optional\n\nclass Environ(str, Enum):\n MONGODB_COLLECTION = 'MONGODB_COLLECTION'\n MONGODB_DATABASE = 'MONGODB_DATABASE'\n MONGODB_HOST = 'MONGODB_HOST'\n MONGODB_HOST1 = 'MONGODB_HOST1'\n MONGODB_HOST2 = 'MONGODB_HOST2'\n MONGODB_PASSWORD = 'MONGODB_PASSWORD'\n MONGODB_PORT = 'MONGODB_PORT'\n MONGODB_REPLICA_SET = 'MONGODB_REPLICA_SET'\n MONGODB_USER = 'MONGODB_USER'\n MODEL_API_URL = 'MODEL_API_URL'\n"}
Next speaker: unittest_writer
unittest_writer (to chat_manager):
This test case checks if the MongoDBLogger class is initialized correctly and if the log_prediction method is working as expected. It uses the unittest.mock library to mock the logger and the collection methods.
To run the test, you need to replace 'your_module' with the actual module name where the MongoDBLogger class is defined.
Note: Make sure to install the required libraries by running `pip install -r requirements.txt` in your project directory. Also, make sure to replace the host, port, username, password, and replica_set in the test case with the actual values for your MongoDB instance.
Finally, make sure to update the `your_module` in the import statement with the actual module name where the MongoDBLogger class is defined. You can also use a test framework like Pytest or Behave to write and run the tests.
Next speaker: user_proxy
Replying as user_proxy. Provide feedback to chat_manager. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: exit
```