diff --git a/docs/usage.md b/docs/usage.md
index 8e407db..11e3582 100644
--- a/docs/usage.md
+++ b/docs/usage.md
@@ -1,73 +1,232 @@
 # Usage
 
-The Agntcy IO Mapper functions provided an easy to use package for mapping output from
-one agent to another. The data can be described in JSON or with natural language. The
-(incomplete) [pydantic](https://docs.pydantic.dev/latest/) models follow:
-
-```python
-class ArgumentsDescription(BaseModel):
-    json_schema: Schema | None = Field(description="Data format JSON schema")
-    description: str | None = Field(description="Data (semantic) natural language description")
-
-class BaseIOMapperInput(BaseModel):
-    input: ArgumentsDescription = Field(description="Input data descriptions")
-    output: ArgumentsDescription = Field(description="Output data descriptions")
-    data: Any = Field(description="Data to translate")
-
-class BaseIOMapperOutput(BaseModel):
-    data: Any = Field(description="Data after translation")
-    error: str | None = Field(description="Description of error on failure.")
-
-class BaseIOMapperConfig(BaseModel):
-    validate_json_input: bool = Field(description="Validate input against JSON schema.")
-    validate_json_output: bool = Field(description="Validate output against JSON schema.")
-```
-
 There are several different ways to leverage the IO Mapper functions in Python. There
 is an [agentic interface](#use-agent-io-mapper) using models that can be invoked on
 different AI platforms and an [imperative interface](#use-imperative--deterministic-io-mapper)
 that does deterministic JSON remapping without using any AI models.
 
-## Use Agent IO Mapper
+## Key features
 
 The Agent IO Mapper uses an LLM/model to transform the inputs (typically the output of
 the first agent) to match the desired output (typically the input of a second agent).
 As such, it additionally supports specifying the model prompts for the translation. The
 configuration object provides a specification for the system and default user prompts.
 
+## Example Agent IO mapping
+
+### LangGraph Example 1
+
+This project supports specifying model interactions using [LangGraph](https://langchain-ai.github.io/langgraph/).
+
+#### Define an agent IO mapper metadata
+
 ```python
-class AgentIOMapperConfig(BaseIOMapperConfig):
-    system_prompt_template: str = Field(
-        description="System prompt Jinja2 template used with LLM service for translation."
-    )
-    message_template: str = Field(
-        description="Default user message template. This can be overridden by the message request."
-    )
+metadata = IOMappingAgentMetadata(
+    input_fields=["selected_users", "campaign_details.name"],
+    output_fields=["stats.status"],
+)
 ```
+
+The above instruction directs the IO mapper agent to take the `selected_users` field and the `name` field from `campaign_details` and map them to `stats.status`. No further information is needed, since the type information can be derived from the input data, which is a pydantic model. Here is an example to illustrate this further.
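+
+For instance, the state could be a set of pydantic models along these lines. This is a minimal sketch; the class names (`Campaign`, `Statistics`, `OverallState`) are hypothetical, and only the field names come from the metadata definition above:
+
+```python
+from pydantic import BaseModel, Field
+
+
+class Campaign(BaseModel):
+    name: str = Field(description="Name of the campaign")
+
+
+class Statistics(BaseModel):
+    status: str | None = Field(default=None, description="Status to be filled in by the IO mapper")
+
+
+class OverallState(BaseModel):
+    selected_users: list[str] = Field(description="Users selected for the campaign")
+    campaign_details: Campaign
+    stats: Statistics
+```
+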
Field | +Description | +Required | +Example | +
---|---|---|---|
input_fields | +an array of json paths | +:white_check_mark: | ++ +```["state.fiedl1", "state.field2", "state"]``` + | +
output_fields | +an array of json paths | +:white_check_mark: | ++ +```["state.output_fiedl1"]``` + | +
input_schema | +defines the schema of the input data | +:heavy_minus_sign: | +
+
+```json
+{
+  "type": "object",
+  "properties": {
+    "title": {"type": "string"},
+    "ingredients": {"type": "array", "items": {"type": "string"}},
+    "instructions": {"type": "string"}
+  },
+  "required": ["title", "ingredients", "instructions"]
+}
+```
+
+OR
+
+```python
+from pydantic import TypeAdapter
+TypeAdapter(GraphState).json_schema()
+```
+
+</td>
+</tr>
+<tr>
output_schema | +defines the schema for the output data | +:heavy_minus_sign: | +same as input_schema | +
output_description_prompt | +A prompt structured using a Jinja template that can be used by the llm in the mapping definition | +:heavy_minus_sign: | ++ ```python -class AgentIOMapperInput(BaseIOMapperInput): - message_template: str | None = Field( - description="Message (user) to send to LLM to effect translation.", - ) +"""Output as JSON with this structure: +{{ +"name": "Campaign Name", +"content": "Campaign Content", +"is_urgent": "yes/no" +}} +""" ``` + | +
Field | +Description | +Required | +Example | +
---|---|---|---|
metadata | ++ | :white_check_mark: | ++ +```python +IOMappingAgentMetadata( + input_fields=["documents.0.page_content"], + output_fields=["recipe"], + input_schema=TypeAdapter(GraphState).json_schema(), + output_schema={ + "type": "object", + "properties": { + "title": {"type": "string"}, + "ingredients": {"type": "array", "items": {"type": "string"}}, + "instructions": {"type": "string"}, + }, + "required": ["title", "ingredients, instructions"], + }, +) +``` + | +
llm | +An instance of the large language model to be used | +:white_check_mark: | ++ +```python + AzureChatOpenAI( + model=model_version, + api_version=api_version, + seed=42, + temperature=0, + ) +``` + | +