From 9ba224b0e93d0439f3d93acd264d127aeadbc07e Mon Sep 17 00:00:00 2001
From: Reginaldo Costa
Date: Fri, 14 Mar 2025 18:28:19 +0100
Subject: [PATCH 1/3] Update usage.md

chore(doc): add usage examples for langgraph
---
 docs/usage.md | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 115 insertions(+)

diff --git a/docs/usage.md b/docs/usage.md
index 8e407db..52f2c2b 100644
--- a/docs/usage.md
+++ b/docs/usage.md
@@ -68,6 +68,121 @@ One of the supported platforms for managing the model interactions is [Pydantic-
 This project supports specifying model interations using [LangGraph](https://langchain-ai.github.io/langgraph/).
 
+## Use with Langgraph graph [Pydantic state]
+
<table>
<tr>
<th>Field</th>
<th>Description</th>
<th>Required</th>
<th>Example</th>
</tr>
<tr>
<td>input_fields</td>
<td>an array of JSON paths</td>
<td>:white_check_mark:</td>
<td>

```["state.field1", "state.field2", "state"]```

</td>
</tr>
<tr>
<td>output_fields</td>
<td>an array of JSON paths</td>
<td>:white_check_mark:</td>
<td>

```["state.output_field1"]```

</td>
</tr>
<tr>
<td>input_schema</td>
<td>defines the schema of the input data</td>
<td>:heavy_minus_sign:</td>
<td>

```json
{
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "ingredients": {"type": "array", "items": {"type": "string"}},
        "instructions": {"type": "string"}
    },
    "required": ["title", "ingredients", "instructions"]
}
```

OR

```python
from pydantic import TypeAdapter
TypeAdapter(GraphState).json_schema()
```

</td>
</tr>
<tr>
<td>output_schema</td>
<td>defines the schema for the output data</td>
<td>:heavy_minus_sign:</td>
<td>same as input_schema</td>
</tr>
<tr>
<td>output_template</td>
<td>a prompt following a Jinja template</td>
<td>:heavy_minus_sign:</td>
<td>

```python
"""Return the output as a JSON object with the following structure:
{{
"name": "Campaign Name",
"content": "Campaign Content",
"is_urgent": "yes/no"
}}
"""
```

</td>
</tr>
</table>
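Taken together, these fields form the metadata dictionary that is attached to the IO mapper node in the examples below. Here is a minimal sketch of such a dictionary; the state paths, schema, and template text are illustrative placeholders, and passing `output_template` alongside the other keys is an assumption drawn from the field table above, not a documented call.

```python
# Illustrative only: the values below are placeholders patterned on the examples in this page.
io_mapper_metadata = {
    "input_fields": ["selected_users", "campaign_details.name"],
    "output_fields": ["stats.status"],
    # Optional: constrain the mapped output with an explicit JSON schema.
    "output_schema": {
        "type": "object",
        "properties": {"status": {"type": "string"}},
        "required": ["status"],
    },
    # Optional (assumed key): shape the mapped output with a Jinja-style template.
    "output_template": """Return the output as a JSON object with the following structure:
{{
"status": "ok/failed"
}}
""",
}
```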
+ +In langgraph you can used with states typed as ```python TypedDict``` or our recommended way with ```python Pydantic```. +We will show two examples of how to add the io mapper in a langgraph graph. We assume you have a langgraph graph created therefore the steps of how the graph is created is ommited. +```python +from agntcy_iomapper.langgraph import ( + io_mapper_node, +) +``` +Users can easily specify the input fields that needs to be translated and the expected output fields +```python +workflow.add_node( + "io_mapping", + io_mapper_node, + metadata={ + "input_fields": ["selected_users", "campaign_details.name"], + "output_fields": ["stats.status"], + }, +) +``` +:warning: The configurations needed by the io mapper node must be passed in the metadata dictionary when adding the node + +This instruction tells the io mapper agent to use ```selected_users ``` and ```name``` from ```campaign_details``` field the langgraph stat +```python +workflow.add_edge("create_communication", "io_mapping") +workflow.add_edge("io_mapping", "send_communication") +``` +Here is a flow chart of io mapper in a langgraph graph of the discussed application +```mermaid +flowchart TD + A[create_communication] -->|input in specific format| B(IO Mapper Agent) + B -->|output expected format| D[send_communication] +``` +:warning: Very important to set the llm instance to be used by the iomapper agent, in the runnable config with the key llm before you invoke the graph +```python +config = RunnableConfig(configurable={"llm": llm}) +app.invoke(inputs, config) +``` + + ## Use Imperative / Deterministic IO Mapper The code snippet below illustrates a fully functional deterministic mapping that From 94311a7bbd0c7490c41fa43262b6ff8f3c38affd Mon Sep 17 00:00:00 2001 From: Reginaldo Costa Date: Mon, 17 Mar 2025 15:45:50 +0100 Subject: [PATCH 2/3] Update usage.md --- docs/usage.md | 120 +++++++++++++++++++++++++++++++++++++++----------- 1 file changed, 94 insertions(+), 26 deletions(-) diff --git a/docs/usage.md b/docs/usage.md index 52f2c2b..b63eac1 100644 --- a/docs/usage.md +++ b/docs/usage.md @@ -1,5 +1,19 @@ # Usage +There are several different ways to leverage the IO Mapper functions in Python. There +is an [agentic interface](#use-agent-io-mapper) using models that can be invoked on +different AI platforms and a [imperative interface](#use-imperative--deterministic-io-mapper) +that does deterministic JSON remapping without using any AI models. + +## Key features + +The Agent IO Mapper uses an LLM/model to transform the inputs (typically output of the +first agent) to match the desired output (typically the input of a second agent). As such, +it additionally supports specifying the model prompts for the translation. The configuration +object provides a specification for the system and default user prompts: + +### Models + The Agntcy IO Mapper functions provided an easy to use package for mapping output from one agent to another. The data can be described in JSON or with natural language. The (incomplete) [pydantic](https://docs.pydantic.dev/latest/) models follow: @@ -23,18 +37,6 @@ class BaseIOMapperConfig(BaseModel): validate_json_output: bool = Field(description="Validate output against JSON schema.") ``` -There are several different ways to leverage the IO Mapper functions in Python. 
There -is an [agentic interface](#use-agent-io-mapper) using models that can be invoked on -different AI platforms and a [imperative interface](#use-imperative--deterministic-io-mapper) -that does deterministic JSON remapping without using any AI models. - -## Use Agent IO Mapper - -The Agent IO Mapper uses an LLM/model to transform the inputs (typically output of the -first agent) to match the desired output (typically the input of a second agent). As such, -it additionally supports specifying the model prompts for the translation. The configuration -object provides a specification for the system and default user prompts: - ```python class AgentIOMapperConfig(BaseIOMapperConfig): system_prompt_template: str = Field( @@ -54,21 +56,24 @@ class AgentIOMapperInput(BaseIOMapperInput): ) ``` +## Supported Packages Further specification of models and their arguments is left to the underlying supported packages: - [Pydantic-AI](#pydantic-ai) - [LangGraph](#langgraph) -### Pydantic-AI - +#### Pydantic-AI One of the supported platforms for managing the model interactions is [Pydantic-AI](https://ai.pydantic.dev/). -### LangGraph +#### LangGraph +This project supports specifying model interations using [LangGraph](https://langchain-ai.github.io/langgraph/). +#### LlamaIndex This project supports specifying model interations using [LangGraph](https://langchain-ai.github.io/langgraph/). -## Use with Langgraph graph [Pydantic state] +# Interface + @@ -128,31 +133,35 @@ TypeAdapter(GraphState).json_schema() - +
Field
 <td>output_template</td>
-<td>a prompt follwoing jinja template</td>
+<td>A prompt structured using a Jinja template.</td>
 <td>:heavy_minus_sign:</td>
 <td>

 ```python
-"""Return the output as a JSON object with the following structure:
+"""Output as JSON with this structure:
 {{
 "name": "Campaign Name",
 "content": "Campaign Content",
-"is_urgent": yes/no
+"is_urgent": "yes/no"
 }}
 """
 ```

 </td>
 </tr>
 </table>
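The template value is ordinary Jinja text. As a quick, standalone illustration of that templating syntax (this uses plain `jinja2`, not this library's API, and the variable name is made up):

```python
from jinja2 import Template

# Plain Jinja2, shown only to illustrate the templating syntax used for prompt templates.
template = Template(
    'Return the output as JSON: {"name": "{{ campaign_name }}", "is_urgent": "yes/no"}'
)
print(template.render(campaign_name="Spring Launch"))
# -> Return the output as JSON: {"name": "Spring Launch", "is_urgent": "yes/no"}
```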
+## You can use IO Mapper agent with LangGraph or LlamaIndex -In langgraph you can used with states typed as ```python TypedDict``` or our recommended way with ```python Pydantic```. -We will show two examples of how to add the io mapper in a langgraph graph. We assume you have a langgraph graph created therefore the steps of how the graph is created is ommited. +## LangGraph +In LangGraph, use states typed as Python TypedDict or, preferably, with Pydantic. Here are two examples of adding an IO mapper to a LangGraph graph. Note that we assume you already have a LangGraph graph created, so those steps are omitted. +### Pydantic state ```python from agntcy_iomapper.langgraph import ( io_mapper_node, ) ``` -Users can easily specify the input fields that needs to be translated and the expected output fields +You can effortlessly designate the input fields requiring mapping, as well as the desired output fields. + +The following instruction directs the IO mapper agent to utilize the ```selected_users``` and ```name``` from the ```campaign_details``` field and map them to the ```stats.status```. Here is an example to illustrate this further. ```python workflow.add_node( "io_mapping", @@ -163,9 +172,9 @@ workflow.add_node( }, ) ``` -:warning: The configurations needed by the io mapper node must be passed in the metadata dictionary when adding the node +:warning: It is crucial to ensure that the configurations required by the IO mapper node are included in the ```metadata``` dictionary when adding the LangGraph node. Failure to do so may result in improper functionality or errors. + -This instruction tells the io mapper agent to use ```selected_users ``` and ```name``` from ```campaign_details``` field the langgraph stat ```python workflow.add_edge("create_communication", "io_mapping") workflow.add_edge("io_mapping", "send_communication") @@ -181,7 +190,66 @@ flowchart TD config = RunnableConfig(configurable={"llm": llm}) app.invoke(inputs, config) ``` +## TypedDict state +This example involves a multi-agent software system designed to process a list of ingredients. It interacts with an agent specialized in recipe books to identify feasible recipes based on the provided ingredients. The information is then relayed to an IO mapper, which converts it into a format suitable for display to the user. + +Consider the following state + +```python +class GraphState(TypedDict): + query: RecipeQuery + documents: Union[List[Document], None] + recipe: Union[RecipeResponse, None] + formatted_output: Union[str, None] +``` + +This line shows how io-mapper could be added to such application +```python +graph.add_node( + "recipe_io_mapper", + io_mapper_node, + metadata={ + "input_fields": ["documents.0.page_content"], + "input_schema": TypeAdapter(GraphState).json_schema(), + "output_schema": { + "type": "object", + "properties": { + "title": {"type": "string"}, + "ingredients": {"type": "array", "items": {"type": "string"}}, + "instructions": {"type": "string"}, + }, + "required": ["title", "ingredients, instructions"], + }, + "output_fields": ["recipe"], + }, +) +``` +Finally one can add the edge. 
+ +```python +graph.add_edge("recipe_expert", "recipe_io_mapper") +``` + +Compile and run the graph +``` +llm = get_azure() +config = RunnableConfig(configurable={"llm": llm}) +app = graph.compile() + +# Example usage +query = { + "query": {"ingredients": ["pasta", "tomato"]}, + "documents": None, + "response": None, + "formatted_output": None, +} +result = app.invoke(query, config) +``` + +## Use with LlamaIndex +### LlamaIndex Workflow +### LlamaIndex AgentWorkflow ## Use Imperative / Deterministic IO Mapper @@ -221,11 +289,11 @@ agents is omitted. data={"question": output_prof}, ) # instantiate the mapper - imerative_mapp = ImperativeIOMapper( + imperative_mapp = ImperativeIOMapper( field_mapping=mapping_object, ) # get the mapping result and send to the other agent - mapping_result = imerative_mapp.invoke(input=input) + mapping_result = imperative_mapp.invoke(input=input) ``` ### Use Examples From 89cca8cef3190fd3f586d3ef7bdc90dc5e65a200 Mon Sep 17 00:00:00 2001 From: Reginaldo Costa Date: Mon, 17 Mar 2025 17:28:28 +0100 Subject: [PATCH 3/3] chore(docs): improve usage documentation --- docs/usage.md | 214 ++++++++++++++++++++++---------------------------- 1 file changed, 95 insertions(+), 119 deletions(-) diff --git a/docs/usage.md b/docs/usage.md index b63eac1..11e3582 100644 --- a/docs/usage.md +++ b/docs/usage.md @@ -12,67 +12,22 @@ first agent) to match the desired output (typically the input of a second agent) it additionally supports specifying the model prompts for the translation. The configuration object provides a specification for the system and default user prompts: -### Models +## Example Agent IO mapping -The Agntcy IO Mapper functions provided an easy to use package for mapping output from -one agent to another. The data can be described in JSON or with natural language. The -(incomplete) [pydantic](https://docs.pydantic.dev/latest/) models follow: - -```python -class ArgumentsDescription(BaseModel): - json_schema: Schema | None = Field(description="Data format JSON schema") - description: str | None = Field(description="Data (semantic) natural language description") - -class BaseIOMapperInput(BaseModel): - input: ArgumentsDescription = Field(description="Input data descriptions") - output: ArgumentsDescription = Field(description="Output data descriptions") - data: Any = Field(description="Data to translate") - -class BaseIOMapperOutput(BaseModel): - data: Any = Field(description="Data after translation") - error: str | None = Field(description="Description of error on failure.") - -class BaseIOMapperConfig(BaseModel): - validate_json_input: bool = Field(description="Validate input against JSON schema.") - validate_json_output: bool = Field(description="Validate output against JSON schema.") -``` +#### LangGraph Example 1 +This project supports specifying model interations using [LangGraph](https://langchain-ai.github.io/langgraph/). +### Define an agent io mapper metadata ```python -class AgentIOMapperConfig(BaseIOMapperConfig): - system_prompt_template: str = Field( - description="System prompt Jinja2 template used with LLM service for translation." - ) - message_template: str = Field( - description="Default user message template. This can be overridden by the message request." 
- ) -``` - -and the input object supports overriding the user prompt for the requested translation: +metadata = IOMappingAgentMetadata( + input_fields=["selected_users", "campaign_details.name"], + output_fields=["stats.status"], +) -```python -class AgentIOMapperInput(BaseIOMapperInput): - message_template: str | None = Field( - description="Message (user) to send to LLM to effect translation.", - ) ``` +The abobe instruction directs the IO mapper agent to utilize the ```selected_users``` and ```name``` from the ```campaign_details``` field and map them to the ```stats.status```. Here is an example to illustrate this further. No further information is needed since the type information can be derived from the input data which is a pydantic model. -## Supported Packages -Further specification of models and their arguments is left to the underlying supported -packages: - -- [Pydantic-AI](#pydantic-ai) -- [LangGraph](#langgraph) - -#### Pydantic-AI -One of the supported platforms for managing the model interactions is [Pydantic-AI](https://ai.pydantic.dev/). - -#### LangGraph -This project supports specifying model interations using [LangGraph](https://langchain-ai.github.io/langgraph/). - -#### LlamaIndex -This project supports specifying model interations using [LangGraph](https://langchain-ai.github.io/langgraph/). - -# Interface +Bellow is a table that explain each fields of the IOMappingAgentMetadata class and how to use each @@ -132,8 +87,8 @@ TypeAdapter(GraphState).json_schema() - - + +
same as input_schema
-<td>output_template</td>
-<td>A prompt structured using a Jinja template.</td>
+<td>output_description_prompt</td>
+<td>A prompt structured using a Jinja template that can be used by the llm in the mapping definition</td>
 <td>:heavy_minus_sign:</td>
 <td>

 ```python
 """Output as JSON with this structure:
 {{
 "name": "Campaign Name",
 "content": "Campaign Content",
 "is_urgent": "yes/no"
 }}
 """
 ```

 </td>
 </tr>
 </table>
-## You can use IO Mapper agent with LangGraph or LlamaIndex -## LangGraph -In LangGraph, use states typed as Python TypedDict or, preferably, with Pydantic. Here are two examples of adding an IO mapper to a LangGraph graph. Note that we assume you already have a LangGraph graph created, so those steps are omitted. -### Pydantic state +### Define an Instance of the Agent ```python -from agntcy_iomapper.langgraph import ( - io_mapper_node, +mapping_agent = IOMappingAgent(metadata=metadata, llm=llm) + ``` +Bellow is the tablex explaining the interface of the IOMappingAgent class + + + + + + + + + + + + + -The following instruction directs the IO mapper agent to utilize the ```selected_users``` and ```name``` from the ```campaign_details``` field and map them to the ```stats.status```. Here is an example to illustrate this further. + + + + + + +
<table>
<tr>
<th>Field</th>
<th>Description</th>
<th>Required</th>
<th>Example</th>
</tr>
<tr>
<td>metadata</td>
<td>The mapping metadata, an instance of IOMappingAgentMetadata (see the table above)</td>
<td>:white_check_mark:</td>
<td>

```python
IOMappingAgentMetadata(
    input_fields=["documents.0.page_content"],
    output_fields=["recipe"],
    input_schema=TypeAdapter(GraphState).json_schema(),
    output_schema={
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "ingredients": {"type": "array", "items": {"type": "string"}},
            "instructions": {"type": "string"},
        },
        "required": ["title", "ingredients", "instructions"],
    },
)
```

</td>
</tr>
<tr>
<td>llm</td>
<td>An instance of the large language model to be used</td>
<td>:white_check_mark:</td>
<td>

```python
AzureChatOpenAI(
    model=model_version,
    api_version=api_version,
    seed=42,
    temperature=0,
)
```

</td>
</tr>
</table>
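For orientation, here is how the two arguments above might be wired together before the agent is added to a graph. The import paths and model settings are assumptions (check the installed package for the exact module layout); the metadata values are taken from the first example on this page.

```python
# Assumed import paths; verify them against the agntcy_iomapper package you have installed.
from agntcy_iomapper import IOMappingAgent, IOMappingAgentMetadata
from langchain_openai import AzureChatOpenAI

# Placeholder deployment settings; in this sketch the endpoint and credentials are expected
# to come from the usual AZURE_OPENAI_ENDPOINT / AZURE_OPENAI_API_KEY environment variables.
llm = AzureChatOpenAI(
    model="gpt-4o",
    api_version="2024-08-01-preview",
    seed=42,
    temperature=0,
)

metadata = IOMappingAgentMetadata(
    input_fields=["selected_users", "campaign_details.name"],
    output_fields=["stats.status"],
)

mapping_agent = IOMappingAgent(metadata=metadata, llm=llm)
```

With `llm` and `metadata` in hand, the node can be added to the graph exactly as shown in the next step.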
+ + +### Add the node to the LangGraph graph ```python workflow.add_node( "io_mapping", - io_mapper_node, - metadata={ - "input_fields": ["selected_users", "campaign_details.name"], - "output_fields": ["stats.status"], - }, + mapping_agent.langgraph_node, ) ``` -:warning: It is crucial to ensure that the configurations required by the IO mapper node are included in the ```metadata``` dictionary when adding the LangGraph node. Failure to do so may result in improper functionality or errors. - +### Finally add the edge and you can run the your LangGraph graph ```python workflow.add_edge("create_communication", "io_mapping") workflow.add_edge("io_mapping", "send_communication") @@ -185,69 +180,50 @@ flowchart TD A[create_communication] -->|input in specific format| B(IO Mapper Agent) B -->|output expected format| D[send_communication] ``` -:warning: Very important to set the llm instance to be used by the iomapper agent, in the runnable config with the key llm before you invoke the graph -```python -config = RunnableConfig(configurable={"llm": llm}) -app.invoke(inputs, config) -``` -## TypedDict state + +#### LangGraph Example 2 This example involves a multi-agent software system designed to process a list of ingredients. It interacts with an agent specialized in recipe books to identify feasible recipes based on the provided ingredients. The information is then relayed to an IO mapper, which converts it into a format suitable for display to the user. -Consider the following state +### Define an agent io mapper metadata ```python -class GraphState(TypedDict): - query: RecipeQuery - documents: Union[List[Document], None] - recipe: Union[RecipeResponse, None] - formatted_output: Union[str, None] +metadata = IOMappingAgentMetadata( + input_fields=["documents.0.page_content"], + output_fields=["recipe"], + input_schema=TypeAdapter(GraphState).json_schema(), + output_schema={ + "type": "object", + "properties": { + "title": {"type": "string"}, + "ingredients": {"type": "array", "items": {"type": "string"}}, + "instructions": {"type": "string"}, + }, + "required": ["title", "ingredients, instructions"], + }, +) +``` + +### Define an Instance of the Agent +```python +mapping_agent = IOMappingAgent(metadata=metadata, llm=llm) ``` -This line shows how io-mapper could be added to such application +### Add the node to the LangGraph graph ```python graph.add_node( "recipe_io_mapper", - io_mapper_node, - metadata={ - "input_fields": ["documents.0.page_content"], - "input_schema": TypeAdapter(GraphState).json_schema(), - "output_schema": { - "type": "object", - "properties": { - "title": {"type": "string"}, - "ingredients": {"type": "array", "items": {"type": "string"}}, - "instructions": {"type": "string"}, - }, - "required": ["title", "ingredients, instructions"], - }, - "output_fields": ["recipe"], - }, + mapping_agent.langgraph_node, ) ``` -Finally one can add the edge. -```python +### Finally add the edge and you can run the your LangGraph graph +``` graph.add_edge("recipe_expert", "recipe_io_mapper") ``` -Compile and run the graph -``` -llm = get_azure() -config = RunnableConfig(configurable={"llm": llm}) -app = graph.compile() - -# Example usage -query = { - "query": {"ingredients": ["pasta", "tomato"]}, - "documents": None, - "response": None, - "formatted_output": None, -} -result = app.invoke(query, config) -``` +#### LlamaIndex +This project supports specifying model interations using [LangGraph](https://langchain-ai.github.io/langgraph/). 
-## Use with LlamaIndex -### LlamaIndex Workflow ### LlamaIndex AgentWorkflow