Commit 592745e
Merge branch 'master' into box/citations
efriis authored Oct 4, 2024
2 parents e751a4e + 0495b7f
Showing 153 changed files with 4,161 additions and 11,745 deletions.
1 change: 0 additions & 1 deletion .github/workflows/_release.yml
@@ -294,7 +294,6 @@ jobs:
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
UNSTRUCTURED_API_KEY: ${{ secrets.UNSTRUCTURED_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}

16 changes: 14 additions & 2 deletions .github/workflows/api_doc_build.yml
@@ -65,6 +65,14 @@ jobs:
with:
repository: langchain-ai/langchain-experimental
path: langchain-experimental
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-milvus
path: langchain-milvus
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-unstructured
path: langchain-unstructured


- name: Set Git config
@@ -73,7 +81,7 @@ jobs:
git config --local user.email "[email protected]"
git config --local user.name "Github Actions"
- name: Move google libs
- name: Move libs
run: |
rm -rf \
langchain/libs/partners/google-genai \
@@ -87,7 +95,9 @@ jobs:
langchain/libs/partners/ai21 \
langchain/libs/partners/together \
langchain/libs/standard-tests \
langchain/libs/experimental
langchain/libs/experimental \
langchain/libs/partners/milvus \
langchain/libs/partners/unstructured
mv langchain-google/libs/genai langchain/libs/partners/google-genai
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-google/libs/community langchain/libs/partners/google-community
@@ -101,6 +111,8 @@ jobs:
mv langchain-ai21/libs/ai21 langchain/libs/partners/ai21
mv langchain-together/libs/together langchain/libs/partners/together
mv langchain-experimental/libs/experimental langchain/libs/experimental
mv langchain-milvus/libs/milvus langchain/libs/partners/milvus
mv langchain-unstructured/libs/unstructured langchain/libs/partners/unstructured
- name: Rm old html
run:
6 changes: 3 additions & 3 deletions docs/Makefile
@@ -82,9 +82,9 @@ vercel-build: install-vercel-deps build generate-references
mv $(OUTPUT_NEW_DOCS_DIR) docs
rm -rf build
mkdir static/api_reference
git clone --depth=1 https://github.com/baskaryan/langchain-api-docs-build.git
mv langchain-api-docs-build/api_reference_build/html/* static/api_reference/
rm -rf langchain-api-docs-build
git clone --depth=1 https://github.com/langchain-ai/langchain-api-docs-html.git
mv langchain-api-docs-html/api_reference_build/html/* static/api_reference/
rm -rf langchain-api-docs-html
NODE_OPTIONS="--max-old-space-size=5000" yarn run docusaurus build

start:
4 changes: 2 additions & 2 deletions docs/docs/contributing/index.mdx
@@ -3,8 +3,8 @@ sidebar_position: 0
---
# Welcome Contributors

Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
Hi there! Thank you for your interest in contributing to LangChain.
As an open-source project in a fast developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.

## 🗺️ Guidelines

88 changes: 56 additions & 32 deletions docs/docs/how_to/migrate_agent.ipynb
@@ -34,6 +34,12 @@
"LangChain agents (the [AgentExecutor](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor) in particular) have multiple configuration parameters.\n",
"In this notebook we will show how those parameters map to the LangGraph react agent executor using the [create_react_agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) prebuilt helper method.\n",
"\n",
"\n",
":::note\n",
"In LangGraph, the graph replaces LangChain's agent executor. It manages the agent's cycles and tracks the scratchpad as messages within its state. The LangChain \"agent\" corresponds to the state_modifier and LLM you've provided.\n",
":::\n",
"\n",
"\n",
"#### Prerequisites\n",
"\n",
"This how-to guide uses OpenAI as the LLM. Install the dependencies to run."
@@ -183,10 +189,10 @@
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"app = create_react_agent(model, tools)\n",
"langgraph_agent_executor = create_react_agent(model, tools)\n",
"\n",
"\n",
"messages = app.invoke({\"messages\": [(\"human\", query)]})\n",
"messages = langgraph_agent_executor.invoke({\"messages\": [(\"human\", query)]})\n",
"{\n",
" \"input\": query,\n",
" \"output\": messages[\"messages\"][-1].content,\n",
@@ -216,7 +222,9 @@
"\n",
"new_query = \"Pardon?\"\n",
"\n",
"messages = app.invoke({\"messages\": message_history + [(\"human\", new_query)]})\n",
"messages = langgraph_agent_executor.invoke(\n",
" {\"messages\": message_history + [(\"human\", new_query)]}\n",
")\n",
"{\n",
" \"input\": new_query,\n",
" \"output\": messages[\"messages\"][-1].content,\n",
@@ -309,10 +317,12 @@
"# This could also be a SystemMessage object\n",
"# system_message = SystemMessage(content=\"You are a helpful assistant. Respond only in Spanish.\")\n",
"\n",
"app = create_react_agent(model, tools, state_modifier=system_message)\n",
"langgraph_agent_executor = create_react_agent(\n",
" model, tools, state_modifier=system_message\n",
")\n",
"\n",
"\n",
"messages = app.invoke({\"messages\": [(\"user\", query)]})"
"messages = langgraph_agent_executor.invoke({\"messages\": [(\"user\", query)]})"
]
},
{
@@ -356,10 +366,12 @@
" ]\n",
"\n",
"\n",
"app = create_react_agent(model, tools, state_modifier=_modify_state_messages)\n",
"langgraph_agent_executor = create_react_agent(\n",
" model, tools, state_modifier=_modify_state_messages\n",
")\n",
"\n",
"\n",
"messages = app.invoke({\"messages\": [(\"human\", query)]})\n",
"messages = langgraph_agent_executor.invoke({\"messages\": [(\"human\", query)]})\n",
"print(\n",
" {\n",
" \"input\": query,\n",
@@ -503,13 +515,13 @@
"# system_message = SystemMessage(content=\"You are a helpful assistant. Respond only in Spanish.\")\n",
"\n",
"memory = MemorySaver()\n",
"app = create_react_agent(\n",
"langgraph_agent_executor = create_react_agent(\n",
" model, tools, state_modifier=system_message, checkpointer=memory\n",
")\n",
"\n",
"config = {\"configurable\": {\"thread_id\": \"test-thread\"}}\n",
"print(\n",
" app.invoke(\n",
" langgraph_agent_executor.invoke(\n",
" {\n",
" \"messages\": [\n",
" (\"user\", \"Hi, I'm polly! What's the output of magic_function of 3?\")\n",
@@ -520,15 +532,15 @@
")\n",
"print(\"---\")\n",
"print(\n",
" app.invoke({\"messages\": [(\"user\", \"Remember my name?\")]}, config)[\"messages\"][\n",
" -1\n",
" ].content\n",
" langgraph_agent_executor.invoke(\n",
" {\"messages\": [(\"user\", \"Remember my name?\")]}, config\n",
" )[\"messages\"][-1].content\n",
")\n",
"print(\"---\")\n",
"print(\n",
" app.invoke({\"messages\": [(\"user\", \"what was that output again?\")]}, config)[\n",
" \"messages\"\n",
" ][-1].content\n",
" langgraph_agent_executor.invoke(\n",
" {\"messages\": [(\"user\", \"what was that output again?\")]}, config\n",
" )[\"messages\"][-1].content\n",
")"
]
},
@@ -636,9 +648,13 @@
" return prompt.invoke({\"messages\": state[\"messages\"]}).to_messages()\n",
"\n",
"\n",
"app = create_react_agent(model, tools, state_modifier=_modify_state_messages)\n",
"langgraph_agent_executor = create_react_agent(\n",
" model, tools, state_modifier=_modify_state_messages\n",
")\n",
"\n",
"for step in app.stream({\"messages\": [(\"human\", query)]}, stream_mode=\"updates\"):\n",
"for step in langgraph_agent_executor.stream(\n",
" {\"messages\": [(\"human\", query)]}, stream_mode=\"updates\"\n",
"):\n",
" print(step)"
]
},
@@ -707,9 +723,9 @@
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"app = create_react_agent(model, tools=tools)\n",
"langgraph_agent_executor = create_react_agent(model, tools=tools)\n",
"\n",
"messages = app.invoke({\"messages\": [(\"human\", query)]})\n",
"messages = langgraph_agent_executor.invoke({\"messages\": [(\"human\", query)]})\n",
"\n",
"messages"
]
@@ -839,10 +855,10 @@
"\n",
"RECURSION_LIMIT = 2 * 3 + 1\n",
"\n",
"app = create_react_agent(model, tools=tools)\n",
"langgraph_agent_executor = create_react_agent(model, tools=tools)\n",
"\n",
"try:\n",
" for chunk in app.stream(\n",
" for chunk in langgraph_agent_executor.stream(\n",
" {\"messages\": [(\"human\", query)]},\n",
" {\"recursion_limit\": RECURSION_LIMIT},\n",
" stream_mode=\"values\",\n",
@@ -953,12 +969,12 @@
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"app = create_react_agent(model, tools=tools)\n",
"langgraph_agent_executor = create_react_agent(model, tools=tools)\n",
"# Set the max timeout for each step here\n",
"app.step_timeout = 2\n",
"langgraph_agent_executor.step_timeout = 2\n",
"\n",
"try:\n",
" for chunk in app.stream({\"messages\": [(\"human\", query)]}):\n",
" for chunk in langgraph_agent_executor.stream({\"messages\": [(\"human\", query)]}):\n",
" print(chunk)\n",
" print(\"------\")\n",
"except TimeoutError:\n",
@@ -994,17 +1010,21 @@
"\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"app = create_react_agent(model, tools=tools)\n",
"langgraph_agent_executor = create_react_agent(model, tools=tools)\n",
"\n",
"\n",
"async def stream(app, inputs):\n",
" async for chunk in app.astream({\"messages\": [(\"human\", query)]}):\n",
"async def stream(langgraph_agent_executor, inputs):\n",
" async for chunk in langgraph_agent_executor.astream(\n",
" {\"messages\": [(\"human\", query)]}\n",
" ):\n",
" print(chunk)\n",
" print(\"------\")\n",
"\n",
"\n",
"try:\n",
" task = asyncio.create_task(stream(app, {\"messages\": [(\"human\", query)]}))\n",
" task = asyncio.create_task(\n",
" stream(langgraph_agent_executor, {\"messages\": [(\"human\", query)]})\n",
" )\n",
" await asyncio.wait_for(task, timeout=3)\n",
"except TimeoutError:\n",
" print(\"Task Cancelled.\")"
@@ -1108,10 +1128,10 @@
"\n",
"RECURSION_LIMIT = 2 * 1 + 1\n",
"\n",
"app = create_react_agent(model, tools=tools)\n",
"langgraph_agent_executor = create_react_agent(model, tools=tools)\n",
"\n",
"try:\n",
" for chunk in app.stream(\n",
" for chunk in langgraph_agent_executor.stream(\n",
" {\"messages\": [(\"human\", query)]},\n",
" {\"recursion_limit\": RECURSION_LIMIT},\n",
" stream_mode=\"values\",\n",
@@ -1289,10 +1309,14 @@
" return [(\"system\", \"You are a helpful assistant\"), state[\"messages\"][0]]\n",
"\n",
"\n",
"app = create_react_agent(model, tools, state_modifier=_modify_state_messages)\n",
"langgraph_agent_executor = create_react_agent(\n",
" model, tools, state_modifier=_modify_state_messages\n",
")\n",
"\n",
"try:\n",
" for step in app.stream({\"messages\": [(\"human\", query)]}, stream_mode=\"updates\"):\n",
" for step in langgraph_agent_executor.stream(\n",
" {\"messages\": [(\"human\", query)]}, stream_mode=\"updates\"\n",
" ):\n",
" pass\n",
"except GraphRecursionError as e:\n",
" print(\"Stopping agent prematurely due to triggering stop condition\")"
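
For reference, here is a minimal sketch of the renamed usage pattern this notebook now documents: the prebuilt LangGraph agent is bound to langgraph_agent_executor rather than app. Only create_react_agent, the invoke call shape, and the new variable name come from the changed cells; the ChatOpenAI model, the model name, and the magic_function tool are assumptions standing in for cells outside these hunks.

    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI
    from langgraph.prebuilt import create_react_agent


    @tool
    def magic_function(input: int) -> int:
        """Applies a magic function to an input."""
        return input + 2


    # Assumed model; the notebook's actual model choice is not visible in this diff.
    model = ChatOpenAI(model="gpt-4o-mini")
    tools = [magic_function]

    # The prebuilt graph replaces AgentExecutor; state is tracked as a list of messages.
    langgraph_agent_executor = create_react_agent(model, tools)

    query = "what is the value of magic_function(3)?"
    messages = langgraph_agent_executor.invoke({"messages": [("human", query)]})
    print({"input": query, "output": messages["messages"][-1].content})

The same langgraph_agent_executor object is what the streaming, recursion-limit, and timeout examples later in this file operate on.
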
2 changes: 1 addition & 1 deletion docs/docs/integrations/document_loaders/airbyte.ipynb
@@ -29,7 +29,7 @@
"metadata": {},
"outputs": [],
"source": [
"% pip install -qU langchain-airbyte"
"%pip install -qU langchain-airbyte"
]
},
{
2 changes: 1 addition & 1 deletion docs/docs/integrations/document_loaders/browserbase.ipynb
@@ -26,7 +26,7 @@
"metadata": {},
"outputs": [],
"source": [
"% pip install browserbase"
"%pip install browserbase"
]
},
{
8 changes: 4 additions & 4 deletions docs/docs/integrations/document_loaders/upstage.ipynb
@@ -25,9 +25,9 @@
}
},
"source": [
"# UpstageLayoutAnalysisLoader\n",
"# UpstageDocumentParseLoader\n",
"\n",
"This notebook covers how to get started with `UpstageLayoutAnalysisLoader`.\n",
"This notebook covers how to get started with `UpstageDocumentParseLoader`.\n",
"\n",
"## Installation\n",
"\n",
@@ -89,10 +89,10 @@
}
],
"source": [
"from langchain_upstage import UpstageLayoutAnalysisLoader\n",
"from langchain_upstage import UpstageDocumentParseLoader\n",
"\n",
"file_path = \"/PATH/TO/YOUR/FILE.pdf\"\n",
"layzer = UpstageLayoutAnalysisLoader(file_path, split=\"page\")\n",
"layzer = UpstageDocumentParseLoader(file_path, split=\"page\")\n",
"\n",
"# For improved memory efficiency, consider using the lazy_load method to load documents page by page.\n",
"docs = layzer.load() # or layzer.lazy_load()\n",
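
Following up on the memory-efficiency comment in the cell above, a hedged sketch of the lazy_load path with the renamed loader; langchain-upstage and an UPSTAGE_API_KEY environment variable are assumed, and the file path is a placeholder.

    from langchain_upstage import UpstageDocumentParseLoader

    loader = UpstageDocumentParseLoader("/PATH/TO/YOUR/FILE.pdf", split="page")

    # lazy_load yields Document objects one page at a time instead of building
    # the full list in memory, which helps with large PDFs.
    for doc in loader.lazy_load():
        print(doc.metadata, len(doc.page_content))
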
2 changes: 1 addition & 1 deletion docs/docs/integrations/graphs/azure_cosmosdb_gremlin.ipynb
@@ -80,7 +80,7 @@
"outputs": [],
"source": [
"graph = GremlinGraph(\n",
" url=f\"=wss://{cosmosdb_name}.gremlin.cosmos.azure.com:443/\",\n",
" url=f\"wss://{cosmosdb_name}.gremlin.cosmos.azure.com:443/\",\n",
" username=f\"/dbs/{cosmosdb_db_id}/colls/{cosmosdb_db_graph_id}\",\n",
" password=cosmosdb_access_Key,\n",
")"