
Commit 39e5cbf

Support for langchain LLMs (#2506)
1 parent d30c78c commit 39e5cbf

File tree

9 files changed: +393 −1 lines changed

docs/components/llms/config.mdx

Lines changed: 1 addition & 0 deletions

@@ -109,6 +109,7 @@ Here's a comprehensive list of all parameters that can be used across different
 | `deepseek_base_url` | Base URL for DeepSeek API | DeepSeek |
 | `xai_base_url` | Base URL for XAI API | XAI |
 | `lmstudio_base_url` | Base URL for LM Studio API | LM Studio |
+| `langchain_provider` | Provider for Langchain | Langchain |
 </Tab>
 <Tab title="TypeScript">
 | Parameter | Description | Provider |
docs/components/llms/models/langchain.mdx (new file)

Lines changed: 72 additions & 0 deletions

@@ -0,0 +1,72 @@
---
title: LangChain
---
Mem0 supports LangChain as a provider, giving access to a wide range of LLMs. LangChain is a framework for developing applications powered by language models, and it makes it easy to integrate many LLM providers through a single, consistent interface.

For a complete list of available chat models supported by LangChain, refer to the [LangChain Chat Models documentation](https://python.langchain.com/docs/integrations/chat).
## Usage

<CodeGroup>
```python Python
import os
from mem0 import Memory

# Set the environment variables required by your chosen LangChain provider.
# For example, if using OpenAI through LangChain:
os.environ["OPENAI_API_KEY"] = "your-api-key"

config = {
    "llm": {
        "provider": "langchain",
        "config": {
            "langchain_provider": "OpenAI",
            "model": "gpt-4o",
            "temperature": 0.2,
            "max_tokens": 2000,
        }
    }
}

m = Memory.from_config(config)
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about thriller movies? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
m.add(messages, user_id="alice", metadata={"category": "movies"})
```
</CodeGroup>
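Once messages are added, the stored memories can be queried back. A small follow-up sketch (assuming the standard `Memory.search` API shown elsewhere in the Mem0 docs; the exact return shape varies by mem0 version):

```python
# Retrieve memories relevant to a query for this user
# (may return a list or {"results": [...]} depending on mem0 version).
related = m.search("What kind of movies does Alice like?", user_id="alice")
print(related)
```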
## Supported LangChain Providers

LangChain supports a wide range of LLM providers, including:

- OpenAI (`ChatOpenAI`)
- Anthropic (`ChatAnthropic`)
- Google (`ChatGoogleGenerativeAI`, `ChatVertexAI`)
- Mistral (`ChatMistralAI`)
- Ollama (`ChatOllama`)
- Azure OpenAI (`AzureChatOpenAI`)
- HuggingFace (`ChatHuggingFace`)
- And many more

You can specify any supported provider in the `langchain_provider` parameter. For a complete and up-to-date list of available providers, refer to the [LangChain Chat Models documentation](https://python.langchain.com/docs/integrations/chat).
## Provider-Specific Configuration

When using LangChain as a provider, you'll need to:

1. Set the appropriate environment variables for your chosen LLM provider
2. Specify the LangChain provider class name in the `langchain_provider` parameter
3. Include any additional configuration parameters required by the specific provider

<Note>
Make sure to install the necessary LangChain packages and any provider-specific dependencies.
</Note>
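For example, to switch to Anthropic through LangChain, a hedged sketch (assumptions: the `langchain-anthropic` package is installed, credentials live in the standard `ANTHROPIC_API_KEY` variable, and `claude-3-5-sonnet-latest` is substituted with whatever Claude chat model you use):

```python
import os
from mem0 import Memory

# Assumption: Anthropic credentials are provided via ANTHROPIC_API_KEY.
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-api-key"

config = {
    "llm": {
        "provider": "langchain",
        "config": {
            "langchain_provider": "Anthropic",  # resolves to the ChatAnthropic class
            "model": "claude-3-5-sonnet-latest",  # assumed model name for illustration
            "temperature": 0.1,
            "max_tokens": 2000,
        }
    }
}

m = Memory.from_config(config)
```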
## Config

All available parameters for the `langchain` config are listed in the [Master List of All Params in Config](../config).

docs/components/llms/overview.mdx

Lines changed: 1 addition & 0 deletions

@@ -33,6 +33,7 @@ To view all supported llms, visit the [Supported LLMs](./models).
 <Card title="DeepSeek" href="/components/llms/models/deepseek" />
 <Card title="xAI" href="/components/llms/models/xAI" />
 <Card title="LM Studio" href="/components/llms/models/lmstudio" />
+<Card title="Langchain" href="/components/llms/models/langchain" />
 </CardGroup>

 ## Structured vs Unstructured Outputs

docs/docs.json

Lines changed: 2 additions & 1 deletion

@@ -111,7 +111,8 @@
 "components/llms/models/gemini",
 "components/llms/models/deepseek",
 "components/llms/models/xAI",
-"components/llms/models/lmstudio"
+"components/llms/models/lmstudio",
+"components/llms/models/langchain"
 ]
 }
 ]

mem0/configs/llms/base.py

Lines changed: 7 additions & 0 deletions

@@ -41,6 +41,8 @@ def __init__(
     xai_base_url: Optional[str] = None,
     # LM Studio specific
     lmstudio_base_url: Optional[str] = "http://localhost:1234/v1",
+    # Langchain specific
+    langchain_provider: Optional[str] = None,
 ):
     """
     Initializes a configuration class instance for the LLM.
@@ -87,6 +89,8 @@ def __init__(
     :type xai_base_url: Optional[str], optional
     :param lmstudio_base_url: LM Studio base URL to be used, defaults to "http://localhost:1234/v1"
     :type lmstudio_base_url: Optional[str], optional
+    :param langchain_provider: Langchain provider to be used, defaults to None
+    :type langchain_provider: Optional[str], optional
     """

     self.model = model
@@ -123,3 +127,6 @@ def __init__(

     # LM Studio specific
     self.lmstudio_base_url = lmstudio_base_url
+
+    # Langchain specific
+    self.langchain_provider = langchain_provider
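With this in place, a config object can carry the LangChain provider name alongside the usual model parameters. A minimal sketch using only the parameters visible in the `__init__` signature above (direct construction shown for illustration; most callers pass a plain dict to `Memory.from_config` instead):

```python
from mem0.configs.llms.base import BaseLlmConfig

# "OpenAI" must match a member name of the LangchainProvider enum
# defined in mem0/llms/langchain.py below.
config = BaseLlmConfig(
    model="gpt-4o",
    temperature=0.2,
    max_tokens=2000,
    langchain_provider="OpenAI",
)
```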

mem0/llms/configs.py

Lines changed: 1 addition & 0 deletions

@@ -25,6 +25,7 @@ def validate_config(cls, v, values):
     "deepseek",
     "xai",
     "lmstudio",
+    "langchain",
 ):
     return v
 else:
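This whitelist is what lets `"langchain"` pass validation when an LLM config is built. A hedged sketch (assuming the surrounding pydantic model is the `LlmConfig` defined in this module and that the truncated `else` branch rejects unknown providers):

```python
from mem0.llms.configs import LlmConfig

# Accepted: "langchain" is now in the validator's whitelist.
cfg = LlmConfig(provider="langchain", config={"langchain_provider": "OpenAI", "model": "gpt-4o"})

# Rejected: an unknown provider name would hit the else branch
# (assumption: it raises a validation error).
# LlmConfig(provider="not-a-provider", config={})
```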

mem0/llms/langchain.py (new file)

Lines changed: 208 additions & 0 deletions

@@ -0,0 +1,208 @@
import enum
from typing import Dict, List, Optional

from mem0.configs.llms.base import BaseLlmConfig
from mem0.llms.base import LLMBase

# Default import for langchain_community
try:
    from langchain_community import chat_models
except ImportError:
    raise ImportError("langchain_community not found. Please install it with `pip install langchain-community`")

# Providers that ship their own integration package; everything else falls
# back to langchain_community.chat_models.
PROVIDER_PACKAGES = {
    # "Anthropic": "langchain_anthropic",  # Special handling for Anthropic with Pydantic v2
    "MistralAI": "langchain_mistralai",
    "Fireworks": "langchain_fireworks",
    "AzureOpenAI": "langchain_openai",
    "OpenAI": "langchain_openai",
    "Together": "langchain_together",
    "VertexAI": "langchain_google_vertexai",
    "GoogleAI": "langchain_google_genai",
    "Groq": "langchain_groq",
    "Cohere": "langchain_cohere",
    "Bedrock": "langchain_aws",
    "HuggingFace": "langchain_huggingface",
    "NVIDIA": "langchain_nvidia_ai_endpoints",
    "Ollama": "langchain_ollama",
    "AI21": "langchain_ai21",
    "Upstage": "langchain_upstage",
    "Databricks": "databricks_langchain",
    "Watsonx": "langchain_ibm",
    "xAI": "langchain_xai",
    "Perplexity": "langchain_perplexity",
}


class LangchainProvider(enum.Enum):
    Abso = "ChatAbso"
    AI21 = "ChatAI21"
    Alibaba = "ChatAlibabaCloud"
    Anthropic = "ChatAnthropic"
    Anyscale = "ChatAnyscale"
    AzureAIChatCompletionsModel = "AzureAIChatCompletionsModel"
    AzureOpenAI = "AzureChatOpenAI"
    AzureMLEndpoint = "ChatAzureMLEndpoint"
    Baichuan = "ChatBaichuan"
    Qianfan = "ChatQianfan"
    Bedrock = "ChatBedrock"
    Cerebras = "ChatCerebras"
    CloudflareWorkersAI = "ChatCloudflareWorkersAI"
    Cohere = "ChatCohere"
    ContextualAI = "ChatContextualAI"
    Coze = "ChatCoze"
    Dappier = "ChatDappier"
    Databricks = "ChatDatabricks"
    DeepInfra = "ChatDeepInfra"
    DeepSeek = "ChatDeepSeek"
    EdenAI = "ChatEdenAI"
    EverlyAI = "ChatEverlyAI"
    Fireworks = "ChatFireworks"
    Friendli = "ChatFriendli"
    GigaChat = "ChatGigaChat"
    Goodfire = "ChatGoodfire"
    GoogleAI = "ChatGoogleGenerativeAI"  # class exported by langchain_google_genai
    VertexAI = "ChatVertexAI"  # class exported by langchain_google_vertexai
    GPTRouter = "ChatGPTRouter"
    Groq = "ChatGroq"
    HuggingFace = "ChatHuggingFace"
    Watsonx = "ChatWatsonx"
    Jina = "ChatJina"
    Kinetica = "ChatKinetica"
    Konko = "ChatKonko"
    LiteLLM = "ChatLiteLLM"
    LiteLLMRouter = "ChatLiteLLMRouter"
    Llama2Chat = "Llama2Chat"
    LlamaAPI = "ChatLlamaAPI"
    LlamaEdge = "ChatLlamaEdge"
    LlamaCpp = "ChatLlamaCpp"
    Maritalk = "ChatMaritalk"
    MiniMax = "ChatMiniMax"
    MistralAI = "ChatMistralAI"
    MLX = "ChatMLX"
    ModelScope = "ChatModelScope"
    Moonshot = "ChatMoonshot"
    Naver = "ChatNaver"
    Netmind = "ChatNetmind"
    NVIDIA = "ChatNVIDIA"
    OCIModelDeployment = "ChatOCIModelDeployment"
    OCIGenAI = "ChatOCIGenAI"
    OctoAI = "ChatOctoAI"
    Ollama = "ChatOllama"
    OpenAI = "ChatOpenAI"
    Outlines = "ChatOutlines"
    Perplexity = "ChatPerplexity"
    Pipeshift = "ChatPipeshift"
    PredictionGuard = "ChatPredictionGuard"
    PremAI = "ChatPremAI"
    PromptLayerOpenAI = "PromptLayerChatOpenAI"
    QwQ = "ChatQwQ"
    Reka = "ChatReka"
    RunPod = "ChatRunPod"
    SambaNovaCloud = "ChatSambaNovaCloud"
    SambaStudio = "ChatSambaStudio"
    SeekrFlow = "ChatSeekrFlow"
    SnowflakeCortex = "ChatSnowflakeCortex"
    Solar = "ChatSolar"
    SparkLLM = "ChatSparkLLM"
    Nebula = "ChatNebula"
    Hunyuan = "ChatHunyuan"
    Together = "ChatTogether"
    TongyiQwen = "ChatTongyiQwen"
    Upstage = "ChatUpstage"
    Vectara = "ChatVectara"
    VLLM = "ChatVLLM"
    VolcEngine = "ChatVolcEngine"
    Writer = "ChatWriter"
    xAI = "ChatXAI"
    Xinference = "ChatXinference"
    Yandex = "ChatYandex"
    Yi = "ChatYi"
    Yuan2 = "ChatYuan2"
    ZhipuAI = "ChatZhipuAI"


class LangchainLLM(LLMBase):
    def __init__(self, config: Optional[BaseLlmConfig] = None):
        super().__init__(config)

        provider = self.config.langchain_provider
        if provider not in LangchainProvider.__members__:
            raise ValueError(f"Invalid provider: {provider}")
        model_name = LangchainProvider[provider].value

        try:
            # Check if this provider ships its own integration package
            if provider in PROVIDER_PACKAGES:
                package_name = PROVIDER_PACKAGES[provider]
                try:
                    # Import the model class directly from the package
                    module = __import__(package_name, fromlist=[model_name])
                    model_class = getattr(module, model_name)
                except ImportError:
                    raise ImportError(
                        f"Package {package_name} not found. Please install it with `pip install {package_name}`"
                    )
                except AttributeError:
                    raise ImportError(f"Model {model_name} not found in {package_name}")
            else:
                # Fall back to the default langchain_community module
                if not hasattr(chat_models, model_name):
                    raise ImportError(f"Provider {provider} not found in langchain_community.chat_models")

                model_class = getattr(chat_models, model_name)

            # Initialize the model with the relevant config parameters
            self.langchain_model = model_class(
                model=self.config.model,
                temperature=self.config.temperature,
                max_tokens=self.config.max_tokens,
                api_key=self.config.api_key,
            )
        except (ImportError, AttributeError, ValueError) as e:
            raise ImportError(f"Error setting up langchain model for provider {provider}: {str(e)}")

    def generate_response(
        self,
        messages: List[Dict[str, str]],
        response_format=None,
        tools: Optional[List[Dict]] = None,
        tool_choice: str = "auto",
    ):
        """
        Generate a response based on the given messages using langchain_community.

        Args:
            messages (list): List of message dicts containing 'role' and 'content'.
            response_format (str or object, optional): Format of the response. Not used in Langchain.
            tools (list, optional): List of tools that the model can call. Not used in Langchain.
            tool_choice (str, optional): Tool choice method. Not used in Langchain.

        Returns:
            str: The generated response.
        """
        try:
            # Convert the messages to LangChain's (role, content) tuple format
            langchain_messages = []
            for message in messages:
                role = message["role"]
                content = message["content"]

                if role == "system":
                    langchain_messages.append(("system", content))
                elif role == "user":
                    langchain_messages.append(("human", content))
                elif role == "assistant":
                    langchain_messages.append(("ai", content))

            if not langchain_messages:
                raise ValueError("No valid messages found in the messages list")

            ai_message = self.langchain_model.invoke(langchain_messages)

            return ai_message.content

        except Exception as e:
            raise Exception(f"Error generating response using langchain model: {str(e)}")
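A minimal sketch of exercising this class directly (assumptions: `langchain-openai` is installed and `OPENAI_API_KEY` holds a valid key; callers normally reach this class through `LlmFactory` rather than instantiating it by hand):

```python
import os

from mem0.configs.llms.base import BaseLlmConfig
from mem0.llms.langchain import LangchainLLM

os.environ["OPENAI_API_KEY"] = "your-api-key"  # assumed credential

config = BaseLlmConfig(model="gpt-4o", temperature=0.2, max_tokens=2000, langchain_provider="OpenAI")
llm = LangchainLLM(config)

# The wrapper converts these role/content dicts to LangChain (role, content)
# tuples before calling invoke(); tools and response_format are ignored here.
reply = llm.generate_response([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Say hello in one word."},
])
print(reply)
```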

mem0/utils/factory.py

Lines changed: 1 addition & 0 deletions

@@ -26,6 +26,7 @@ class LlmFactory:
     "deepseek": "mem0.llms.deepseek.DeepSeekLLM",
     "xai": "mem0.llms.xai.XAILLM",
     "lmstudio": "mem0.llms.lmstudio.LMStudioLLM",
+    "langchain": "mem0.llms.langchain.LangchainLLM",
 }

 @classmethod
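With the registry entry in place, the existing factory path resolves the `"langchain"` key to `LangchainLLM`. A hedged sketch (assuming the `@classmethod` truncated above is the usual `LlmFactory.create(provider_name, config)` used for the other providers in this table):

```python
from mem0.utils.factory import LlmFactory

# "langchain" now maps to mem0.llms.langchain.LangchainLLM; the config dict
# is turned into a BaseLlmConfig by the factory (assumption based on how
# the other provider entries are consumed).
llm = LlmFactory.create(
    "langchain",
    {"langchain_provider": "OpenAI", "model": "gpt-4o", "temperature": 0.2},
)
```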
