
Commit 9dfa9b4

Add langchain embedding, update langchain LLM and version bump -> 0.1.84 (#2510)
1 parent 5509066 commit 9dfa9b4

File tree: 14 files changed (+266 −253 lines)

docs/changelog/overview.mdx (+9)

```diff
@@ -6,6 +6,15 @@ mode: "wide"
 <Tabs>
 <Tab title="Python">

+<Update label="2025-04-07" description="v0.1.84">
+
+**New Features:**
+- **Langchain Embedder:** Added Langchain embedder integration
+
+**Improvements:**
+- **Langchain LLM:** Updated Langchain LLM integration to directly pass the Langchain LLM object
+</Update>
+
 <Update label="2025-04-07" description="v0.1.83">

 **Bug Fixes:**
```
docs/components/embedders/models/langchain.mdx (new file, +120)

````mdx
---
title: LangChain
---

Mem0 supports LangChain as a provider to access a wide range of embedding models. LangChain is a framework for developing applications powered by language models, making it easy to integrate various embedding providers through a consistent interface.

For a complete list of available embedding models supported by LangChain, refer to the [LangChain Text Embedding documentation](https://python.langchain.com/docs/integrations/text_embedding/).

## Usage

<CodeGroup>
```python Python
import os
from mem0 import Memory
from langchain_openai import OpenAIEmbeddings

# Set necessary environment variables for your chosen LangChain provider
os.environ["OPENAI_API_KEY"] = "your-api-key"

# Initialize a LangChain embeddings model directly
openai_embeddings = OpenAIEmbeddings(
    model="text-embedding-3-small",
    dimensions=1536
)

# Pass the initialized model to the config
config = {
    "embedder": {
        "provider": "langchain",
        "config": {
            "model": openai_embeddings
        }
    }
}

m = Memory.from_config(config)
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about a thriller movie? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
m.add(messages, user_id="alice", metadata={"category": "movies"})
```
</CodeGroup>

## Supported LangChain Embedding Providers

LangChain supports a wide range of embedding providers, including:

- OpenAI (`OpenAIEmbeddings`)
- Cohere (`CohereEmbeddings`)
- Google (`VertexAIEmbeddings`)
- Hugging Face (`HuggingFaceEmbeddings`)
- Sentence Transformers (`HuggingFaceEmbeddings`)
- Azure OpenAI (`AzureOpenAIEmbeddings`)
- Ollama (`OllamaEmbeddings`)
- Together (`TogetherEmbeddings`)
- And many more

You can use any of these model instances directly in your configuration. For a complete and up-to-date list of available embedding providers, refer to the [LangChain Text Embedding documentation](https://python.langchain.com/docs/integrations/text_embedding/).

## Provider-Specific Configuration

When using LangChain as an embedder provider, you'll need to:

1. Set the appropriate environment variables for your chosen embedding provider
2. Import and initialize the specific model class you want to use
3. Pass the initialized model instance to the config

### Examples with Different Providers

#### HuggingFace Embeddings

```python
from langchain_huggingface import HuggingFaceEmbeddings

# Initialize a HuggingFace embeddings model
hf_embeddings = HuggingFaceEmbeddings(
    model_name="BAAI/bge-small-en-v1.5",
    encode_kwargs={"normalize_embeddings": True}
)

config = {
    "embedder": {
        "provider": "langchain",
        "config": {
            "model": hf_embeddings
        }
    }
}
```

#### Ollama Embeddings

```python
from langchain_ollama import OllamaEmbeddings

# Initialize an Ollama embeddings model
ollama_embeddings = OllamaEmbeddings(
    model="nomic-embed-text"
)

config = {
    "embedder": {
        "provider": "langchain",
        "config": {
            "model": ollama_embeddings
        }
    }
}
```

<Note>
Make sure to install the necessary LangChain packages and any provider-specific dependencies.
</Note>

## Config

All available parameters for the `langchain` embedder config are present in [Master List of All Params in Config](../config).
````

docs/components/embedders/overview.mdx (+1)

```diff
@@ -23,6 +23,7 @@ See the list of supported embedders below.
 <Card title="Vertex AI" href="/components/embedders/models/vertexai"></Card>
 <Card title="Together" href="/components/embedders/models/together"></Card>
 <Card title="LM Studio" href="/components/embedders/models/lmstudio"></Card>
+<Card title="Langchain" href="/components/embedders/models/langchain"></Card>
 </CardGroup>

 ## Usage
```

docs/components/llms/config.mdx (−1)

```diff
@@ -109,7 +109,6 @@ Here's a comprehensive list of all parameters that can be used across different
 | `deepseek_base_url` | Base URL for DeepSeek API | DeepSeek |
 | `xai_base_url` | Base URL for XAI API | XAI |
 | `lmstudio_base_url` | Base URL for LM Studio API | LM Studio |
-| `langchain_provider` | Provider for Langchain | Langchain |
 </Tab>
 <Tab title="TypeScript">
 | Parameter | Description | Provider |
```

docs/components/llms/models/langchain.mdx (+13 −8)

````diff
@@ -12,19 +12,24 @@ For a complete list of available chat models supported by LangChain, refer to th
 ```python Python
 import os
 from mem0 import Memory
+from langchain_openai import ChatOpenAI

 # Set necessary environment variables for your chosen LangChain provider
-# For example, if using OpenAI through LangChain:
 os.environ["OPENAI_API_KEY"] = "your-api-key"

+# Initialize a LangChain model directly
+openai_model = ChatOpenAI(
+    model="gpt-4o",
+    temperature=0.2,
+    max_tokens=2000
+)
+
+# Pass the initialized model to the config
 config = {
     "llm": {
         "provider": "langchain",
         "config": {
-            "langchain_provider": "OpenAI",
-            "model": "gpt-4o",
-            "temperature": 0.2,
-            "max_tokens": 2000,
+            "model": openai_model
         }
     }
 }
@@ -53,15 +58,15 @@ LangChain supports a wide range of LLM providers, including:
 - HuggingFace (`HuggingFaceChatEndpoint`)
 - And many more

-You can specify any supported provider in the `langchain_provider` parameter. For a complete and up-to-date list of available providers, refer to the [LangChain Chat Models documentation](https://python.langchain.com/docs/integrations/chat).
+You can use any of these model instances directly in your configuration. For a complete and up-to-date list of available providers, refer to the [LangChain Chat Models documentation](https://python.langchain.com/docs/integrations/chat).

 ## Provider-Specific Configuration

 When using LangChain as a provider, you'll need to:

 1. Set the appropriate environment variables for your chosen LLM provider
-2. Specify the LangChain provider class name in the `langchain_provider` parameter
-3. Include any additional configuration parameters required by the specific provider
+2. Import and initialize the specific model class you want to use
+3. Pass the initialized model instance to the config

 <Note>
 Make sure to install the necessary LangChain packages and any provider-specific dependencies.
````
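The shape of this migration can be sketched without LangChain installed. Everything below is illustrative: `FakeChatModel` is a hypothetical stand-in for an initialized chat model such as `ChatOpenAI`, not part of mem0 or LangChain.

```python
# Sketch of the config migration in this commit: instead of naming a
# provider via `langchain_provider`, callers now build the LangChain
# chat model themselves and drop the live object into the config.

class FakeChatModel:
    """Hypothetical stand-in for an initialized LangChain chat model."""
    def __init__(self, model: str, temperature: float, max_tokens: int):
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens

# Old style (removed in this commit): plain strings and numbers in the dict.
old_config = {
    "llm": {
        "provider": "langchain",
        "config": {
            "langchain_provider": "OpenAI",
            "model": "gpt-4o",
            "temperature": 0.2,
            "max_tokens": 2000,
        },
    }
}

# New style: generation parameters live on the model object, and the
# config's "model" key holds that object directly.
llm = FakeChatModel(model="gpt-4o", temperature=0.2, max_tokens=2000)
new_config = {
    "llm": {
        "provider": "langchain",
        "config": {"model": llm},
    }
}

print(type(new_config["llm"]["config"]["model"]).__name__)  # FakeChatModel
```

The design consequence is that provider-specific knobs no longer need to round-trip through mem0's config schema; anything the LangChain model class accepts is set at construction time.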

docs/docs.json (+2 −1)

```diff
@@ -161,7 +161,8 @@
         "components/embedders/models/vertexai",
         "components/embedders/models/gemini",
         "components/embedders/models/lmstudio",
-        "components/embedders/models/together"
+        "components/embedders/models/together",
+        "components/embedders/models/langchain"
       ]
     }
   ]
```

mem0/configs/llms/base.py (+1 −8)

```diff
@@ -13,7 +13,7 @@ class BaseLlmConfig(ABC):

     def __init__(
         self,
-        model: Optional[str] = None,
+        model: Optional[Union[str, Dict]] = None,
         temperature: float = 0.1,
         api_key: Optional[str] = None,
         max_tokens: int = 2000,
@@ -41,8 +41,6 @@ def __init__(
         xai_base_url: Optional[str] = None,
         # LM Studio specific
         lmstudio_base_url: Optional[str] = "http://localhost:1234/v1",
-        # Langchain specific
-        langchain_provider: Optional[str] = None,
     ):
         """
         Initializes a configuration class instance for the LLM.
@@ -89,8 +87,6 @@ def __init__(
         :type xai_base_url: Optional[str], optional
         :param lmstudio_base_url: LM Studio base URL to be use, defaults to "http://localhost:1234/v1"
        :type lmstudio_base_url: Optional[str], optional
-        :param langchain_provider: Langchain provider to be use, defaults to None
-        :type langchain_provider: Optional[str], optional
         """

         self.model = model
@@ -127,6 +123,3 @@ def __init__(

         # LM Studio specific
         self.lmstudio_base_url = lmstudio_base_url
-
-        # Langchain specific
-        self.langchain_provider = langchain_provider
```

mem0/embeddings/configs.py (+1)

```diff
@@ -22,6 +22,7 @@ def validate_config(cls, v, values):
             "vertexai",
             "together",
             "lmstudio",
+            "langchain",
         ]:
             return v
         else:
```
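The intent of that one-line addition is to extend the embedder provider allow-list. As a rough sketch (the real check runs inside the validator shown above, and only the tail of the real provider list is visible in this diff):

```python
# Illustrative allow-list check mirroring mem0/embeddings/configs.py.
# Only the tail of the real provider list appears in the diff, so this
# set is deliberately partial.
SUPPORTED_TAIL = {"vertexai", "together", "lmstudio", "langchain"}

def validate_provider(provider: str) -> str:
    """Return the provider name if allowed, else raise ValueError."""
    if provider in SUPPORTED_TAIL:
        return provider
    raise ValueError(f"Unsupported embedding provider: {provider}")

print(validate_provider("langchain"))  # langchain
```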

mem0/embeddings/langchain.py (new file, +36)

```python
import os
from typing import Literal, Optional

from mem0.configs.embeddings.base import BaseEmbedderConfig
from mem0.embeddings.base import EmbeddingBase

try:
    from langchain.embeddings.base import Embeddings
except ImportError:
    raise ImportError("langchain is not installed. Please install it using `pip install langchain`")


class LangchainEmbedding(EmbeddingBase):
    def __init__(self, config: Optional[BaseEmbedderConfig] = None):
        super().__init__(config)

        if self.config.model is None:
            raise ValueError("`model` parameter is required")

        if not isinstance(self.config.model, Embeddings):
            raise ValueError("`model` must be an instance of Embeddings")

        self.langchain_model = self.config.model

    def embed(self, text, memory_action: Optional[Literal["add", "search", "update"]] = None):
        """
        Get the embedding for the given text using Langchain.

        Args:
            text (str): The text to embed.
            memory_action (optional): The type of embedding to use. Must be one of "add", "search", or "update". Defaults to None.

        Returns:
            list: The embedding vector.
        """
        return self.langchain_model.embed_query(text)
```
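The contract this class enforces can be exercised without the real langchain package. In the sketch below, `Embeddings`, `FakeEmbeddings`, and `LangchainEmbeddingSketch` are all hypothetical stand-ins that mirror only the parts of the interface the new embedder touches: the `embed_query` method and the `isinstance` check.

```python
from abc import ABC, abstractmethod
from typing import List, Optional

# Stand-in for langchain.embeddings.base.Embeddings, so the sketch
# runs without langchain installed.
class Embeddings(ABC):
    @abstractmethod
    def embed_query(self, text: str) -> List[float]: ...

class FakeEmbeddings(Embeddings):
    def embed_query(self, text: str) -> List[float]:
        # Deterministic toy vector: character count and word count.
        return [float(len(text)), float(len(text.split()))]

class LangchainEmbeddingSketch:
    """Mirrors the validation LangchainEmbedding performs on config.model."""
    def __init__(self, model: Optional[Embeddings]):
        if model is None:
            raise ValueError("`model` parameter is required")
        if not isinstance(model, Embeddings):
            raise ValueError("`model` must be an instance of Embeddings")
        self.langchain_model = model

    def embed(self, text: str) -> List[float]:
        # Delegates straight to the wrapped model, like the real class.
        return self.langchain_model.embed_query(text)

embedder = LangchainEmbeddingSketch(FakeEmbeddings())
print(embedder.embed("hello world"))  # [11.0, 2.0]
```

Because the wrapper only calls `embed_query`, any object satisfying the LangChain `Embeddings` interface plugs in unchanged, which is what makes the single `"model"` config key sufficient.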
