
Commit 175a726

Adds comparisons to langchain
This adds to the docs so that we can more easily and clearly show people a side-by-side comparison of code. Also adds a section to the LLM Workflows folder to make this comparison accessible there.
1 parent 144b6db commit 175a726

62 files changed: +1700 −3 lines

.pre-commit-config.yaml

+6 −2

@@ -9,7 +9,7 @@ repos:
     rev: 23.11.0
     hooks:
       - id: black
-        args: [--line-length=100]
+        args: [--line-length=100, --exclude=docs/*]
   - repo: https://github.com/pre-commit/pre-commit-hooks
     rev: v4.5.0
     hooks:
@@ -25,7 +25,11 @@ repos:
     rev: '5.12.0'
     hooks:
       - id: isort
-        args: ["--profile", "black", "--line-length=100", "--known-local-folder", "tests", "-p", "hamilton"]
+        args: ["--profile", "black",
+               "--line-length=100",
+               "--extend-skip=docs/*/*/*.py",
+               "--known-local-folder",
+               "tests", "-p", "hamilton"]
   - repo: https://github.com/pycqa/flake8
     rev: 6.1.0
     hooks:

docs/code-comparisons/airflow.rst

+5

======================
Airflow
======================

Check back soon!

docs/code-comparisons/index.rst

+9

================
Code Comparisons
================

This section showcases what Hamilton code looks like in comparison to other popular libraries and frameworks.

.. toctree::

   langchain
   airflow

docs/code-comparisons/langchain.rst

+204

======================
LangChain
======================

Here we have some code snippets that compare a vanilla code implementation
with LangChain and Hamilton.

LangChain's focus is on hiding details and keeping code terse.

Hamilton's focus instead is on making code more readable, maintainable, and, importantly, customizable.

So don't be surprised that Hamilton's code is "longer" - that's by design. There is
also little abstraction between you and the underlying libraries with Hamilton.
With LangChain they're abstracted away, so you can't easily see what's going on
underneath.

*Rhetorical question*: which code would you rather maintain, change, and update?
----------------------
A simple joke example
----------------------

.. table:: Simple Invocation
   :align: left

   +-----------------------------------------------------------+----------------------------------------------------------+-------------------------------------------------------------+
   | Hamilton | Vanilla | LangChain |
   +===========================================================+==========================================================+=============================================================+
   | .. literalinclude:: langchain_snippets/hamilton_invoke.py | .. literalinclude:: langchain_snippets/vanilla_invoke.py | .. literalinclude:: langchain_snippets/lcel_invoke.py |
   | | | |
   +-----------------------------------------------------------+----------------------------------------------------------+-------------------------------------------------------------+

.. figure:: langchain_snippets/hamilton-invoke.png
   :alt: Structure of the Hamilton DAG
   :align: center
   :width: 50%

   The Hamilton DAG visualized.
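The snippets themselves live in ``langchain_snippets/`` and are pulled in above via ``literalinclude``. As a rough, hypothetical sketch (not the actual ``hamilton_invoke.py``; module and node names here are assumptions modeled on the ``hamilton_async.py`` snippet further down), the Hamilton side could look like this:

.. code-block:: python

    # hypothetical module, e.g. my_joke_module.py
    from typing import List

    import openai


    def llm_client() -> openai.OpenAI:
        # one node produces the (synchronous) OpenAI client
        return openai.OpenAI()


    def joke_prompt(topic: str) -> str:
        return f"Tell me a short joke about {topic}"


    def joke_messages(joke_prompt: str) -> List[dict]:
        # wrap the prompt in the chat-message format the API expects
        return [{"role": "user", "content": joke_prompt}]


    def joke_response(llm_client: openai.OpenAI, joke_messages: List[dict]) -> str:
        response = llm_client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=joke_messages,
        )
        return response.choices[0].message.content


    if __name__ == "__main__":
        import my_joke_module  # i.e. this file, imported as a module

        from hamilton import driver

        dr = driver.Builder().with_modules(my_joke_module).build()
        print(dr.execute(["joke_response"], inputs={"topic": "ice cream"}))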
-----------------------
A streamed joke example
-----------------------
With Hamilton we can just swap the call function to return a streamed response.
Note: you could use @config.when to include both streamed and non-streamed versions in the same DAG.

.. table:: Streamed Version
   :align: left

   +-------------------------------------------------------------+------------------------------------------------------------+---------------------------------------------------------------+
   | Hamilton | Vanilla | LangChain |
   +=============================================================+============================================================+===============================================================+
   | .. literalinclude:: langchain_snippets/hamilton_streamed.py | .. literalinclude:: langchain_snippets/vanilla_streamed.py | .. literalinclude:: langchain_snippets/lcel_streamed.py |
   | | | |
   +-------------------------------------------------------------+------------------------------------------------------------+---------------------------------------------------------------+

.. figure:: langchain_snippets/hamilton-streamed.png
   :alt: Structure of the Hamilton DAG
   :align: center
   :width: 50%

   The Hamilton DAG visualized.
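For a flavor of what "swapping the call function" means, here is a hypothetical streamed variant (not the actual ``hamilton_streamed.py``; it assumes the OpenAI v1 client, whose ``stream=True`` option yields response chunks):

.. code-block:: python

    # hypothetical streamed variant -- only the response function changes
    from typing import Iterator, List

    import openai


    def llm_client() -> openai.OpenAI:
        return openai.OpenAI()


    def joke_messages(topic: str) -> List[dict]:
        return [{"role": "user", "content": f"Tell me a short joke about {topic}"}]


    def joke_response(llm_client: openai.OpenAI, joke_messages: List[dict]) -> Iterator[str]:
        # stream=True makes the client return chunks as they arrive
        stream = llm_client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=joke_messages,
            stream=True,
        )
        for chunk in stream:
            content = chunk.choices[0].delta.content
            if content is not None:
                yield content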
-------------------------------
A "batch" parallel joke example
-------------------------------
In this batch example, the joke requests are parallelized.
Note: with Hamilton you can delegate to many different backends for parallelization,
e.g. Ray, Dask, etc. We use multi-threading here.

.. table:: Batch Parallel Version
   :align: left

   +-------------------------------------------------------------+------------------------------------------------------------+---------------------------------------------------------------+
   | Hamilton | Vanilla | LangChain |
   +=============================================================+============================================================+===============================================================+
   | .. literalinclude:: langchain_snippets/hamilton_batch.py | .. literalinclude:: langchain_snippets/vanilla_batch.py | .. literalinclude:: langchain_snippets/lcel_batch.py |
   | | | |
   +-------------------------------------------------------------+------------------------------------------------------------+---------------------------------------------------------------+

.. figure:: langchain_snippets/hamilton-batch.png
   :alt: Structure of the Hamilton DAG
   :align: center
   :width: 75%

   The Hamilton DAG visualized.
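A sketch of one way to express this with Hamilton, assuming the snippet uses Hamilton's ``Parallelizable``/``Collect`` constructs together with the multi-threading executor (the real ``hamilton_batch.py`` may be structured differently; the module name is made up):

.. code-block:: python

    # hypothetical batch module -- one joke request per topic, run in parallel
    from typing import List

    import openai

    from hamilton.htypes import Collect, Parallelizable


    def llm_client() -> openai.OpenAI:
        return openai.OpenAI()


    def topic(topics: List[str]) -> Parallelizable[str]:
        # each yielded topic becomes its own parallel branch of the DAG
        for t in topics:
            yield t


    def joke_response(llm_client: openai.OpenAI, topic: str) -> str:
        response = llm_client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": f"Tell me a short joke about {topic}"}],
        )
        return response.choices[0].message.content


    def joke_responses(joke_response: Collect[str]) -> List[str]:
        # gather the parallel branches back into one list
        return list(joke_response)


    if __name__ == "__main__":
        import my_batch_module  # i.e. this file

        from hamilton import driver
        from hamilton.execution import executors

        dr = (
            driver.Builder()
            .with_modules(my_batch_module)
            .enable_dynamic_execution(allow_experimental_mode=True)
            .with_remote_executor(executors.MultiThreadingExecutor(max_tasks=5))
            .build()
        )
        print(dr.execute(["joke_responses"], inputs={"topics": ["ice cream", "spinach"]}))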
-------------------------
An "async" joke example
-------------------------
Here we show how to make the joke using async constructs. With Hamilton
you can mix and match async and regular functions; the only change
is that you need to use the async Hamilton Driver.

.. table:: Async Version
   :align: left

   +-------------------------------------------------------------+------------------------------------------------------------+---------------------------------------------------------------+
   | Hamilton | Vanilla | LangChain |
   +=============================================================+============================================================+===============================================================+
   | .. literalinclude:: langchain_snippets/hamilton_async.py | .. literalinclude:: langchain_snippets/vanilla_async.py | .. literalinclude:: langchain_snippets/lcel_async.py |
   | | | |
   +-------------------------------------------------------------+------------------------------------------------------------+---------------------------------------------------------------+

.. figure:: langchain_snippets/hamilton-async.png
   :alt: Structure of the Hamilton DAG
   :align: center
   :width: 50%

   The Hamilton DAG visualized.
---------------------------------
Switch LLM to completion for joke
---------------------------------
Here we show how to make the joke by switching to a different OpenAI model that uses the completions (rather than chat) API.
Note: we use the @config.when construct to augment the original DAG and add a new function
that uses the different OpenAI model.

.. table:: Completion Version
   :align: left

   +------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+
   | Hamilton | Vanilla | LangChain |
   +==================================================================+=================================================================+===============================================================+
   | .. literalinclude:: langchain_snippets/hamilton_completion.py | .. literalinclude:: langchain_snippets/vanilla_completion.py | .. literalinclude:: langchain_snippets/lcel_completion.py |
   | | | |
   +------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+

.. figure:: langchain_snippets/hamilton-completion.png
   :alt: Structure of the Hamilton DAG
   :align: center
   :width: 50%

   The Hamilton DAG visualized with configuration provided for the completion path. Note the dangling node - that's normal; it's not used in the completion path.
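To make the @config.when idea concrete, a hypothetical version of the added function might look like the following (modeled on the ``hamilton_anthropic.py`` snippet further down; the config key ``type`` and the function name are assumptions, not the actual ``hamilton_completion.py``):

.. code-block:: python

    # hypothetical addition -- a completion-based alternative behind a config switch
    import openai

    from hamilton.function_modifiers import config


    @config.when(type="completion")
    def joke_response__completion(llm_client: openai.OpenAI, joke_prompt: str) -> str:
        # uses the completions API with an instruct model, instead of the chat API
        response = llm_client.completions.create(
            model="gpt-3.5-turbo-instruct",
            prompt=joke_prompt,
        )
        return response.choices[0].text

Building the driver with ``.with_config({"type": "completion"})`` would then select this function in place of the chat-based one.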
---------------------------------
Switch to using Anthropic
---------------------------------
Here we show how to make the joke switching to a different model provider, in this case
Anthropic.
Note: we use the @config.when construct to augment the original DAG and add new functions
to use Anthropic.

.. table:: Anthropic Version
   :align: left

   +------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+
   | Hamilton | Vanilla | LangChain |
   +==================================================================+=================================================================+===============================================================+
   | .. literalinclude:: langchain_snippets/hamilton_anthropic.py | .. literalinclude:: langchain_snippets/vanilla_anthropic.py | .. literalinclude:: langchain_snippets/lcel_anthropic.py |
   | | | |
   +------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+

.. figure:: langchain_snippets/hamilton-anthropic.png
   :alt: Structure of the Hamilton DAG
   :align: center
   :width: 50%

   The Hamilton DAG visualized with configuration provided to use Anthropic.
---------------------------------
Logging
---------------------------------
Here we show how to log more information about the joke request. Hamilton has
lots of customization options, and one available out of the box is to log more information via
printing.

.. table:: Logging
   :align: left

   +------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+
   | Hamilton | Vanilla | LangChain |
   +==================================================================+=================================================================+===============================================================+
   | .. literalinclude:: langchain_snippets/hamilton_logging.py | .. literalinclude:: langchain_snippets/vanilla_logging.py | .. literalinclude:: langchain_snippets/lcel_logging.py |
   | | | |
   +------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+
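As a minimal, hypothetical stand-in for the logging snippet (the real ``hamilton_logging.py`` may instead use one of Hamilton's built-in adapters), plain printing inside a node already gives you visibility into what went in and what came out:

.. code-block:: python

    # hypothetical logging variant -- print what goes into and comes out of the LLM call
    from typing import List

    import openai


    def llm_client() -> openai.OpenAI:
        return openai.OpenAI()


    def joke_messages(topic: str) -> List[dict]:
        return [{"role": "user", "content": f"Tell me a short joke about {topic}"}]


    def joke_response(llm_client: openai.OpenAI, joke_messages: List[dict]) -> str:
        print(f"Sending messages: {joke_messages}")  # explicit, simple logging
        response = llm_client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=joke_messages,
        )
        content = response.choices[0].message.content
        print(f"Received response: {content}")
        return content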
---------------------------------
Fallbacks
---------------------------------
Fallbacks are pretty situation- and context-dependent. It's not that
hard to wrap a function in a try/except block. The key is to make sure
you know what's going on, and that a fallback was triggered. So in our
opinion it's better to be explicit about it.

.. table:: Fallbacks
   :align: left

   +------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+
   | Hamilton | Vanilla | LangChain |
   +==================================================================+=================================================================+===============================================================+
   | .. literalinclude:: langchain_snippets/hamilton_fallbacks.py | .. literalinclude:: langchain_snippets/vanilla_fallbacks.py | .. literalinclude:: langchain_snippets/lcel_fallbacks.py |
   | | | |
   +------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+
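A hypothetical sketch of that explicit approach (not the actual ``hamilton_fallbacks.py``): try OpenAI first and fall back to Anthropic, announcing loudly when the fallback fires:

.. code-block:: python

    # hypothetical fallback -- try OpenAI first, fall back to Anthropic explicitly
    import anthropic
    import openai


    def joke_response(joke_prompt: str) -> str:
        try:
            client = openai.OpenAI()
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": joke_prompt}],
            )
            return response.choices[0].message.content
        except Exception as e:
            # be loud about the fact that the fallback fired
            print(f"OpenAI call failed ({e}); falling back to Anthropic.")
            client = anthropic.Anthropic()
            response = client.completions.create(
                model="claude-2",
                prompt=f"Human:\n\n{joke_prompt}\n\nAssistant:",
                max_tokens_to_sample=256,
            )
            return response.completion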
docs/code-comparisons/langchain_snippets/hamilton_anthropic.py

+81
# hamilton_anthropic.py
import anthropic
import openai

from hamilton.function_modifiers import config


# @config.when switches which implementation is used, based on the "provider" config value.
@config.when(provider="openai")
def llm_client__openai() -> openai.OpenAI:
    return openai.OpenAI()


@config.when(provider="anthropic")
def llm_client__anthropic() -> anthropic.Anthropic:
    return anthropic.Anthropic()


def joke_prompt(topic: str) -> str:
    return (
        "Human:\n\n"
        "Tell me a short joke about {topic}\n\n"
        "Assistant:"
    ).format(topic=topic)


@config.when(provider="openai")
def joke_response__openai(
        llm_client: openai.OpenAI,
        joke_prompt: str) -> str:
    response = llm_client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=joke_prompt,
    )
    return response.choices[0].text


@config.when(provider="anthropic")
def joke_response__anthropic(
        llm_client: anthropic.Anthropic,
        joke_prompt: str) -> str:
    response = llm_client.completions.create(
        model="claude-2",
        prompt=joke_prompt,
        max_tokens_to_sample=256
    )
    return response.completion


if __name__ == "__main__":
    import hamilton_anthropic  # this module, imported so Hamilton can crawl its functions

    from hamilton import driver

    dr = (
        driver.Builder()
        .with_modules(hamilton_anthropic)
        .with_config({"provider": "anthropic"})
        .build()
    )
    dr.display_all_functions(
        "hamilton-anthropic.png"
    )
    print(
        dr.execute(
            ["joke_response"],
            inputs={"topic": "ice cream"}
        )
    )

    dr = (
        driver.Builder()
        .with_modules(hamilton_anthropic)
        .with_config({"provider": "openai"})
        .build()
    )
    print(
        dr.execute(
            ["joke_response"],
            inputs={"topic": "ice cream"}
        )
    )
docs/code-comparisons/langchain_snippets/hamilton_async.py

+56
# hamilton_async.py
from typing import List

import openai


def llm_client() -> openai.AsyncOpenAI:
    return openai.AsyncOpenAI()


def joke_prompt(topic: str) -> str:
    return (
        f"Tell me a short joke about {topic}"
    )


def joke_messages(
        joke_prompt: str) -> List[dict]:
    return [{"role": "user",
             "content": joke_prompt}]


async def joke_response(
        llm_client: openai.AsyncOpenAI,
        joke_messages: List[dict]) -> str:
    response = await (
        llm_client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=joke_messages,
        )
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    import asyncio

    import hamilton_async

    from hamilton import base
    from hamilton.experimental import h_async

    dr = h_async.AsyncDriver(
        {},
        hamilton_async,
        result_builder=base.DictResult()
    )
    dr.display_all_functions("hamilton-async.png")
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(
        dr.execute(
            ["joke_response"],
            inputs={"topic": "ice cream"}
        )
    )
    print(result)
