This repository was archived by the owner on Mar 19, 2025. It is now read-only.

Commit df09629

Add caching to prevent redundant calls to LLMs.

1 parent 1702159 commit df09629

File tree

3 files changed: +7 −2 lines changed

.gitignore

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-.*_cache
+.*_cache*
 .coverage*
 .DS_Store
 .venv

pyproject.toml

Lines changed: 2 additions & 1 deletion

@@ -2,7 +2,8 @@
 name = "Summarizer"
 version = "0.1"
 dependencies = [
-    "langchain-openai == 0.0.8",
+    "langchain == 0.1.13",
+    "langchain-openai == 0.1.1",
 ]
 
 [project.scripts]

src/summarizer/app.py

Lines changed: 4 additions & 0 deletions

@@ -1,10 +1,14 @@
 """Main module for the application."""
 
+from langchain.cache import SQLiteCache
+from langchain.globals import set_llm_cache
+
 from summarizer.cli import create_argument_parser
 from summarizer.summarize import summarize_path
 
 
 def main() -> None:
     """Run the main program."""
+    set_llm_cache(SQLiteCache(database_path=".summarizer_cache.db"))
     args = create_argument_parser().parse_args()
     print(summarize_path(args.path))
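The `set_llm_cache(SQLiteCache(...))` call above makes LangChain consult a local SQLite database before sending a prompt to the model: a repeated prompt is answered from the database instead of triggering a second API call. A minimal sketch of that idea using only the standard library (the `SQLiteLLMCache` class, `fake_llm` stand-in, and table layout are illustrative, not LangChain's actual schema):

```python
import sqlite3


class SQLiteLLMCache:
    """Toy prompt -> response cache, same idea as LangChain's SQLiteCache."""

    def __init__(self, database_path=":memory:"):
        self.conn = sqlite3.connect(database_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS llm_cache "
            "(prompt TEXT PRIMARY KEY, response TEXT)"
        )

    def lookup(self, prompt):
        row = self.conn.execute(
            "SELECT response FROM llm_cache WHERE prompt = ?", (prompt,)
        ).fetchone()
        return row[0] if row else None

    def update(self, prompt, response):
        self.conn.execute(
            "INSERT OR REPLACE INTO llm_cache VALUES (?, ?)", (prompt, response)
        )
        self.conn.commit()


calls = 0


def fake_llm(prompt):
    """Stand-in for a real (slow, paid) model call; counts invocations."""
    global calls
    calls += 1
    return f"summary of {prompt!r}"


cache = SQLiteLLMCache()


def cached_llm(prompt):
    # Check the cache first; only call the model on a miss.
    hit = cache.lookup(prompt)
    if hit is not None:
        return hit
    result = fake_llm(prompt)
    cache.update(prompt, result)
    return result


print(cached_llm("doc.txt"))  # first call: goes to the model
print(cached_llm("doc.txt"))  # second call: served from SQLite, no model call
print(calls)                  # 1
```

Because the commit points the cache at an on-disk file (`.summarizer_cache.db`) rather than `:memory:`, cached responses also survive across runs of the program, which is why the `.gitignore` pattern is widened to cover it.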
