Do you need to file an issue?
Describe the bug
Hello,
Current situation
When I use `rag.query` in vanilla LightRAG with the parameter `response_type="Multiple Paragraphs"`, there is a bug if the response is too long.
Steps to reproduce
Take a document with a long description, initialize vanilla LightRAG, and ask it to repeat the description.
Then launch:
```python
from lightrag import LightRAG, QueryParam
from lightrag.utils import EmbeddingFunc

rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=llm_model,
    embedding_func=EmbeddingFunc(
        embedding_dim=768,
        max_token_size=8192,
        func=embedding,
    ),
)

with open(name_of_element_, "r") as file:
    text = file.read()

# Inserting the text into LightRAG
rag.insert(text)

rag.query(
    query=query,
    param=QueryParam(
        mode="mix",
        top_k=5,
        response_type="Multiple Paragraphs",
    ),
)
```
Expected Behavior
The query should return the full response. Instead, it fails with:
```
raise ValueError(f"Failed to parse CSV string: {str(e)}")
ValueError: Failed to parse CSV string: field larger than field limit (131072)
```
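The number 131072 in the error matches the default field size limit of Python's standard `csv` module, which LightRAG appears to hit while parsing the over-long response. A minimal sketch of that limit and the usual workaround, raising it with `csv.field_size_limit` (shown here standalone; whether LightRAG should raise it internally is for the maintainers to decide):

```python
import csv
import io

# Python's csv module caps the size of a single field; the default,
# 131072 (128 KiB), is exactly the number in the error message.
assert csv.field_size_limit() == 131072

big_field = "x" * 200_000  # one field larger than the default limit
try:
    list(csv.reader(io.StringIO(big_field)))
except csv.Error as e:
    print(e)  # field larger than field limit (131072)

# Raising the limit lets the same field parse (a workaround, not a fix):
csv.field_size_limit(10**7)
rows = list(csv.reader(io.StringIO(big_field)))
assert rows[0][0] == big_field
```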
LightRAG Config Used
```python
rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=llm_model,
    embedding_func=EmbeddingFunc(
        embedding_dim=768,
        max_token_size=8192,
        func=embedding,
    ),
)
```
Logs and screenshots
No response
Additional Information
- LightRAG Version: 1.3.2
- Operating System:
- Python Version: 3.11.12
- Related Issues: