During indexing, MiniRAG.ainsert reprocesses every document already marked `DocStatus.PROCESSED`: it re-chunks their content and passes the resulting chunks back to `extract_entities`. Because `extract_entities` keeps no cache keyed by the chunk hash, the wrapped LLM function is invoked again for content that was already handled in previous runs. This contradicts the intended "cache results per chunk" behavior and causes redundant LLM/SLM traffic.
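The redundancy is easy to observe by wrapping the model function with a counter before handing it to MiniRAG. An illustrative sketch, with `real_llm` standing for whatever model function the instance is built with (the wrapper below is not a MiniRAG API):

```python
# Hypothetical reproduction sketch: count LLM requests across two inserts
# of the same text.
call_count = 0

def count_calls(llm_func):
    async def wrapped(prompt, **kwargs):
        global call_count
        call_count += 1                      # one increment per LLM request
        return await llm_func(prompt, **kwargs)
    return wrapped
```

After constructing the instance with `llm_model_func=count_calls(real_llm)`, calling `await rag.ainsert(text)` twice with identical text keeps increasing `call_count` on the second run, even though the document was already marked PROCESSED.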
minirag/minirag.py

```python
inserting_chunks = {
    compute_mdhash_id(dp["content"], prefix="chunk-"): {
        **dp,
        "full_doc_id": doc_id,
    }
    # every doc already marked PROCESSED is re-chunked on each ainsert run
    for doc_id, status_doc in (
        await self.doc_status.get_docs_by_status(DocStatus.PROCESSED)
    ).items()
    for dp in self.chunking_func(
        status_doc.content,
        self.chunk_overlap_token_size,
        self.chunk_token_size,
        self.tiktoken_model_name,
    )
}
if inserting_chunks:
    # the recreated chunks are sent straight back into entity extraction
    logger.info("Performing entity extraction on newly processed chunks")
    await extract_entities(
        inserting_chunks,
        knowledge_graph_inst=self.chunk_entity_relation_graph,
        entity_vdb=self.entities_vdb,
        entity_name_vdb=self.entity_name_vdb,
        relationships_vdb=self.relationships_vdb,
        global_config=asdict(self),
    )
```
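One way to stop the rework at this level would be to drop chunk IDs that are already persisted before extraction runs. A minimal sketch, assuming the chunk KV store exposes a `filter_keys` helper that returns only unseen keys (an assumption, not a confirmed MiniRAG API):

```python
# Hypothetical guard: keep only chunk hashes that are not stored yet, so
# extract_entities never sees content handled in a previous run.
# `self.text_chunks.filter_keys` is assumed here, not verified in MiniRAG.
unseen_ids = await self.text_chunks.filter_keys(list(inserting_chunks.keys()))
inserting_chunks = {
    chunk_id: chunk
    for chunk_id, chunk in inserting_chunks.items()
    if chunk_id in unseen_ids
}
```

With that filter in place, the `if inserting_chunks:` branch above would only fire for genuinely new content.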
minirag/operate.py

```python
async def extract_entities(
    chunks: dict[str, TextChunkSchema],
    knowledge_graph_inst: BaseGraphStorage,
    entity_vdb: BaseVectorStorage,
    entity_name_vdb: BaseVectorStorage,
    relationships_vdb: BaseVectorStorage,
    global_config: dict,
) -> Union[BaseGraphStorage, None]:
    use_llm_func: callable = global_config["llm_model_func"]
    ...
    # both extraction calls hit the model directly; there is no lookup
    # keyed by the chunk hash before either of them
    hint_prompt = entity_extract_prompt.format(**context_base, input_text=content)
    final_result = await use_llm_func(hint_prompt)
    ...
    glean_result = await use_llm_func(continue_prompt, history_messages=history)
```
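The complementary fix inside `extract_entities` is a cache consulted before each `use_llm_func` call, keyed by a hash of the prompt (and therefore of the chunk content embedded in it). A self-contained sketch of that pattern; the helper names and the dict-like `cache` are illustrative, not existing MiniRAG APIs:

```python
import hashlib
import json

def compute_args_hash(*args) -> str:
    # Stable key over the prompt and any history (illustrative helper).
    payload = json.dumps(args, sort_keys=True, default=str).encode()
    return hashlib.md5(payload).hexdigest()

async def cached_llm_call(use_llm_func, prompt, cache, history_messages=None):
    """Reuse a stored result when the same prompt was answered before."""
    key = compute_args_hash(prompt, history_messages or [])
    hit = cache.get(key)
    if hit is not None:              # chunk handled in an earlier run
        return hit
    result = await use_llm_func(prompt, history_messages=history_messages or [])
    cache[key] = result              # remember for future ainsert runs
    return result
```

The two call sites above would then read `await cached_llm_call(use_llm_func, hint_prompt, cache)` and `await cached_llm_call(use_llm_func, continue_prompt, cache, history_messages=history)`, so repeated indexing of unchanged documents produces cache hits instead of new LLM/SLM traffic.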