Fix stale embedding client during knowledge base indexing #416

Open
stellanhou wants to merge 1 commit into HKUDS:main from stellanhou:fix/stale-embedding-client

Conversation

@stellanhou

Summary

This PR fixes a stale embedding client issue during knowledge base initialization.

When users change the active embedding provider/model, diagnostics correctly read the updated configuration, but LlamaIndex indexing may still reuse a previously cached embedding client. As a result, indexing can silently fall back to the old or default embedding provider.

This change:

  • resets the cached embedding client when configuring LlamaIndex embedding settings
  • refreshes LlamaIndex settings before knowledge base initialization
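The pattern behind the two fixes above can be sketched as follows. This is an illustrative minimal example, not the repository's actual code: the class name, fields, and methods are hypothetical, and a real client object stands in as a plain tuple.

```python
class EmbeddingSettings:
    """Hypothetical sketch of why a lazily cached embedding client goes
    stale, and how resetting the cache on reconfiguration fixes it."""

    def __init__(self, provider: str, model: str):
        self.provider = provider
        self.model = model
        self._client = None  # built lazily, then cached

    def client(self):
        # Buggy pattern: once built, the cached client is reused forever,
        # ignoring any later provider/model changes.
        if self._client is None:
            self._client = (self.provider, self.model)
        return self._client

    def configure(self, provider: str, model: str):
        # Fix: drop the cached client whenever settings change, so the
        # next knowledge base initialization builds a client from the
        # currently active configuration.
        self.provider = provider
        self.model = model
        self._client = None
```

Without the reset in `configure`, switching from the default OpenAI configuration to SiliconFlow would still hand indexing the first client ever built, which is exactly the fallback behavior this PR removes.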

Test

Manually verified that knowledge base initialization uses the active SiliconFlow embedding configuration instead of falling back to OpenAI text-embedding-3-large.

The issue was reproduced locally when switching from the default OpenAI embedding configuration to SiliconFlow BAAI/bge-m3.

