@dishuostec dishuostec commented Oct 12, 2025

Now we can use Ollama as an embedding provider as below. (fix #138)

enable_event_embedding: true
embedding_provider: "ollama"
embedding_api_key: "ollama"
embedding_base_url: "http://127.0.0.1:11434/" # WITHOUT "v1" at the end
embedding_dim: 2560
embedding_model: "qwen3-embedding:4b-q4_K_M"
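The "WITHOUT `v1` at the end" comment is significant: Ollama's OpenAI-compatible routes live under `/v1/`, while its native API lives under `/api/`, so the provider presumably joins its own path onto the base URL. A minimal sketch of that idea, with a hypothetical helper name not taken from this PR, might normalize the configured URL defensively:

```python
# Hypothetical sketch (not memobase's actual code): strip a trailing "v1"
# from the configured embedding_base_url so that appending a provider path
# (e.g. Ollama's native "api/embeddings") does not produce a broken URL
# like "http://127.0.0.1:11434/v1/api/embeddings".
def normalize_ollama_base_url(url: str) -> str:
    """Return the base URL without a trailing "v1", ending in a single slash."""
    url = url.rstrip("/")
    if url.endswith("/v1"):
        url = url[: -len("/v1")]
    return url + "/"

print(normalize_ollama_base_url("http://127.0.0.1:11434/v1"))  # http://127.0.0.1:11434/
print(normalize_ollama_base_url("http://127.0.0.1:11434/"))    # http://127.0.0.1:11434/
```

With this normalization, both `http://127.0.0.1:11434/` and a mistakenly suffixed `http://127.0.0.1:11434/v1` would resolve to the same usable base.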

@gusye1234 (Contributor)

LGTM!

Can you also add example config.yaml for ollama embedding?

@dishuostec (Author)

It's done.


Development

Successfully merging this pull request may close these issues.

Doesn't work without OpenAI