Replies: 4 comments
-
Yes, adding Ollama support to Khoj would be great. At the same time, we should look at using 7B or 8B models to balance the GPU between Khoj and other workloads 😄, because my GPU has limited VRAM.
-
Well, they claim it already works through the OpenAI-compatible API that Ollama provides. But in practice I cannot make this work.
-
Hi folks, please open a bug report to track this. The bug report template asks for the requisite details (e.g. which OS, how you installed and configured Khoj, errors observed, etc.) to help find and fix the issue with using Khoj and Ollama together. For clarity: Khoj has been tested to work with Ollama previously. Of course, the integration could have broken since then due to changes in Khoj or Ollama (or it may not work with your particular setup). A detailed bug report would make it easier to find the issue.
-
I downloaded and configured Khoj with Ollama yesterday with no difficulty: I set the environment variable OPENAI_BASE_URL to the IP of my Ollama instance and left all API keys commented out.
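For anyone else trying this, a minimal sketch of that setup (the host IP is a placeholder, and the `/v1` path follows Ollama's OpenAI-compatible API convention; your Khoj deployment may differ):

```sh
# Point Khoj at the Ollama instance's OpenAI-compatible endpoint.
# 11434 is Ollama's default port; replace the host with your instance's IP.
export OPENAI_BASE_URL="http://192.168.1.50:11434/v1"

# Leave OPENAI_API_KEY unset/commented out; Ollama does not require a key.
```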
-
The docs are quite vague (and probably outdated) on setting up Ollama models with Khoj.
Where can I find this information for a specific model, let's say Qwen2.5:14b?
https://ollama.com/library/qwen2.5
I tested accessing the OpenAI-compatible API endpoint of my Ollama container via curl from within the Khoj container, and I can reach it. Khoj can't.
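For reference, the connectivity check looked roughly like this (a sketch: the container name `khoj`, the hostname `ollama`, and the model tag are placeholders from my setup):

```sh
# From the host, run curl inside the Khoj container against Ollama's
# OpenAI-compatible API: first list the available models...
docker exec khoj curl -s http://ollama:11434/v1/models

# ...then exercise a chat completion with one of them.
docker exec khoj curl -s http://ollama:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5:14b", "messages": [{"role": "user", "content": "Hello"}]}'
```

Both calls succeed from inside the container, which suggests the problem is in how Khoj talks to the endpoint rather than in network reachability.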
Does anyone actually have Ollama and Khoj running together successfully, or does the integration work only on a "theoretically it should work" basis?