I added a local Ollama LLM provider in the settings, following the documented process. The problem is that when I try to chat with any custom model, it says "No response generated", and I don't know where to look to check which URL is actually being used.
Also, I used a Python script to call http://localhost:11434/v1/completions directly, and that works, but when I go through ApeRAG the request never even shows up in the terminal where Ollama is running.
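For reference, this is roughly the direct test that succeeds (a minimal sketch; the model name "llama3" is a placeholder for whatever `ollama list` reports on my machine):

```python
import requests

# Hit Ollama's OpenAI-compatible completions endpoint directly.
# "llama3" is a placeholder model name; substitute your own.
resp = requests.post(
    "http://localhost:11434/v1/completions",
    json={"model": "llama3", "prompt": "Hello"},
    timeout=30,
)
resp.raise_for_status()

# Print the generated text from the first choice.
print(resp.json()["choices"][0]["text"])
```

This request appears in the Ollama terminal and returns a completion, so the server itself is fine; only requests sent via ApeRAG never arrive.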