support local llm server #10
Conversation
Agree, adding support for Ollama would be amazing. Is it just about changing the base_url to whatever my Ollama server URL is?
No, it's not. The API format is quite different, and so are the model option params. I might make a PR for this.
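For context, a rough sketch of the difference (the model names and values below are only examples, not anything from this repo): Ollama's native /api/chat endpoint nests sampling parameters under an "options" key and returns a different response shape, whereas OpenAI-style chat completions take those parameters at the top level of the request body.

```python
# Hypothetical sketch of the two request formats.
import requests

# OpenAI-style request: sampling params sit at the top level of the body.
openai_style = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.2,
    "max_tokens": 256,
}
# requests.post("https://api.openai.com/v1/chat/completions",
#               headers={"Authorization": "Bearer <key>"}, json=openai_style)

# Ollama's native chat endpoint: model options are nested under "options",
# and max tokens is called "num_predict".
ollama_native = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
    "options": {"temperature": 0.2, "num_predict": 256},
}
resp = requests.post("http://localhost:11434/api/chat", json=ollama_native)
print(resp.json()["message"]["content"])
```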
Please do. AgentLab would be super useful in air-gapped networks where we can't/won't run OpenAI or DeepSeek but do have Ollama instances running.
Yes, it can be. You can start with "ollama serve" and then use the OpenAI API to communicate with it. But I found that using llama.cpp directly is more helpful, since you can configure the parameters when you start "llama-server".
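A minimal sketch of that pattern (the endpoints and model name below are defaults/examples, not anything AgentLab-specific): both "ollama serve" and llama.cpp's "llama-server" expose an OpenAI-compatible endpoint, so the stock OpenAI client can talk to them by overriding base_url.

```python
# Minimal sketch, assuming a local OpenAI-compatible server is running:
#   - `ollama serve`      -> http://localhost:11434/v1 by default
#   - `llama-server -m model.gguf --port 8080 --ctx-size 8192`
#                         -> http://localhost:8080/v1 by default
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # or http://localhost:8080/v1 for llama-server
    api_key="not-needed",                  # local servers typically ignore the key
)

resp = client.chat.completions.create(
    model="llama3",  # must match a model the local server has loaded
    messages=[{"role": "user", "content": "Hello from an air-gapped box"}],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```

The upside of llama-server here is that context size, GPU offload, and sampling defaults are set once on the command line, so the client code stays identical to the OpenAI case.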
I think lines 193, 196, and 199 in agents.py should use self.base_url instead of self.base.
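To illustrate the kind of mismatch being pointed out (the class and attribute names here are hypothetical, since agents.py isn't quoted in this thread): if the constructor stores the URL as base_url but later code reads self.base, client construction fails with an AttributeError.

```python
# Purely illustrative sketch of the suspected bug; not the actual agents.py code.
from openai import OpenAI


class ChatModel:
    def __init__(self, model_name: str, base_url: str):
        self.model_name = model_name
        self.base_url = base_url  # attribute is stored as base_url ...

    def make_client(self) -> OpenAI:
        # ... so reading self.base here would raise AttributeError;
        # the fix suggested above is to reference self.base_url instead.
        return OpenAI(base_url=self.base_url, api_key="not-needed")
```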
No description provided.