
Commit 1b0bed8

Merge pull request #53 from donvito/feature/llamacpp-support
add LlamaCpp provider details to README and configure base URL
2 parents: bfdd763 + 03287f3

File tree

1 file changed: +6 −0 lines changed


README.md

Lines changed: 6 additions & 0 deletions
@@ -47,6 +47,7 @@ More to come...check swagger docs for updated endpoints.
 | [Anthropic](https://www.anthropic.com/) | Claude models | Available |
 | [OpenRouter](https://openrouter.ai/) | Open source and private models | Available |
 | [Vercel AI Gateway](https://vercel.com/ai-gateway) | Open source and private models | Available |
+| [LlamaCpp](https://github.com/ggml-org/llama.cpp) | Local models via llama.cpp server (self-hosted) | Available |
 | [Google](https://ai.google.dev/) | Gemini models | In Progress |

@@ -149,6 +150,11 @@ OLLAMA_TIMEOUT=30000
 
 # You can change OLLAMA_BASE_URL to use a remote Ollama instance
 
+# LlamaCpp Configuration
+LLAMACPP_BASE_URL=http://localhost:8080
+
+# You can change LLAMACPP_BASE_URL to use a remote LlamaCpp instance
+
 # LM Studio Configuration
 LMSTUDIO_ENABLED=true
 LMSTUDIO_BASE_URL=http://localhost:1234
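The `LLAMACPP_BASE_URL` setting introduced by this commit assumes a llama.cpp server is already listening at that address. A minimal sketch of wiring it up — the `llama-server` invocation and model path below are illustrative assumptions, not part of this commit:

```shell
# llama.cpp ships an HTTP server binary; start it on the default port, e.g.:
#   llama-server -m ./models/model.gguf --port 8080
# (the model path above is a placeholder)

# Point the app at the server; swap in a remote host to use a remote
# LlamaCpp instance, as the README comment suggests.
export LLAMACPP_BASE_URL=http://localhost:8080
echo "LLAMACPP_BASE_URL=$LLAMACPP_BASE_URL"
```

Keeping the default `http://localhost:8080` matches llama.cpp's own server default, so a stock local setup works without changes.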
