
Commit f38616d

docs: add local server config guidance (#25)
1 parent 8d16c10 · commit f38616d

2 files changed: 22 additions & 0 deletions


README.md

Lines changed: 19 additions & 0 deletions
````diff
@@ -84,6 +84,25 @@ enabled = true
 api_token = "${TELEGRAM_BOT_TOKEN}"
 ```
 
+### Using a local OpenAI-compatible server (LM Studio, llamafile, etc.)
+
+If you run a local server that speaks the OpenAI API (e.g., LM Studio, llamafile, vLLM), point LocalGPT at it and pick an `openai/*` model ID so it does **not** try to spawn the `claude` CLI:
+
+1. Start your server (LM Studio default port: `1234`; llamafile default: `8080`) and note its model name.
+2. Edit `~/.localgpt/config.toml`:
+```toml
+[agent]
+default_model = "openai/<your-model-name>"
+
+[providers.openai]
+# Many local servers accept a dummy key
+api_key = "not-needed"
+base_url = "http://127.0.0.1:8080/v1" # or http://127.0.0.1:1234/v1 for LM Studio
+```
+3. Run `localgpt chat` (or `localgpt daemon start`) and requests will go to your local server.
+
+Tip: If you see `Failed to spawn Claude CLI`, change `agent.default_model` away from `claude-cli/*` or install the `claude` CLI.
+
 ## Telegram Bot
 
 Access LocalGPT from Telegram with full chat, tool use, and memory support.
````
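Before pointing LocalGPT at the server, it can help to confirm the endpoint actually responds. A minimal sanity check (not part of this commit), assuming the llamafile default port `8080` (use `1234` for LM Studio); both routes are standard OpenAI-compatible endpoints:

```bash
# List the models the local server exposes; use an "id" from the
# response as <your-model-name> in config.toml.
curl http://127.0.0.1:8080/v1/models

# Send a minimal chat completion to confirm end-to-end inference works.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "<your-model-name>", "messages": [{"role": "user", "content": "Say hi"}]}'
```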

config.example.toml

Lines changed: 3 additions & 0 deletions
```diff
@@ -45,6 +45,9 @@ base_url = "https://api.anthropic.com"
 # [providers.openai]
 # api_key = "${OPENAI_API_KEY}"
 # base_url = "https://api.openai.com/v1"
+# For local OpenAI-compatible servers (LM Studio, llamafile, vLLM):
+# api_key = "not-needed"
+# base_url = "http://127.0.0.1:8080/v1"  # LM Studio default is http://127.0.0.1:1234/v1
 
 # Ollama configuration (for local models)
 # [providers.ollama]
```
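For reference, a minimal sketch of the same provider block fully uncommented, combined with the `[agent]` setting from the README change above; the model name is illustrative and should match an `id` your server reports from `/v1/models`:

```toml
[agent]
default_model = "openai/llama-3.1-8b-instruct"  # illustrative; use your server's model id

[providers.openai]
api_key = "not-needed"                 # many local servers ignore the key entirely
base_url = "http://127.0.0.1:8080/v1"  # llamafile default; LM Studio uses port 1234
```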
