GoDex is an AI coding agent that interfaces with Ollama, llama.cpp, Gemini, OpenRouter (and other LLM providers) through a TUI, with built-in MCP support.
Need orchestration or parallel tasks? Open another terminal tab and start a new instance of GoDex.
## Table of Contents

- Requirements
- Installation
- Setting up providers
- Configuration
- Usage
- Building
- Running Securely with Docker
- Troubleshooting

## Requirements

- Go 1.25.7+ - Build from source
- Ollama - For the default LLM backend (or use Gemini, OpenRouter, or llama.cpp)
## Installation

Quick install:

```sh
curl -sSL https://raw.githubusercontent.com/cheikh2shift/godex/main/install.sh | sh
```

Docker install:

```sh
curl -sSL https://raw.githubusercontent.com/cheikh2shift/godex/main/install-docker.sh | bash
```

Build from source:

```sh
git clone https://github.com/cheikh2shift/godex.git
cd godex
go build -o godex ./cmd/godex
sudo mv godex /usr/local/bin/
```
## Setting up providers

### Ollama

- Install Ollama: Follow the instructions at https://github.com/ollama/ollama

- Start the Ollama server:

  ```sh
  ollama serve
  ```

- Pull a model (recommended: `nemotron-3-super:cloud` or `minimax-m2.7:cloud`):

  ```sh
  ollama pull nemotron-3-super:cloud
  # or
  ollama pull minimax-m2.7:cloud
  ```

- Verify Ollama is running:

  ```sh
  curl http://localhost:11434
  ```
### OpenRouter

Launch GoDex and choose OAuth as the form of authentication to obtain an API key automatically, or:

- Get an API key: Sign up at https://openrouter.ai/keys

- Set the environment variable:

  ```sh
  export OPENROUTER_API_KEY=sk-or-v1-...
  ```

- Run the wizard to configure:

  ```sh
  godex --wizard
  ```

  Select `openrouter` as the provider type and choose from 100+ available models.
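The wizard writes the provider entry for you; for reference, an equivalent hand-written `~/.godex/providers.yaml` entry might look like the sketch below (the endpoint and `api_key_env` fields come from the configuration reference; the model name is purely illustrative):

```yaml
providers:
  - name: openrouter
    type: openrouter
    endpoint: https://openrouter.ai/api/v1
    api_key_env: OPENROUTER_API_KEY   # key is read from the environment, not stored here
    model: anthropic/claude-3.5-sonnet  # illustrative; pick any OpenRouter model
    temperature: 0.2
default_provider: openrouter
```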
### llama.cpp

- Install llama.cpp: Download from https://github.com/ggerganov/llama.cpp/releases or build from source

- Ensure llama-server is in your PATH: The binary should be named `llama-server` and accessible from the command line

- Run the wizard to configure:

  ```sh
  godex --wizard
  ```

  Select `llama.cpp` as the provider type. GoDex will automatically download models from Hugging Face or use local GGUF files.

- Using an external llama-server (optional):

  ```sh
  # Start llama-server manually with jinja support for function calling
  llama-server -m models/your-model.gguf -fa -c 8192 --jinja

  # Connect godex to it
  godex --llama-server http://localhost:8080
  ```
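For reference, a hand-written `~/.godex/providers.yaml` entry pointing at a local llama-server might look like this sketch (endpoint from the configuration reference; the provider name and model value are illustrative):

```yaml
providers:
  - name: local-llama          # illustrative name
    type: llama.cpp
    endpoint: http://localhost:8080
    model: your-model.gguf     # illustrative; a local GGUF file or Hugging Face model
    temperature: 0.2
default_provider: local-llama
```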
GoDex reads provider configuration from `~/.godex/providers.yaml`.
Download from GitHub Releases:
| OS | Architecture | File |
|---|---|---|
| Linux | AMD64 | godex-linux-amd64 |
| Linux | ARM64 | godex-linux-arm64 |
| macOS | AMD64 | godex-darwin-amd64 |
| macOS | ARM64 | godex-darwin-arm64 |
| Windows | AMD64 | godex-windows-amd64.exe |
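To script the download, the asset name can be derived from `uname`; a minimal sketch (the `case` mapping covers only the architectures listed in the table above):

```shell
# Map uname output to the release asset names from the table above
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # Linux -> linux, Darwin -> darwin
arch=$(uname -m)
case "$arch" in
  x86_64)        arch=amd64 ;;
  aarch64|arm64) arch=arm64 ;;
esac
asset="godex-${os}-${arch}"
echo "$asset"   # e.g. godex-linux-amd64
```

The resulting name can then be appended to the `releases/latest/download/` URL shown in the examples below.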
Example:

**Linux (AMD64):**

```sh
curl -L -o godex https://github.com/cheikh2shift/godex/releases/latest/download/godex-linux-amd64
chmod +x godex
sudo mv godex /usr/local/bin/
```

**macOS (Intel):**

```sh
curl -L -o godex https://github.com/cheikh2shift/godex/releases/latest/download/godex-darwin-amd64
chmod +x godex
sudo mv godex /usr/local/bin/
```

**macOS (Apple Silicon):**

```sh
curl -L -o godex https://github.com/cheikh2shift/godex/releases/latest/download/godex-darwin-arm64
chmod +x godex
sudo mv godex /usr/local/bin/
```

## Configuration

Run the wizard to generate the config:

```sh
godex --wizard
```

Or create `~/.godex/providers.yaml` manually:

```yaml
providers:
  - name: ollama
    type: ollama
    endpoint: http://localhost:11434
    model: minimax-m2.5:cloud
    description: Ollama with codeqwen
    temperature: 0.2
    mcp_servers:
      - name: filesystem # enable file exploring
      - name: bash       # enable command execution
default_provider: ollama
```

Docker note: from inside the GoDex container, use `http://ollama-proxy:11434` to reach the nginx proxy, or `http://host.docker.internal:11434` to reach a host Ollama instance.
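When GoDex itself runs inside the Docker sandbox, the same provider entry would simply swap in one of the endpoints from the note above; a sketch:

```yaml
providers:
  - name: ollama
    type: ollama
    endpoint: http://ollama-proxy:11434   # or http://host.docker.internal:11434 for host Ollama
    model: minimax-m2.5:cloud
    temperature: 0.2
default_provider: ollama
```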
| Field | Description |
|---|---|
| `name` | Provider identifier |
| `type` | Provider type: `ollama`, `llama.cpp`, `gemini`, or `openrouter` |
| `endpoint` | Base URL for provider (Ollama: `http://localhost:11434`, llama.cpp: `http://localhost:8080`, OpenRouter: `https://openrouter.ai/api/v1`) |
| `model` | Model name (e.g., `nemotron-3-super:cloud`, `codellama`, `minimax-m2.7:cloud`) |
| `description` | Human-readable description |
| `temperature` | LLM temperature (0.0-1.0) |
| `max_tool_rounds` | Max tool call rounds (default: 10) |
| `tool_timeout` | Tool execution timeout in seconds (default: 180) |
| `api_key_env` | Environment variable for API key (Gemini/OpenRouter) |
| `api_key` | Direct API key (not recommended) |
| `mcp_servers` | List of MCP servers to enable |
| `context_limit` | Context window size in tokens (auto-detected for OpenRouter) |
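Putting several of these fields together, a Gemini entry might look like the following sketch (the model name and environment-variable name are illustrative; GoDex only requires that `api_key_env` names a variable holding your key):

```yaml
providers:
  - name: gemini
    type: gemini
    api_key_env: GEMINI_API_KEY   # illustrative variable name
    model: gemini-2.0-flash       # illustrative model name
    temperature: 0.2
    max_tool_rounds: 10           # defaults shown explicitly
    tool_timeout: 180
    mcp_servers:
      - name: filesystem
      - name: bash
default_provider: gemini
```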
GoDex includes built-in MCP servers:

| Server | Description |
|---|---|
| `filesystem` | Read, write, list directories, create/delete files |
| `bash` | Run shell commands, Python, Node.js |
| `webscraper` | Fetch URLs with JavaScript rendering, search HTML, extract links |
For detailed MCP configuration including external servers, see MCP.md.
GoDex supports a Hive network mode where multiple instances can delegate tasks to each other. See HIVE.md for details.
By default, MCP servers only allow access to the current working directory. Add more allowed paths:
```yaml
mcp_servers:
  - name: filesystem
    allowed_paths:
      - /home/user/project1
      - /home/user/project2
  - name: bash
    allowed_paths:
      - /home/user/project1
  - name: webscraper
    allowed_urls:
      - https://example.com
      - https://docs.example.com
```

## Building

```sh
# Build from source (recommended)
go build -o godex ./cmd/godex
sudo mv godex /usr/local/bin/

# Or use install script (requires release)
curl -sSL https://raw.githubusercontent.com/cheikh2shift/godex/main/install.sh | sh
```

## Usage

```sh
# Run the TUI (uses default provider from config)
godex

# Run with custom config file
godex --config /path/to/providers.yaml

# Run with specific provider (must exist in config)
godex --provider ollama

# Run with custom config and specific provider
godex --config /path/to/providers.yaml --provider gemini

# Run a single prompt (non-interactive)
godex --prompt "list files in current directory"

# Run wizard to create config
godex --wizard
```

### Shell completion

Enable tab completion for godex commands and provider names:
Bash (add to `~/.bashrc`):

```sh
source <(godex --completion bash)
```

Zsh (add to `~/.zshrc`):

```sh
source <(godex --completion zsh)
```

Fish:

```sh
godex --completion fish | source
```

After sourcing, pressing Tab will show:

- All available flags with descriptions
- Provider names when using `--provider`
- File paths when using `--config`
### Commands

- `/help` - Show help
- `/paths` - Show allowed MCP paths
- `/add-path <filesys|url> <path>` - Add allowed path
- `/tools` - Show available MCP tools
- `/commit <message>` - Save current chat history (CVC)
- `/commit-search <query>` - Search commits (CVC)
- `/commit-pull <ref>` - Restore a commit (CVC)
- `/commit-merge <ref>` - Merge a commit into current state (CVC)
- `/exit` or `/quit` - Exit
- Up/Down arrows - Command history
- Tab - Autocomplete
GoDex includes CVC (Chat Version Control) for saving and restoring conversation state. See CVC.md.
```
$ godex
GoDex - Connected to ollama (codeqwen)
MCP Servers: 2

> list files in this directory
[tool call: list_directory]
...
```
```sh
go build -o godex ./cmd/godex
./godex
```

## Running Securely with Docker

GoDex can be run in an isolated Docker container with a pre-configured sandbox environment containing common tools (Python, Node.js, Go, Rust, etc.).
Running GoDex in Docker provides:
- Isolation - GoDex operates only within the mounted workspace directory
- No host pollution - Tools and changes stay contained
- Consistent environment - Same tools available regardless of host system
- Safety - Test configurations without risking your host system
- First run - The container will launch the wizard to configure your provider:

  ```sh
  WORKSPACE_DIR="$PWD" docker compose -f $HOME/godex/docker-compose.yml up
  ```

  Configure your Ollama/OpenRouter/etc. settings when prompted. If the screen looks empty after attaching, press Enter to trigger a TUI redraw. If using Ollama on the host with the nginx proxy, make sure Ollama listens on `0.0.0.0:11434` (not just `127.0.0.1`), e.g. `OLLAMA_HOST=0.0.0.0:11434 ollama serve`.
  If you want Ollama bound to `0.0.0.0` use:

  ```sh
  sudo mkdir -p /etc/systemd/system/ollama.service.d
  sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
  [Service]
  Environment="OLLAMA_HOST=0.0.0.0:11434"
  EOF
  sudo systemctl daemon-reload
  sudo systemctl restart ollama
  ```
- Subsequent runs - Your config is persisted in `$HOME/.godex`:

  ```sh
  WORKSPACE_DIR="$PWD" docker compose -f $HOME/godex/docker-compose.yml up -d && docker attach godex
  ```
Note: WORKSPACE_DIR controls which host directory is mounted at /workspace in the container. Set it to the directory you want GoDex to operate in (defaults to the compose file directory if unset).
- Edit provider config in `$HOME/.godex`:

  ```sh
  nano $HOME/.godex/providers.yaml   # or vi / vim
  ```
The sandbox includes:
- Python 3, pip, pytest, black, flake8
- Node.js, npm
- Go, Rust
- Git, curl, wget
- Build tools: make, cmake, gcc, g++
- Utilities: htop, tree, jq, ripgrep, fd, fzf, vim, nano
- GoDex can only access files within the `./workspace` directory (read-write)
- Container runs as non-root user (set via `USER_ID`/`GROUP_ID`, defaults to `1000:1000`)
- Most Linux capabilities dropped; only `NET_RAW` and `NET_BIND_SERVICE` allowed
- No new privileges allowed
- `/tmp` and `/run` use tmpfs (memory-only, non-persistent)
- No explicit process/file limits (inherits host defaults)
- Network isolated via nginx proxy (host port `11435` forwards to `ollama-proxy:11434`, which proxies to host `11434`)
- Provider credentials are stored in `$HOME/.godex`
- Use `docker compose -f $HOME/godex/docker-compose.yml down -v` to completely remove all data
## Troubleshooting

If you get an error like `{"error":"model 'qwen3-coder-next:cloud' not found"}`, it means the model hasn't been pulled yet. Run:

```sh
ollama pull <model-name>
```

Then test that it works:

```sh
ollama run <model-name>
```

Make sure Ollama is running in the background. You can start it with:

```sh
ollama serve
```

If GoDex can't connect to Ollama, check that the Ollama API is accessible at http://localhost:11434.
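As a quick pre-flight sketch, a small shell function (a hypothetical helper, assuming `curl` is installed) can report whether an Ollama endpoint answers HTTP requests before you launch GoDex:

```shell
# Hypothetical helper: report whether an Ollama endpoint answers HTTP requests.
check_ollama() {
  url="${1:-http://localhost:11434}"   # Ollama's default port
  if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# Port 1 almost never has a listener, so this reports "unreachable"
check_ollama "http://127.0.0.1:1"
```

Run it without arguments to probe the default endpoint; any non-"reachable" result means GoDex will fail to connect until `ollama serve` is running.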
For developers: DEV.md - Guide to adding new MCP servers and providers