
Full Gemini CLI LLM gateway support: .env injection and /v1beta/* gateway path #5173

@yrobla

Description

Context

PR #5149 (Add Gemini CLI support to thv llm setup) ships partial support for Gemini CLI. Two blockers prevent end-to-end proxy routing and were intentionally deferred:


Blocker 1 — env var injection via settings.json does not work

GEMINI_API_KEY and GOOGLE_GEMINI_BASE_URL are read by Gemini CLI straight from process.env (see packages/core/src/core/contentGenerator.ts). They are not wired from settings.json.

Gemini CLI's settings.schema.json declares additionalProperties: false at the root and has no top-level env field. The only env field is at $defs.MCPServerConfig.properties.env, which injects env vars into MCP server child processes — not into Gemini CLI itself.

As a result, the current implementation writes security.auth.selectedType to settings.json (correct and effective) but cannot automate the env var configuration. Users must currently add the following manually to ~/.gemini/.env (Gemini CLI loads $GEMINI_DIR/.env and cwd/.env):

GEMINI_API_KEY=thv-proxy
GOOGLE_GEMINI_BASE_URL=<proxy-origin>   # e.g. http://localhost:14000

Proposed fix

Extend clientAppConfig with a .env-file mechanism — a new field listing key/value pairs to write to a nominated dotenv file (e.g. ~/.gemini/.env) alongside the existing JSON-pointer patching. thv llm setup and thv llm teardown would manage this file the same way they manage settings.json entries today.


Blocker 2 — gateway endpoint mismatch

The Envoy AI Gateway currently exposes only an OpenAI-compatible endpoint (/v1/chat/completions). Gemini CLI sends requests to /v1beta/models/{model}:generateContent (the native Gemini API format), which the gateway rejects.

Even with env vars correctly injected (Blocker 1 fixed), requests from Gemini CLI would still be rejected upstream.

Proposed fix

Add a Google/Gemini API format backend to the Envoy AI Gateway configuration so that /v1beta/* requests from Gemini CLI can be proxied and served end-to-end.
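For reference, the request shape the gateway would need to route can be sketched as follows. The proxy origin and model name are placeholders; the path matches the generateContent format cited above, and the body and `x-goog-api-key` header follow the public Gemini API conventions:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// buildGenerateContentRequest sketches the native Gemini API call that
// Gemini CLI issues and that the gateway would need to serve.
func buildGenerateContentRequest(origin, model, prompt string) (*http.Request, error) {
	url := fmt.Sprintf("%s/v1beta/models/%s:generateContent", origin, model)
	body := fmt.Sprintf(`{"contents":[{"parts":[{"text":%q}]}]}`, prompt)
	req, err := http.NewRequest(http.MethodPost, url, strings.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	// The placeholder key from ~/.gemini/.env; the proxy swaps in the real one.
	req.Header.Set("x-goog-api-key", "thv-proxy")
	return req, nil
}

func main() {
	req, err := buildGenerateContentRequest("http://localhost:14000", "gemini-2.0-flash", "hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path) // POST /v1beta/models/gemini-2.0-flash:generateContent
}
```

Note the colon-suffixed action segment (`:generateContent`): the gateway's route matching has to tolerate this, since it does not fit the plain path-segment style of /v1/chat/completions.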


Acceptance criteria

  • thv llm setup --client gemini-cli writes GEMINI_API_KEY and GOOGLE_GEMINI_BASE_URL to ~/.gemini/.env automatically
  • thv llm teardown --client gemini-cli removes those entries from ~/.gemini/.env
  • The gateway serves /v1beta/models/{model}:generateContent requests from Gemini CLI
  • End-to-end test: Gemini CLI routes through the local proxy to the upstream gateway without manual steps

Labels

  • cli: Changes that impact CLI functionality
  • enhancement: New feature or request
  • go: Pull requests that update go code
  • llm gateway: LLM gateway authentication feature