
Unable to Use Local LLM with .env Configuration — Fails with "Failed to generate prediction with any model" #2098

@talentam

Description


Git provider (optional)

GitLab

System Info (optional)

No response

Issues details

I’m trying to configure PR Agent to use my local, OpenAI-compatible LLM via the .env file, but the agent still attempts to use the default models (gpt-5-2025-08-07, o4-mini) instead of my specified endpoint.

.env Configuration:

```
CONFIG__GIT_PROVIDER=gitlab
GITLAB__PERSONAL_ACCESS_TOKEN=glpat-xxxx
GITLAB__SHARED_SECRET=xxx
GITLAB__URL=http://96.0.56.102/
GITLAB__AUTH_TYPE=private_token

OPENAI__KEY=sk-xxxx
OPENAI__API_BASE=http://96.0.78.166:30010/v1
OPENAI__API_TYPE=qwen3-235b
```
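What I suspect may be missing is an explicit model override. A sketch of what I would add (untested; I'm assuming model selection follows the same CONFIG__-prefixed pattern as CONFIG__GIT_PROVIDER above, and that model names use the LiteLLM provider/model convention):

```
# Untested sketch: override the default model list via CONFIG__ variables,
# mirroring the CONFIG__GIT_PROVIDER pattern above. The "openai/" prefix
# assumes LiteLLM-style provider/model naming.
CONFIG__MODEL=openai/qwen3-235b
CONFIG__FALLBACK_MODELS=["openai/qwen3-235b"]
```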

Docker Command:

```
docker run -d --name pr-agent -p 8080:3000 --env-file .env gitlab_pr_agent:latest
```
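To rule out the env file not reaching the container at all, I can confirm the variables are set inside it (plain Docker, nothing PR-Agent-specific):

```
# Check that the variables from .env are visible inside the running container
docker exec pr-agent env | grep -E '^(CONFIG|GITLAB|OPENAI)__'
```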

Observed Error:

```
{"text": "Failed to generate code suggestions for PR, error: Failed to generate prediction with any model of ['gpt-5-2025-08-07', 'o4-mini']\n"}
```

Despite setting OPENAI__API_BASE, OPENAI__API_TYPE, and OPENAI__KEY, the agent appears to ignore these values and falls back to the hardcoded default model names.
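A sanity check I can run against the endpoint itself (a standard OpenAI-style chat-completions request; the model name is what I believe my server exposes):

```
# Standard OpenAI-compatible chat-completions request against the local endpoint
curl http://96.0.78.166:30010/v1/chat/completions \
  -H "Authorization: Bearer sk-xxxx" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-235b", "messages": [{"role": "user", "content": "ping"}]}'
```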

Questions:
1. Is there an additional configuration needed to override the default model list?
2. Does OPENAI__API_TYPE map to the model name used in requests, or is there a separate MODEL environment variable?
3. How can I ensure the agent uses my OpenAI-compatible endpoint instead of defaulting to OpenAI’s model names?
