
[fix] run_response.model: update from actual provider response model name after LLM call #6963

Open
NIK-TIGER-BILL wants to merge 2 commits into agno-agi:main from NIK-TIGER-BILL:fix/run-response-model-from-actual-response

Conversation

@NIK-TIGER-BILL
Contributor

Problem

run_response.model is set to agent.model.id before the LLM call and never updated afterwards (closes #6921).

When a LiteLLM router/proxy switches models via a fallback (e.g. DeepSeek → Qwen), the API response object's model field contains the actual model name — but it was silently ignored. Callers who read run_output.model to identify the model that was used see the originally configured model name, not the one that actually served the request.
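A minimal sketch of the symptom, using illustrative stand-ins rather than the actual agno internals (the model names and `SimpleNamespace` response are assumptions for demonstration):

```python
from types import SimpleNamespace

configured_model_id = "deepseek-chat"  # what agent.model.id was set to

# The provider response reports the model that actually served the
# request, e.g. after a LiteLLM router fallback switched to Qwen.
response = SimpleNamespace(model="qwen-2.5-72b-instruct")

# Before the fix: run_response.model is preset from the configured id
# and never reconciled with response.model.
run_response_model = configured_model_id

print(run_response_model)  # deepseek-chat (stale)
print(response.model)      # qwen-2.5-72b-instruct (actual)
```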

Fix

1. In OpenAIChat._parse_provider_response(), save response.model into model_response.provider_data['model'] when it is present:

if getattr(response, 'model', None):
    # provider_data may not have been initialized yet
    if model_response.provider_data is None:
        model_response.provider_data = {}
    model_response.provider_data['model'] = response.model

2. In update_run_response(), update run_response.model from provider_data['model'] if available:

_actual_model = (model_response.provider_data or {}).get('model')
if _actual_model:
    run_response.model = _actual_model
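The two steps combined can be sketched end to end; the dataclasses and function names below are simplified stand-ins for the real agno classes, not the actual implementation:

```python
from dataclasses import dataclass, field
from types import SimpleNamespace


@dataclass
class ModelResponse:
    # Simplified stand-in for agno's ModelResponse.
    provider_data: dict = field(default_factory=dict)


@dataclass
class RunResponse:
    # Preset from agent.model.id before the LLM call.
    model: str = "deepseek-chat"


def parse_provider_response(response, model_response):
    # Step 1: capture the provider-reported model name when present.
    if getattr(response, 'model', None):
        if model_response.provider_data is None:
            model_response.provider_data = {}
        model_response.provider_data['model'] = response.model


def update_run_response(model_response, run_response):
    # Step 2: propagate the actual model name onto the run response.
    _actual_model = (model_response.provider_data or {}).get('model')
    if _actual_model:
        run_response.model = _actual_model


# A fallback served the request with a different model.
resp = SimpleNamespace(model="qwen-2.5-72b-instruct")
mr = ModelResponse()
rr = RunResponse()
parse_provider_response(resp, mr)
update_run_response(mr, rr)
print(rr.model)  # qwen-2.5-72b-instruct
```

With both steps applied, `rr.model` reflects the model that actually served the request instead of the originally configured one.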

Closes #6921

@NIK-TIGER-BILL NIK-TIGER-BILL requested a review from a team as a code owner March 11, 2026 18:39


Development

Successfully merging this pull request may close these issues.

The model name output by run_output.model does not match the model name actually returned by litellm
