[fix] run_response.model: update from actual provider response model name after LLM call #6963
Open
NIK-TIGER-BILL wants to merge 2 commits into agno-agi:main from
Conversation
…name run_response.model was preset from agent.model.id before the LLM call and never updated afterwards. When a LiteLLM router/fallback switches to a different model, the actual model used is in the API response object (response.model), but it was ignored, so callers could not determine which model was actually used.

Fix:
1. In OpenAIChat._parse_provider_response(), capture response.model in model_response.provider_data['model'] when present.
2. In update_run_response(), if provider_data contains a 'model' key, update run_response.model so the value reflects the real model used.

Closes agno-agi#6921
Problem

`run_response.model` is set to `agent.model.id` before the LLM call and never updated afterwards (closes #6921). When a LiteLLM router/proxy switches models via a fallback (e.g. DeepSeek → Qwen), the API response object's `model` field contains the actual model name, but it was silently ignored. Callers who read `run_output.model` to identify the model that was used see the originally configured model name, not the one that actually served the request.

Fix
1. In `OpenAIChat._parse_provider_response()`, save `response.model` into `model_response.provider_data['model']` when it is present.
2. In `update_run_response()`, update `run_response.model` from `provider_data['model']` if available.

Closes #6921
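The two steps can be sketched end to end. This is a minimal, self-contained illustration, not the actual agno code: `ModelResponse`, `RunResponse`, and the provider response object are simplified stand-ins, and only the field names quoted in this PR (`provider_data['model']`, `run_response.model`, `response.model`) are taken from the description.

```python
from dataclasses import dataclass
from types import SimpleNamespace
from typing import Any, Dict, Optional


@dataclass
class ModelResponse:
    """Simplified stand-in for agno's parsed model response."""
    content: Optional[str] = None
    provider_data: Optional[Dict[str, Any]] = None


@dataclass
class RunResponse:
    """Simplified stand-in for agno's run response."""
    model: Optional[str] = None


def parse_provider_response(response: Any) -> ModelResponse:
    """Step 1: capture the provider-reported model name when present."""
    model_response = ModelResponse()
    actual_model = getattr(response, "model", None)
    if actual_model:
        model_response.provider_data = {"model": actual_model}
    return model_response


def update_run_response(run_response: RunResponse, model_response: ModelResponse) -> None:
    """Step 2: if provider_data carries a 'model' key, propagate it onto run_response."""
    if model_response.provider_data and "model" in model_response.provider_data:
        run_response.model = model_response.provider_data["model"]


# Example: the agent was configured for DeepSeek, but a LiteLLM fallback
# actually served the request with a different model (names are illustrative).
run = RunResponse(model="deepseek-chat")
api_response = SimpleNamespace(model="qwen-2.5-72b-instruct")
update_run_response(run, parse_provider_response(api_response))
print(run.model)  # qwen-2.5-72b-instruct
```

If the provider response carries no `model` field, `provider_data` stays `None` and `run_response.model` keeps the originally configured value, so existing behavior is unchanged.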