Replies: 5 comments 1 reply
-
Hi @davidames You're absolutely right to flag this; it's an issue between what the model returns and what the SDK expects. Does Microsoft support using SK / ME.AI / Azure.AI.OpenAI with Foundry-hosted models like DeepSeek? Yes, but with caveats. Microsoft explicitly promotes Foundry as a unified access layer for OSS and proprietary models, and frameworks like Semantic Kernel (SK), ME.AI, and Azure.AI.OpenAI are designed to work with Foundry endpoints.
However, while support exists in principle, in practice it's not yet robust across all models, especially OSS ones with looser schema adherence. There are two key repos, depending on where the fix should land:
Given that DeepSeek is returning a non-standard finish_reason value, how are others working around this? From Discussion #89 and similar threads:
Suggested Path Forward
Here's a cleaner workaround than rewriting JSON strings:

public static ChatFinishReason ToChatFinishReasonSafe(string value)
{
    return value.ToLowerInvariant() switch
    {
        "stop" => ChatFinishReason.Stop,
        "length" => ChatFinishReason.Length,
        "content_filter" => ChatFinishReason.ContentFilter,
        "tool_calls" => ChatFinishReason.ToolCalls,
        "tool_call" => ChatFinishReason.ToolCalls,   // normalize singular to plural
        "toolcall" => ChatFinishReason.ToolCalls,    // value observed from DeepSeek-R1-0528 on Foundry
        "function_call" => ChatFinishReason.FunctionCall,
        _ => throw new ArgumentOutOfRangeException(nameof(value), value, "Unknown ChatFinishReason value.")
    };
}

You could inject this into your deserialization pipeline or patch the SDK locally while waiting for upstream fixes.
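For example, a minimal usage sketch (the 'toolcall' spelling is the one reported in the traces in the original post; the variable names are illustrative only):

ChatFinishReason reason = ToChatFinishReasonSafe("toolcall");   // -> ChatFinishReason.ToolCalls
ChatFinishReason normal = ToChatFinishReasonSafe("tool_calls"); // -> ChatFinishReason.ToolCalls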
-
Thank you @leestott for that well-considered answer. I think I understand the situation now - where Foundry hosts 3rd party models, Foundry is basically passing the request/response pairs through as-is - it's not like you are doing any real transformation of payloads inside Foundry. RE: your suggestion - from a practical POV, how do I inject something like ToChatFinishReasonSafe into my deserialization pipeline, assuming I'm using something like M.E.AI or SK or something else that leverages the underlying OpenAI .NET SDK? These classes seem to be very "sealed", "closed" and hard to modify, apart from at the HTTP layer. dave
-
@davidames Are you finding this issue when using the OpenAI SDK for .NET directly with Azure AI Foundry Models, with an endpoint using the latest OpenAI /v1 route API defined here: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/latest (https://learn.microsoft.com/en-us/azure/ai-foundry/openai/api-version-lifecycle?tabs=key)? If yes, could you please file an issue in the OpenAI repo https://github.com/openai/openai-dotnet/issues? cc @brandom-msft
-
@m-nash, @JoshLove-msft, will the JsonPatch feature handle this case? Logically, this is equivalent to the type of a property changing.
-
@KrzysztofCwalina I don't know exactly which codebase the screenshot of ChatFinishReason.Serialization.cs came from, but JsonPatch might help in some scenarios. If
-
I don't really know where this issue sits from a team perspective. The way I'm looking at it is that MS are offering access to models such as DeepSeek via Foundry, and they offer frameworks such as SK, ME.AI and Azure.AI.OpenAI to access Foundry, so the assumption is that they should work together.
The problem is that the above MS frameworks all depend on the OpenAI NuGet package. OpenAI defines the valid ChatFinishReason values as an enum, with tool_calls (plural) being a valid value. DeepSeek-R1-0528 on Foundry returns 'toolcall' (singular) as the ChatFinishReason, and that causes the exception in all of the above packages.
Here is a trace of the offending response
Here is a trace of a valid response using a different model
The difference is finish_reason=toolcall vs finish_reason=tool_calls
I can work around this by writing my own HttpClientHandler and doing a string replacement to fix the JSON that comes back from Foundry for DeepSeek, but this feels pretty gross.
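For reference, here's roughly what that handler looks like - a minimal sketch assuming the only fix needed is rewriting the finish_reason string; the class name and the exact replacement are illustrative, not a vetted implementation:

using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

// Rewrites DeepSeek's non-standard finish_reason value before the OpenAI SDK deserializes it.
// Note: this buffers the whole body, so it is not suitable for streaming responses as-is,
// and the naive string replace assumes no whitespace around the colon.
public sealed class FinishReasonFixupHandler : DelegatingHandler
{
    public FinishReasonFixupHandler(HttpMessageHandler innerHandler) : base(innerHandler) { }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);

        if (response.Content != null &&
            response.Content.Headers.ContentType?.MediaType == "application/json")
        {
            string body = await response.Content.ReadAsStringAsync(cancellationToken);
            string patched = body.Replace("\"finish_reason\":\"toolcall\"", "\"finish_reason\":\"tool_calls\"");
            response.Content = new StringContent(patched, Encoding.UTF8, "application/json");
        }

        return response;
    }
}

The handler then has to be wired into whichever HttpClient/pipeline transport the SDK client is configured with, which is why this only works at the HTTP layer.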
Questions:
These sorts of minor incompatibilities happen in other places too, such as https://github.com/orgs/azure-ai-foundry/discussions/89