Description
Query/Question
How do I properly handle errors from the OpenAI client?
I'm using OpenAIClient.getChatCompletions(). I noticed that different types of errors all surface as HttpResponseException, and I'm trying to understand the best approach to parse/handle those errors.
For instance, the same HttpResponseException is thrown when the request exceeds the model's context length and when the request is blocked by the OpenAI content filter. I need very different handling for those errors: for the model length limit I want to retry the request with a different model, while for the content filter I want to reply with a polite message to my user.
How can I differentiate the actual error causing the exception? Is there anything in the SDK that can help me parse the error? Some sample code with error handling would help.
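For reference, here is a rough sketch of what I am doing today. The error-code strings (context_length_exceeded, content_filter) and the string-matching on the exception message are my own guesses, which is exactly the fragile part I'd like to replace with something the SDK supports:

```java
import com.azure.ai.openai.OpenAIClient;
import com.azure.ai.openai.models.ChatCompletions;
import com.azure.ai.openai.models.ChatCompletionsOptions;
import com.azure.core.exception.HttpResponseException;

public class ChatWithErrorHandling {

    // deploymentName/options stand in for my real request parameters.
    static ChatCompletions callWithHandling(OpenAIClient client,
                                            String deploymentName,
                                            ChatCompletionsOptions options) {
        try {
            return client.getChatCompletions(deploymentName, options);
        } catch (HttpResponseException e) {
            // Both failure modes arrive as the same exception type.
            int statusCode = e.getResponse().getStatusCode();
            String message = e.getMessage(); // appears to contain the service error JSON

            // Guessing at the error codes by string-matching the message,
            // which feels brittle and is what I'd like to avoid.
            if (statusCode == 400 && message != null) {
                if (message.contains("context_length_exceeded")) {
                    // TODO: retry with a model that has a larger context window
                } else if (message.contains("content_filter")) {
                    // TODO: reply with a polite message to the user
                }
            }
            throw e; // anything else: propagate
        }
    }
}
```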
Why is this not a Bug or a feature Request?
Not a bug; I just need guidance on how to handle errors properly.
Setup (please complete the following information if applicable):
- Library/Libraries: com.azure:azure-ai-openai:1.0.0-beta.8