feat: allow azure openai to pass custom uri #920

Open · wants to merge 1 commit into main

Conversation

umuthopeyildirim


Description: (optional)
This pull request introduces support for a custom Azure URI for Azure OpenAI requests. The following changes have been made:

Header Handling:
Updated constructConfigFromRequestHeaders to extract azureCustomUri from the incoming request headers, enabling the client to pass a custom URI.

Provider Configuration:
Modified AzureOpenAIAPIConfig in src/providers/azure-openai/api.ts to use azureCustomUri when constructing the base URL for API requests.
Updated the endpoint resolution logic so that, if a custom URI is provided, the URL structure is adapted accordingly (e.g., DeepSeek-R1 on Azure uses {resourceName}.services.ai.azure.com/models). A sketch of these changes appears after the Type Definitions section below.

Type Definitions:
Extended the Options interface in src/types/requestBody.ts to include the azureCustomUri field.
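
For reference, here is a minimal TypeScript sketch of what these changes might look like. The header keys, the field names other than azureCustomUri, and the default Azure OpenAI URL shape are illustrative assumptions, not the gateway's actual code:

// src/types/requestBody.ts (sketch): extend Options with the new field.
// Field names other than azureCustomUri are illustrative.
interface Options {
  resourceName?: string;
  deploymentId?: string;
  azureCustomUri?: string; // the field this PR adds
}

// Header handling (sketch): extract the custom URI from request headers.
// The header names used here are assumptions, not the gateway's actual keys.
function constructConfigFromRequestHeaders(
  headers: Record<string, string>
): Options {
  return {
    resourceName: headers['x-portkey-azure-resource-name'],
    deploymentId: headers['x-portkey-azure-deployment-id'],
    azureCustomUri: headers['x-portkey-azure-custom-uri'],
  };
}

// src/providers/azure-openai/api.ts (sketch): prefer the custom URI when set.
const getBaseURL = (options: Options): string => {
  if (options.azureCustomUri) {
    // e.g. DeepSeek-R1 on Azure AI Services:
    // https://{resourceName}.services.ai.azure.com/models
    return options.azureCustomUri;
  }
  // Fall back to the standard Azure OpenAI host shape.
  return `https://${options.resourceName}.openai.azure.com/openai/deployments/${options.deploymentId}`;
};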

Motivation: (optional)
This enhancement is necessary to support scenarios where a non-standard Azure endpoint is required (such as DeepSeek-R1). By allowing users to pass a custom URI, we provide greater flexibility and ensure compatibility with alternative endpoint configurations.


@VisargD
Collaborator

VisargD commented Feb 7, 2025

> This enhancement is necessary to support scenarios where a non-standard Azure endpoint is required (such as DeepSeek-R1). By allowing users to pass a custom URI, we provide greater flexibility and ensure compatibility with alternative endpoint configurations.

Hey @umuthopeyildirim, DeepSeek-R1 (and other models) is already supported as part of the azure-ai provider. azure-openai is specifically for Azure OpenAI services. Here is an example using the azure-ai provider:

curl --request POST \
  --url <gateway_url>/v1/chat/completions \
  --header 'Authorization: API_KEY' \
  --header 'content-type: application/json' \
  --header 'x-portkey-azure-api-version: 2024-05-01-preview' \
  --header 'x-portkey-azure-deployment-name: Llama-2-7b-chat-cgnkg' \
  --header 'x-portkey-azure-deployment-type: serverless' \
  --header 'x-portkey-azure-region: eastus' \
  --header 'x-portkey-provider: azure-ai' \
  --data '{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant"
    },
    {
      "role": "user",
      "content": "Hello?"
    }
  ]
}'

@umuthopeyildirim
Author

Hey @VisargD, yes, I'm aware of Azure AI Inference, but the subdomains still don't match, and Azure is now deploying DeepSeek-R1 under Azure AI Services. My endpoint for DeepSeek is: .services.ai.azure.com
[Screenshot (2025-02-07) of the Azure portal showing the .services.ai.azure.com endpoint]

@narengogi
Collaborator

narengogi commented Feb 12, 2025

Hey @umuthopeyildirim, I think you could add the same check in the azure-ai provider itself, as @VisargD mentioned.

I've noticed that documentation for this provider is missing. Tagging @b4s36t4, who wrote the azure-ai integration, to update the docs and suggest what changes could be made to support DeepSeek.
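
For illustration, a rough sketch of what such a check could look like in the azure-ai provider's base-URL logic. The 'ai-services' deployment-type value and the azureResourceName option are assumptions for this sketch, not existing gateway options:

// Sketch only: extend azure-ai base-URL resolution with an Azure AI
// Services branch alongside the existing serverless branch.
const getAzureAIBaseURL = (options: {
  azureDeploymentType?: string;
  azureDeploymentName?: string;
  azureRegion?: string;
  azureResourceName?: string;
}): string => {
  // Existing serverless branch (as in the current implementation).
  if (options.azureDeploymentType === 'serverless') {
    return `https://${options.azureDeploymentName?.toLowerCase()}.${options.azureRegion}.models.ai.azure.com`;
  }
  // Hypothetical new branch for Azure AI Services hosts (e.g. DeepSeek-R1).
  if (options.azureDeploymentType === 'ai-services') {
    return `https://${options.azureResourceName}.services.ai.azure.com/models`;
  }
  throw new Error(
    `Unsupported azure-ai deployment type: ${options.azureDeploymentType}`
  );
};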

@narengogi narengogi requested a review from b4s36t4 February 12, 2025 07:02
@b4s36t4
Contributor

b4s36t4 commented Feb 14, 2025

Hey, @umuthopeyildirim. Thanks for pointing out the issue and the PR.

Although the solution you've proposed is good, I don't think it is applicable to azure-openai. That specific endpoint is only available from, and can only be created in, Azure AI Foundry.

I will create a new PR with your changes, but it will target the azure-ai provider.

The problem you've faced integrating DeepSeek with Azure is not really a problem: the endpoint Azure gives you belongs to the azure-ai provider.
You can take a look at the implementation here:

if (azureDeploymentType === 'serverless') {
  return `https://${azureDeploymentName?.toLowerCase()}.${azureRegion}.models.ai.azure.com`;
}
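
With the headers from the earlier curl example (deployment name Llama-2-7b-chat-cgnkg, region eastus), this serverless branch resolves to https://llama-2-7b-chat-cgnkg.eastus.models.ai.azure.com, which is a different host shape from the {resourceName}.services.ai.azure.com endpoint that Azure AI Services issues for DeepSeek-R1.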

For a bit more about the problem as it relates to the azure-ai provider, see https://learn.microsoft.com/en-us/azure/ai-foundry/model-inference/concepts/endpoints?tabs=rest

Please share your thoughts if I seem to have diverged from the problem.
