Chore/return model in response #982

Merged: 12 commits merged into Portkey-AI:main on Apr 11, 2025

Conversation

@narengogi (Collaborator) commented Mar 11, 2025

Code Quality · new feature

Author Description

🔄 What Changed

  • The Pull Request introduces changes to return the model in responses for various providers, including Bedrock, Anthropic, and others. It modifies the response handling functions to include the model in the response structure.
  • Several functions have been updated to accept a new parameter gatewayRequest, which contains the model information.

🔍 Impact of the Change

  • This change enhances the response structure by including the model used for generating the response, which can be beneficial for debugging and tracking the model's performance.
  • It ensures that all providers return consistent information regarding the model used, improving the overall API usability.

📁 Total Files Changed

  • 20 files were modified in this Pull Request.

🧪 Test Added

  • No new tests were added in this Pull Request. It is recommended to add unit tests to verify the new functionality of returning the model in responses.

🔒 Security Vulnerabilities

  • No security vulnerabilities were detected in this Pull Request.

Title: Return model in API responses

🔄 What Changed

  • Added functionality to return the model information in API responses across various providers including Bedrock, Anthropic, Cohere, and others
  • Modified response handling functions to include a new parameter gatewayRequest which contains model information (a rough sketch of this pattern follows this list)
  • Updated both streaming and non-streaming response handlers to include model information in the response structure
  • Ensured consistency in how model information is returned across different providers
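
Below is a minimal TypeScript sketch of the transform pattern described in this list. The type and function names (GatewayParams, exampleChatCompleteResponseTransform) are hypothetical and do not correspond to the gateway's actual signatures; the point is simply that the transform receives the original gatewayRequest and falls back to its model field when the provider payload omits one.

```typescript
// Hypothetical request shape; the real gateway types may differ.
interface GatewayParams {
  model?: string;
  [key: string]: unknown;
}

// Hypothetical unified response shape carrying the model identifier.
interface ChatCompletionResponse {
  id: string;
  object: string;
  created: number;
  model: string;
  choices: unknown[];
}

function exampleChatCompleteResponseTransform(
  providerResponse: Record<string, unknown>,
  responseStatus: number,
  gatewayRequest: GatewayParams
): ChatCompletionResponse {
  return {
    id: String(providerResponse.id ?? ''),
    object: 'chat.completion',
    created: Math.floor(Date.now() / 1000),
    // Fall back to the requested model when the provider omits it in its payload.
    model: (providerResponse.model as string) ?? gatewayRequest.model ?? '',
    choices: (providerResponse.choices as unknown[]) ?? [],
  };
}
```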

🔍 Impact of the Change

  • Enhances response structure by including the model used for generating responses
  • Improves debugging capabilities by allowing tracking of which model was used for each request
  • Provides more consistent API responses across all providers
  • Makes it easier to analyze model performance and behavior in production

📁 Total Files Changed

  • 20 files were modified, primarily focusing on response transformation functions

🧪 Test Added

  • No new tests were added in this Pull Request

🔒 Security Vulnerabilities

  • No security vulnerabilities were detected in this Pull Request

Quality Recommendations

  1. Add unit tests to verify the model information is correctly included in responses for each provider

  2. Consider adding TypeScript interface updates to formally document the new response structure

  3. Add JSDoc comments to explain the purpose of the new gatewayRequest parameter in functions (see the illustrative sketch below)
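
As a hypothetical illustration of recommendations 2 and 3, the sketch below documents a response interface carrying the model field and a JSDoc'd gatewayRequest parameter. The names (UnifiedChatResponse, transformResponse) are illustrative assumptions, not taken from the repository.

```typescript
/** Unified response shape that now always carries the model identifier. */
interface UnifiedChatResponse {
  id: string;
  /** Model that produced the response, echoed from the gateway request when the provider omits it. */
  model: string;
  choices: unknown[];
}

/**
 * Transforms a provider payload into the unified response format.
 * @param response - Raw provider response body.
 * @param gatewayRequest - Original request received by the gateway; its `model`
 *   field is used as a fallback when the provider does not return one.
 */
function transformResponse(
  response: Record<string, unknown>,
  gatewayRequest: { model?: string }
): UnifiedChatResponse {
  return {
    id: String(response.id ?? ''),
    model: (response.model as string) ?? gatewayRequest.model ?? '',
    choices: (response.choices as unknown[]) ?? [],
  };
}
```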

@narengogi narengogi marked this pull request as ready for review March 11, 2025 12:50
@narengogi narengogi requested review from VisargD and b4s36t4 March 11, 2025 12:51
@narengogi narengogi self-assigned this Mar 11, 2025
@b4s36t4 (Contributor) commented Mar 21, 2025

/matter summary


I'm generating a summary for this PR. I'll post it shortly.


Chore/return model in response

🔄 What Changed

  • The responseHandler function in responseHandlers.ts was modified to include a new parameter gatewayRequest.
  • The readAWSStream and readStream functions in streamHandler.ts were updated to accept the gatewayRequest parameter (a streaming-side sketch follows this list).
  • Multiple response transformation functions across various providers (e.g., AI21, Bedrock, Cohere) were updated to include the gatewayRequest parameter, allowing for better handling of model information.
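
A rough sketch of the streaming side follows, under the assumption that a handler threads gatewayRequest through to a per-chunk transform. The names readExampleStream and transformChunk are hypothetical and do not match the repository's actual helpers.

```typescript
// Streams the provider response, passing gatewayRequest to every chunk transform
// so each emitted event can report the model that served it.
async function* readExampleStream(
  response: Response,
  gatewayRequest: { model?: string },
  transformChunk: (chunk: string, gatewayRequest: { model?: string }) => string
): AsyncGenerator<string> {
  if (!response.body) return;
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each decoded chunk is transformed with access to the original request,
    // so the model information stays available throughout the stream.
    yield transformChunk(decoder.decode(value, { stream: true }), gatewayRequest);
  }
}
```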

🔍 Impact of the Change

  • This change makes response handling more flexible by passing the model information through the gatewayRequest parameter, so responses from different AI providers consistently report which model served them.

📁 Total Files Changed

  • 12 files were modified in this pull request, with a total of 380 lines changed (additions and deletions combined).

🧪 Test Added

  • No new tests were added in this pull request. It is recommended to add unit tests to verify the behavior of the modified functions, especially focusing on the handling of the new gatewayRequest parameter.

🔒 Security Vulnerabilities

  • No security vulnerabilities were detected in this pull request.

@b4s36t4 (Contributor) commented Mar 21, 2025

/matter review


I'm reviewing this PR now. I'll provide feedback shortly.


@matter-code-review bot left a comment


The changes look good overall, but please consider adding unit tests to ensure the new functionality is covered.

@b4s36t4 (Contributor) commented Mar 21, 2025

/matter explain


This Pull Request introduces several modifications primarily focused on enhancing the response handling for various AI providers by including the model in the responses. The changes span multiple files and involve both non-streaming and streaming response handlers.

  1. Response Handlers Update: The responseHandlers.ts file has been modified to include a new parameter gatewayRequest in the responseHandler function. This gives the handler access to the model information from the request so that it can be included in the transformed response.

  2. Stream Handlers Update: The streamHandler.ts file has also been updated to include the gatewayRequest parameter in several functions, such as readAWSStream and readStream. This ensures that the model information is consistently available throughout the streaming process, so each streamed chunk can report the model that produced it.

  3. Provider-Specific Changes: Each provider's response transformation functions (e.g., AI21ChatCompleteResponseTransform, BedrockChatCompleteResponseTransform, etc.) have been updated to accept the gatewayRequest parameter. This allows the model to be included in the response object, which is essential for tracking which model was used for generating the response.

  4. Error Handling: The error response transformations have also been updated to include the model in the error responses, ensuring that even in failure cases the model information is available for debugging and logging purposes (a hedged sketch of this pattern appears after this explanation).

  5. New Type Imports: The Params type has been imported in several files so that the gatewayRequest parameter is correctly typed, improving type safety and reducing potential runtime errors.

  6. Testing: While the PR does not explicitly mention new tests, the extensive changes to the response handling logic imply that existing tests may need to be updated to account for the new parameters and ensure that the model information is correctly handled in both success and error scenarios.

Overall, this PR significantly enhances the response handling capabilities of the application by ensuring that the model information is consistently included across various AI providers, improving the traceability and accuracy of responses.
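
To make point 4 concrete, here is a hedged sketch of an error transform that also receives gatewayRequest so failed calls can still be attributed to a model. The shapes and names (GatewayErrorResponse, exampleErrorResponseTransform) are assumptions for illustration, not the repository's actual code.

```typescript
// Hypothetical unified error shape that still carries the model identifier.
interface GatewayErrorResponse {
  error: { message: string; type: string | null; code: string | null };
  provider: string;
  model?: string;
}

function exampleErrorResponseTransform(
  providerError: { message?: string; type?: string; code?: string },
  provider: string,
  gatewayRequest: { model?: string }
): GatewayErrorResponse {
  return {
    error: {
      message: providerError.message ?? 'Unknown error',
      type: providerError.type ?? null,
      code: providerError.code ?? null,
    },
    provider,
    // Include the requested model so failures can be traced to a model during debugging.
    model: gatewayRequest.model,
  };
}
```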

@narengogi (Collaborator, Author) commented:

@VisargD @b4s36t4 I've verified and added the model from the response where available.

@narengogi narengogi requested a review from VisargD April 9, 2025 07:25
@VisargD VisargD merged commit f2ab09f into Portkey-AI:main Apr 11, 2025
1 check passed