
Conversation

@MalyalaKarthik66
Contributor

This PR fixes incorrect NaN propagation in the OpenVINO backend for arctan2.

Fixes #22061

@gemini-code-assist
Contributor

Summary of Changes

Hello @MalyalaKarthik66, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue in the OpenVINO backend where the arctan2 function failed to correctly propagate NaN (Not-a-Number) values. The changes introduce robust NaN handling, ensuring that the output of arctan2 is NaN whenever one or both of its inputs are NaN. This enhancement improves the numerical stability and correctness of computations within the OpenVINO integration, preventing unexpected results from invalid inputs.

Highlights

  • NaN Propagation Fix: Implemented explicit NaN detection and propagation logic within the arctan2 function in the OpenVINO backend to ensure that if any input is NaN, the output is also NaN.
  • OpenVINO Backend arctan2 Refinement: Modified the arctan2 implementation in keras/src/backend/openvino/numpy.py to correctly handle NaN inputs, aligning its behavior with standard numerical expectations.
  • Test Case Addition: Added a new test case, test_arctan2_nan_propagation, to keras/src/ops/numpy_test.py to validate the correct NaN propagation behavior of arctan2.
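The select-based masking described in the highlights can be sketched in plain NumPy (a reference sketch only; the actual change builds an OpenVINO graph with `ov_opset` ops, and `arctan2_with_nan_propagation` is a hypothetical helper name, not code from the PR):

```python
import numpy as np

def arctan2_with_nan_propagation(x1, x2):
    # Hypothetical NumPy analogue of the OpenVINO graph logic:
    # build a mask of positions where either input is NaN, compute
    # the ordinary arctan2 result, then select NaN wherever the
    # mask is set.
    x1 = np.asarray(x1, dtype=np.float32)
    x2 = np.asarray(x2, dtype=np.float32)
    nan_mask = np.isnan(x1) | np.isnan(x2)
    value_out = np.arctan2(x1, x2)
    return np.where(nan_mask, np.float32("nan"), value_out)
```

The added `test_arctan2_nan_propagation` presumably asserts exactly this behavior: NaN in either input yields NaN, while ordinary inputs are unaffected.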


@codecov-commenter

codecov-commenter commented Jan 27, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.80%. Comparing base (b0de8d9) to head (81f8be7).

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #22064   +/-   ##
=======================================
  Coverage   82.80%   82.80%           
=======================================
  Files         592      592           
  Lines       62463    62468    +5     
  Branches     9783     9783           
=======================================
+ Hits        51723    51728    +5     
  Misses       8210     8210           
  Partials     2530     2530           
Flag Coverage Δ
keras 82.63% <100.00%> (+<0.01%) ⬆️
keras-jax 62.39% <0.00%> (-0.01%) ⬇️
keras-numpy 56.48% <0.00%> (-0.01%) ⬇️
keras-openvino 37.63% <100.00%> (+<0.01%) ⬆️
keras-tensorflow 63.64% <0.00%> (-0.01%) ⬇️
keras-torch 62.42% <0.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

Contributor

@gemini-code-assist left a comment

Code Review

This pull request addresses an issue with NaN propagation in the OpenVINO backend for the arctan2 function. The changes involve adding NaN checks for the input tensors x1 and x2, creating a NaN mask, and using this mask to selectively output NaN values when either input is NaN. A new test case is also added to verify the correct NaN propagation.

Comment on lines +402 to +404
```python
# Generate NaN safely for all floating dtypes (including bf16)
nan_value = ov_opset.divide(zero_const, zero_const)
final_out = ov_opset.select(nan_mask, nan_value, value_out)
```
Severity: high

Dividing by zero can be unsafe. While this is intended to generate NaN, it might be better to use ov_opset.constant(float('nan'), ov_type) directly if OpenVINO supports it for all floating-point types, as it would be more explicit and potentially more efficient. If it doesn't support it for all floating point types, then the current approach is fine, but it would be good to add a comment explaining why this approach is necessary to generate NaN safely for all floating-point types, including bf16.
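For context on the 0/0 trick: IEEE 754 defines zero divided by zero as a quiet NaN in every floating-point format, which is presumably why the PR synthesizes NaN from an existing zero constant rather than a NaN literal. A minimal NumPy illustration (not the OpenVINO code):

```python
import numpy as np

# IEEE 754: 0/0 yields a quiet NaN in any float dtype, so building NaN
# as divide(zero, zero) works uniformly without a dtype-specific literal.
with np.errstate(invalid="ignore"):  # suppress the expected 0/0 warning
    for dtype in (np.float16, np.float32, np.float64):
        zero = dtype(0.0)
        assert np.isnan(zero / zero)
```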

```python
neg_half_pi = ov_opset.constant(-float(np.pi / 2), ov_type)
zero_const = ov_opset.constant(0.0, ov_type)

cond_x2_gt0 = ov_opset.greater(x2, zero_const).output(0)
```
Severity: medium

The .output(0) call is unnecessary here: the node returned by ov_opset.greater can be passed directly to downstream ops, so you can remove it to simplify the code.

Suggested change:

```diff
-cond_x2_gt0 = ov_opset.greater(x2, zero_const).output(0)
+cond_x2_gt0 = ov_opset.greater(x2, zero_const)
 cond_x2_lt0 = ov_opset.less(x2, zero_const)
```

```python
cond_x2_gt0 = ov_opset.greater(x2, zero_const)
cond_x2_lt0 = ov_opset.less(x2, zero_const)

cond_x1_ge0 = ov_opset.greater_equal(x1, zero_const).output(0)
```
Severity: medium

The .output(0) call is unnecessary here: the node returned by ov_opset.greater_equal can be passed directly to downstream ops, so you can remove it to simplify the code.

Suggested change:

```diff
-cond_x1_ge0 = ov_opset.greater_equal(x1, zero_const).output(0)
+cond_x1_ge0 = ov_opset.greater_equal(x1, zero_const)
 cond_x1_gt0 = ov_opset.greater(x1, zero_const)
 cond_x1_eq0 = ov_opset.equal(x1, zero_const)
```



Development

Successfully merging this pull request may close these issues.

layers.TimeDistributed produce non-nan results for nan inputs with openvino backend

3 participants