
Add fmod to keras.ops.numpy#22369

Open
rstar327 wants to merge 7 commits into keras-team:master from rstar327:add-fmod-op

Conversation

@rstar327 (Contributor) commented Mar 6, 2026

Add keras.ops.fmod — the C-style remainder of division (truncation toward zero).

Unlike keras.ops.mod, which returns a remainder with the same sign as the divisor, fmod returns a remainder with the same sign as the dividend, matching the behavior of C's fmod and numpy.fmod.

Implemented across all backends (NumPy, JAX, TensorFlow, Torch); the OpenVINO backend raises NotImplementedError.

Includes dynamic shape, static shape, correctness, and dtype tests.
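The sign distinction described above is easiest to see side by side. A minimal NumPy sketch (np.mod and np.fmod mirror the keras.ops.mod / keras.ops.fmod behaviors this PR targets; not code from the PR itself):

```python
import numpy as np

x1 = np.array([-7.0, 7.0, -7.0, 7.0])  # dividends
x2 = np.array([3.0, 3.0, -3.0, -3.0])  # divisors

print(np.mod(x1, x2))   # floor-mod: sign follows the divisor  -> [ 2.  1. -1. -2.]
print(np.fmod(x1, x2))  # trunc-mod: sign follows the dividend -> [-1.  1. -1.  1.]
```

The two operations only agree when dividend and divisor have the same sign.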

@gemini-code-assist (bot) commented

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the keras.ops.numpy module by introducing a new fmod function. This function provides a C-style remainder of division, ensuring the result's sign matches the dividend, which is a key distinction from the existing mod operation. The change improves the mathematical utility of Keras operations by offering a specific remainder behavior, and it is consistently implemented across major backends while clearly indicating unsupported platforms.

Highlights

  • New fmod operation added: Introduced keras.ops.fmod, which computes the C-style remainder of division, where the result has the same sign as the dividend. This behavior differs from keras.ops.mod.
  • Multi-backend implementation: The fmod operation has been implemented across NumPy, JAX, TensorFlow, and Torch backends to ensure consistent behavior.
  • OpenVINO exclusion: The OpenVINO backend explicitly raises a NotImplementedError for fmod, and corresponding tests have been excluded.
  • Comprehensive testing: New tests were added for fmod covering dynamic shape, static shape, correctness, and dtype handling to ensure robustness.
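For reviewers checking the per-backend implementations against a single definition: the C-style remainder is x1 - trunc(x1 / x2) * x2. A small reference sketch (illustrative, not code from this PR) that each backend can be compared against:

```python
import numpy as np

def fmod_reference(x1, x2):
    """C-style remainder: x1 - trunc(x1 / x2) * x2; sign follows x1."""
    return x1 - np.trunc(x1 / x2) * x2

x1 = np.array([-3.0, -2.0, 5.5, 7.0])
x2 = np.array([2.0, 2.0, 2.0, -3.0])
# Matches numpy.fmod, the behavior every backend should reproduce.
assert np.allclose(fmod_reference(x1, x2), np.fmod(x1, x2))
```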


Changelog
  • keras/api/_tf_keras/keras/ops/numpy/__init__.py
    • Imported the new fmod function.
  • keras/api/ops/numpy/__init__.py
    • Imported the new fmod function.
  • keras/src/backend/jax/numpy.py
    • Implemented the fmod function using JAX's jnp.fmod.
  • keras/src/backend/numpy/numpy.py
    • Implemented the fmod function using NumPy's np.fmod with explicit type casting.
  • keras/src/backend/openvino/excluded_concrete_tests.txt
    • Added fmod related tests to the list of excluded tests for the OpenVINO backend.
  • keras/src/backend/openvino/numpy.py
    • Added a placeholder fmod function that raises a NotImplementedError.
  • keras/src/backend/tensorflow/numpy.py
    • Implemented the fmod function using TensorFlow operations to replicate C-style remainder behavior.
  • keras/src/backend/torch/numpy.py
    • Implemented the fmod function using PyTorch's torch.fmod with explicit type casting.
  • keras/src/ops/numpy.py
    • Defined the Fmod operation class and the fmod function, including its documentation and keras_export decorators.
  • keras/src/ops/numpy_test.py
    • Added new test cases for fmod covering dynamic shape, static shape, correctness, and dtype handling.
Activity
  • No specific activity (comments, reviews, progress) has been recorded for this pull request yet.

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces keras.ops.fmod, a valuable addition for consistency with NumPy's API. The implementation across different backends is well-structured, and the accompanying tests are thorough. I have a couple of suggestions to enhance the PR: one is to include a code example in the fmod docstring as per the Keras API design guidelines, and the other is to address a correctness issue in the TensorFlow backend implementation to ensure it correctly handles integer dtypes.

Note: Security Review did not run due to the size of the PR.

Comment on lines +2179 to +2181:

    quotient = x1 / x2
    truncated = tf.sign(quotient) * tf.math.floor(tf.math.abs(quotient))
    return x1 - truncated * x2
Severity: high

The current implementation can be simplified by using tf.trunc. More importantly, it returns a float tensor for integer inputs, which is inconsistent with other backends and numpy.fmod. The result should be cast back to the original integer dtype if the inputs were integers.

Suggested change:

    # before
    quotient = x1 / x2
    truncated = tf.sign(quotient) * tf.math.floor(tf.math.abs(quotient))
    return x1 - truncated * x2

    # after
    quotient = x1 / x2
    result = x1 - tf.trunc(quotient) * x2
    if "int" in dtype:
        return tf.cast(result, dtype)
    return result
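The trunc-then-cast logic in the suggestion can be sanity-checked outside TensorFlow with NumPy stand-ins (fmod_trunc and its dtype handling here are illustrative, not code from the PR):

```python
import numpy as np

def fmod_trunc(x1, x2):
    # NumPy stand-in for the suggested TensorFlow fix: compute the
    # truncated remainder in float, then cast back for integer inputs.
    dtype = np.result_type(x1, x2)
    quotient = x1.astype("float64") / x2
    result = x1 - np.trunc(quotient) * x2
    if np.issubdtype(dtype, np.integer):
        return result.astype(dtype)
    return result

x1 = np.array([-7, 7, -7, 7])
x2 = np.array([3, 3, -3, -3])
out = fmod_trunc(x1, x2)
assert out.dtype == x1.dtype               # integer dtype preserved
assert np.array_equal(out, np.fmod(x1, x2))
```

Without the cast, the intermediate float division would silently promote integer inputs to float, which is the inconsistency the reviewer flags.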

Comment on lines +5288 to +5290:

    Returns:
        Output tensor, element-wise remainder with truncation.
    """
Severity: medium

Per the Keras API design guidelines, all docstrings should include code examples. Adding an example here would help users understand the function's behavior, especially its difference from keras.ops.mod.

    Returns:
        Output tensor, element-wise remainder with truncation.

    Examples:
    >>> x1 = keras.ops.array([-3., -2., -1., 1., 2., 3.])
    >>> x2 = keras.ops.array([2., 2., 2., 2., 2., 2.])
    >>> keras.ops.fmod(x1, x2)
    array([-1., -0., -1.,  1.,  0.,  1.], dtype=float32)

    >>> x1 = keras.ops.array([1, 2, 3, 4, 5])
    >>> x2 = keras.ops.array([-2, -2, -2, -2, -2])
    >>> keras.ops.fmod(x1, x2)
    array([1, 0, 1, 0, 1], dtype=int32)
    """
References:
  1. All docstrings should include code examples.

@rstar327 rstar327 closed this Mar 6, 2026
@rstar327 rstar327 reopened this Mar 6, 2026
@codecov-commenter commented Mar 6, 2026

Codecov Report

❌ Patch coverage is 79.62963% with 11 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.96%. Comparing base (07a1ec1) to head (ef470ad).
⚠️ Report is 3 commits behind head on master.

Files with missing lines Patch % Lines
keras/src/backend/torch/numpy.py 62.50% 2 Missing and 1 partial ⚠️
keras/src/backend/numpy/numpy.py 77.77% 1 Missing and 1 partial ⚠️
keras/src/backend/tensorflow/numpy.py 83.33% 1 Missing and 1 partial ⚠️
keras/src/ops/numpy.py 87.50% 1 Missing and 1 partial ⚠️
keras/api/_tf_keras/keras/ops/__init__.py 0.00% 1 Missing ⚠️
keras/api/_tf_keras/keras/ops/numpy/__init__.py 0.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #22369      +/-   ##
==========================================
+ Coverage   82.93%   82.96%   +0.02%     
==========================================
  Files         595      596       +1     
  Lines       66083    66323     +240     
  Branches    10312    10326      +14     
==========================================
+ Hits        54808    55025     +217     
- Misses       8659     8674      +15     
- Partials     2616     2624       +8     
Flag Coverage Δ
keras 82.79% <79.62%> (+0.02%) ⬆️
keras-jax 60.82% <42.59%> (+0.01%) ⬆️
keras-numpy 55.03% <48.14%> (+0.02%) ⬆️
keras-openvino 49.02% <20.37%> (-0.07%) ⬇️
keras-tensorflow 62.05% <53.70%> (+0.01%) ⬆️
keras-torch 60.86% <44.44%> (+0.01%) ⬆️


@hertschuh (Collaborator) left a comment

Thanks for adding this!

Comment on lines +2179 to +2182:

    quotient = x1 / x2
    truncated = tf.sign(quotient) * tf.math.floor(tf.math.abs(quotient))
    truncated = tf.cast(truncated, dtype)
    return x1 - truncated * x2
I think this would be a faster implementation:

    return tf.sign(x1) * tf.math.floormod(tf.abs(x1), tf.abs(x2))
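The identity behind this one-liner, fmod(x1, x2) == sign(x1) * floormod(|x1|, |x2|), can be checked with NumPy, where np.mod plays the role of tf.math.floormod (this sketch is illustrative, not part of the PR):

```python
import numpy as np

def fmod_floormod(x1, x2):
    # np.mod is NumPy's floor-mod, analogous to tf.math.floormod.
    # On non-negative operands floor-mod equals trunc-mod, so applying
    # it to |x1|, |x2| and restoring the dividend's sign yields fmod.
    return np.sign(x1) * np.mod(np.abs(x1), np.abs(x2))

x1 = np.array([-7.0, 7.0, -7.0, 7.0, 0.0])
x2 = np.array([3.0, 3.0, -3.0, -3.0, 2.0])
assert np.allclose(fmod_floormod(x1, x2), np.fmod(x1, x2))
```

This avoids the division-and-truncation round trip entirely, which is the reviewer's stated motivation for the variant.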
