Add fmod to keras.ops.numpy #22369
base: master
Changes from all commits: 990588c, 4d2c73d, fed4e4a, 3efcb9f, 78280a2, 14a3800, ef470ad
Hunk `@@ -2168,6 +2168,20 @@` (TensorFlow backend):

```python
def mod(x1, x2):
    return tf.math.mod(x1, x2)


def fmod(x1, x2):
    x1 = convert_to_tensor(x1)
    x2 = convert_to_tensor(x2)
    dtype = dtypes.result_type(x1.dtype, x2.dtype)
    if dtype == "bool":
        dtype = "int32"
    x1 = tf.cast(x1, dtype)
    x2 = tf.cast(x2, dtype)
    quotient = x1 / x2
    truncated = tf.sign(quotient) * tf.math.floor(tf.math.abs(quotient))
    truncated = tf.cast(truncated, dtype)
    return x1 - truncated * x2
```
Comment on lines +2179 to +2182

**Collaborator:** I think this would be a faster implementation:

```python
return tf.sign(x1) * tf.math.floormod(tf.abs(x1), tf.abs(x2))
```
```python
def moveaxis(x, source, destination):
    x = convert_to_tensor(x)
```
Hunk `@@ -5255,6 +5255,44 @@`:

```python
def mod(x1, x2):
    return backend.numpy.mod(x1, x2)


class Fmod(Operation):
    def call(self, x1, x2):
        return backend.numpy.fmod(x1, x2)

    def compute_output_spec(self, x1, x2):
        x1_shape = getattr(x1, "shape", [])
        x2_shape = getattr(x2, "shape", [])
        output_shape = broadcast_shapes(x1_shape, x2_shape)
        output_dtype = dtypes.result_type(
            getattr(x1, "dtype", type(x1)),
            getattr(x2, "dtype", type(x2)),
        )
        if output_dtype == "bool":
            output_dtype = "int32"
        return KerasTensor(output_shape, dtype=output_dtype)


@keras_export(["keras.ops.fmod", "keras.ops.numpy.fmod"])
def fmod(x1, x2):
    """Returns the element-wise remainder of division with truncation.

    Computes the remainder complementary to the `floor_divide` function,
    equivalent to the C library function ``fmod``. The result has the same
    sign as the dividend ``x1``. This is different from `keras.ops.mod`,
    which has the same sign as the divisor ``x2``.

    Args:
        x1: First tensor, the dividend.
        x2: Second tensor, the divisor.

    Returns:
        Output tensor, element-wise remainder with truncation.
    """
```
Comment on lines +5288 to +5290

**Contributor:** Per the Keras API design guidelines, all docstrings should include code examples. Adding an example here would help users understand the function's behavior, especially its difference from `keras.ops.mod`:

```python
    Returns:
        Output tensor, element-wise remainder with truncation.

    Examples:
    >>> x1 = keras.ops.array([-3., -2., -1., 1., 2., 3.])
    >>> x2 = keras.ops.array([2., 2., 2., 2., 2., 2.])
    >>> keras.ops.fmod(x1, x2)
    array([-1., -0., -1., 1., 0., 1.], dtype=float32)

    >>> x1 = keras.ops.array([1, 2, 3, 4, 5])
    >>> x2 = keras.ops.array([-2, -2, -2, -2, -2])
    >>> keras.ops.fmod(x1, x2)
    array([1, 0, 1, 0, 1], dtype=int32)
    """
```
```python
    if any_symbolic_tensors((x1, x2)):
        return Fmod().symbolic_call(x1, x2)
    return backend.numpy.fmod(x1, x2)


class Moveaxis(Operation):
    def __init__(self, source, destination, *, name=None):
        super().__init__(name=name)
```
**Reviewer:** The current implementation can be simplified by using `tf.trunc`. More importantly, it returns a float tensor for integer inputs, which is inconsistent with other backends and `numpy.fmod`. The result should be cast back to the original integer dtype if the inputs were integers.
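The reviewer's dtype point can be seen directly in NumPy, whose behavior the backends are expected to match: integer inputs produce an integer result, so the TF backend should cast back rather than return floats.

```python
import numpy as np

x1 = np.array([5, -5, 7], dtype=np.int32)
x2 = np.array([3, 3, -4], dtype=np.int32)

# numpy.fmod preserves the integer dtype of its inputs.
out = np.fmod(x1, x2)
print(out)        # [ 2 -2  3]
print(out.dtype)  # int32
```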