@@ -15,6 +15,11 @@ and they perform reduction by default when used in a standalone way (see details
 
 {{toc}}
 
+---
+
+## Base Loss API
+
+{{autogenerated}}
 
 ---
 
@@ -74,8 +79,9 @@ A loss is a callable with arguments `loss_fn(y_true, y_pred, sample_weight=None)
 By default, loss functions return one scalar loss value per input sample, e.g.
 
 ```
->>> keras.losses.mean_squared_error(tf.ones((2, 2,)), tf.zeros((2, 2)))
-<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 1.], dtype=float32)>
+>>> from keras import ops
+>>> keras.losses.mean_squared_error(ops.ones((2, 2,)), ops.zeros((2, 2)))
+<Array: shape=(2,), dtype=float32, numpy=array([1., 1.], dtype=float32)>
 ```
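The `sample_weight` argument in the signature above scales each sample's loss before any reduction is applied. Below is a minimal NumPy sketch of that weighting, not the actual Keras implementation; the weight values are made up for illustration:

```python
import numpy as np

y_true = np.ones((2, 2), dtype=np.float32)
y_pred = np.zeros((2, 2), dtype=np.float32)
# Hypothetical per-sample weights, chosen only for illustration.
sample_weight = np.array([0.5, 2.0], dtype=np.float32)

# One mean-squared-error value per sample.
per_sample = np.mean(np.square(y_true - y_pred), axis=-1)

# Each sample's loss is multiplied by its weight before any reduction.
weighted = per_sample * sample_weight  # array([0.5, 2.], dtype=float32)
```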
 
 However, loss class instances feature a `reduction` constructor argument,
@@ -89,18 +95,18 @@ which defaults to `"sum_over_batch_size"` (i.e. average). Allowable values are
 
 ```
 >>> loss_fn = keras.losses.MeanSquaredError(reduction='sum_over_batch_size')
->>> loss_fn(tf.ones((2, 2,)), tf.zeros((2, 2)))
-<tf.Tensor: shape=(), dtype=float32, numpy=1.0>
+>>> loss_fn(ops.ones((2, 2,)), ops.zeros((2, 2)))
+<Array: shape=(), dtype=float32, numpy=1.0>
 ```
 ```
 >>> loss_fn = keras.losses.MeanSquaredError(reduction='sum')
->>> loss_fn(tf.ones((2, 2,)), tf.zeros((2, 2)))
-<tf.Tensor: shape=(), dtype=float32, numpy=2.0>
+>>> loss_fn(ops.ones((2, 2,)), ops.zeros((2, 2)))
+<Array: shape=(), dtype=float32, numpy=2.0>
 ```
 ```
 >>> loss_fn = keras.losses.MeanSquaredError(reduction='none')
->>> loss_fn(tf.ones((2, 2,)), tf.zeros((2, 2)))
-<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 1.], dtype=float32)>
+>>> loss_fn(ops.ones((2, 2,)), ops.zeros((2, 2)))
+<Array: shape=(2,), dtype=float32, numpy=array([1., 1.], dtype=float32)>
 ```
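For reference, the three `reduction` modes can be reproduced by hand. This NumPy sketch mirrors the semantics only (the values match the doctest outputs above), not the actual Keras implementation:

```python
import numpy as np

# Same inputs as the doctests: all-ones targets, all-zeros predictions.
y_true = np.ones((2, 2), dtype=np.float32)
y_pred = np.zeros((2, 2), dtype=np.float32)

# Per-sample loss: squared error averaged over the last axis.
per_sample = np.mean(np.square(y_true - y_pred), axis=-1)

# "sum_over_batch_size": average of the per-sample values.
avg = per_sample.sum() / per_sample.size  # 1.0

# "sum": plain sum of the per-sample values.
total = per_sample.sum()  # 2.0

# "none": the unreduced per-sample values.
unreduced = per_sample  # array([1., 1.], dtype=float32)
```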
 
 Note that this is an important difference between loss functions like `keras.losses.mean_squared_error`
@@ -109,13 +115,13 @@ does not perform reduction, but by default the class instance does.
 
 ```
 >>> loss_fn = keras.losses.mean_squared_error
->>> loss_fn(tf.ones((2, 2,)), tf.zeros((2, 2)))
-<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 1.], dtype=float32)>
+>>> loss_fn(ops.ones((2, 2,)), ops.zeros((2, 2)))
+<Array: shape=(2,), dtype=float32, numpy=array([1., 1.], dtype=float32)>
 ```
 ```
 >>> loss_fn = keras.losses.MeanSquaredError()
->>> loss_fn(tf.ones((2, 2,)), tf.zeros((2, 2)))
-<tf.Tensor: shape=(), dtype=float32, numpy=1.0>
+>>> loss_fn(ops.ones((2, 2,)), ops.zeros((2, 2)))
+<Array: shape=(), dtype=float32, numpy=1.0>
 ```
 
 When using `fit()`, this difference is irrelevant since reduction is handled by the framework.