Commit 4538375

format
1 parent d77f7bf commit 4538375

File tree

3 files changed: +7 −13 lines


guides/ipynb/writing_quantization_compatible_layers.ipynb

Lines changed: 1 addition & 2 deletions

@@ -249,8 +249,7 @@
 "back to floating-point.\n",
 "\n",
 "The base `keras.Layer` class automatically dispatches to this method when the\n",
-"layer is quantized. Your regular call() method will be used for the\n",
-"full-precision forward pass.\n",
+"layer is quantized, without requiring you to wire it up manually.\n",
 "\n",
 "The INT8 path mirrors the float computation `y = x * w` but performs:\n",
 "\n",
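The paragraph changed in this hunk says the base `keras.Layer` class automatically dispatches to the INT8 method when the layer is quantized, while the regular `call()` handles the full-precision pass. A minimal plain-Python sketch of that routing pattern, not the actual Keras implementation (the `quantization_mode` attribute and `Doubler` class are made up for illustration):

```python
# Illustrative-only sketch of the dispatch described in the guide text:
# the real routing lives inside keras.Layer. Names below are hypothetical.
class Layer:
    def __init__(self):
        self.quantization_mode = None  # e.g. set to "int8" after quantization

    def __call__(self, x):
        # Route to the quantized kernel when the layer has been quantized,
        # otherwise fall back to the full-precision call().
        if self.quantization_mode == "int8":
            return self._int8_call(x)
        return self.call(x)


class Doubler(Layer):
    def call(self, x):
        self.last_path = "fp32"  # full-precision forward pass
        return 2.0 * x

    def _int8_call(self, x):
        self.last_path = "int8"  # quantized forward pass (same math for brevity)
        return 2.0 * x


layer = Doubler()
print(layer(3.0), layer.last_path)   # 6.0 fp32
layer.quantization_mode = "int8"
print(layer(3.0), layer.last_path)   # 6.0 int8
```

The user never calls `_int8_call` directly; flipping the mode is enough for the base class to reroute, which is the "without requiring you to wire it up manually" behavior the new wording describes.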

guides/md/writing_quantization_compatible_layers.md

Lines changed: 5 additions & 9 deletions

@@ -175,8 +175,7 @@ quantized variables allocated in `_int8_build(...)` and de-scales the output
 back to floating-point.
 
 The base `keras.Layer` class automatically dispatches to this method when the
-layer is quantized. Your regular call() method will be used for the
-full-precision forward pass.
+layer is quantized, without requiring you to wire it up manually.
 
 The INT8 path mirrors the float computation `y = x * w` but performs:
 
@@ -294,8 +293,8 @@ print("SimpleScale INT8 sample:", y_int8[0].numpy())
 
 <div class="k-default-codeblock">
 ```
-SimpleScale FP32 sample: [ 0.00074363 -0.02807784 -0.0032404 -0.03456082]
-SimpleScale INT8 sample: [ 0.00074166 -0.0279077 -0.00322246 -0.03456089]
+SimpleScale FP32 sample: [-0.00359688 0.00296069 -0.00846314 0.00070467]
+SimpleScale INT8 sample: [-0.00359092 0.00290875 -0.00846319 0.00070462]
 ```
 </div>
 
@@ -548,11 +547,8 @@ print("Loaded INT8 sample:", y_loaded[0].numpy())
 
 <div class="k-default-codeblock">
 ```
-SimpleScale INT8 sample: [-0.00047286 0.02926966 -0.00708966 0.03041461]
-Loaded INT8 sample: [-0.00047286 0.02926966 -0.00708966 0.03041461]
-
-/Users/jyotindersingh/miniconda3/envs/keras-io-env-3.12/lib/python3.12/site-packages/keras/src/models/model.py:472: UserWarning: Layer InputLayer does not have a `quantize` method implemented.
-  warnings.warn(str(e))
+SimpleScale INT8 sample: [0.00825868 0.01510935 0.02154977 0.00205997]
+Loaded INT8 sample: [0.00825868 0.01510935 0.02154977 0.00205997]
 ```
 </div>
 

guides/writing_quantization_compatible_layers.py

Lines changed: 1 addition & 2 deletions

@@ -174,8 +174,7 @@ def _int8_build(self, kernel_shape):
 back to floating-point.
 
 The base `keras.Layer` class automatically dispatches to this method when the
-layer is quantized. Your regular call() method will be used for the
-full-precision forward pass.
+layer is quantized, without requiring you to wire it up manually.
 
 The INT8 path mirrors the float computation `y = x * w` but performs:
 
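The guide text retained across all three files says the INT8 path mirrors the float computation `y = x * w` while working on quantized operands and de-scaling the result. A self-contained NumPy sketch of one way such a path can work, assuming symmetric per-tensor quantization (the helper below is hypothetical, not the Keras API or the guide's `SimpleScale` code):

```python
import numpy as np


def quantize_per_tensor(t, num_bits=8):
    # Symmetric per-tensor quantization: one float scale maps the tensor's
    # full range onto signed integers. (Hypothetical helper for illustration.)
    qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
    scale = np.max(np.abs(t)) / qmax
    q = np.clip(np.round(t / scale), -qmax, qmax).astype(np.int8)
    return q, scale


# Float reference: elementwise y = x * w, as in the guide's example layer.
rng = np.random.default_rng(0)
x = rng.normal(size=4).astype(np.float32)
w = rng.normal(size=4).astype(np.float32)
y_fp32 = x * w

# INT8 path: quantize both operands, multiply in integer arithmetic
# (widened to int32 to avoid overflow), then de-scale back to float.
x_q, x_scale = quantize_per_tensor(x)
w_q, w_scale = quantize_per_tensor(w)
y_int8 = (x_q.astype(np.int32) * w_q.astype(np.int32)) * (x_scale * w_scale)

# The two paths agree up to quantization error, like the FP32/INT8
# sample pairs shown in the diff's output blocks.
print("max abs error:", np.max(np.abs(y_fp32 - y_int8)))
```

The small elementwise discrepancy this prints is the same effect visible in the updated output samples above, where the FP32 and INT8 rows match to roughly four decimal places.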
