
Add INT16 support to SPACE_TO_DEPTH kernel#6396

Open
nj-1015 wants to merge 1 commit into google-ai-edge:main from nj-1015:add-int16-space-to-depth-kernel

Conversation


nj-1015 commented Mar 14, 2026

Summary

  • Add kTfLiteInt16 to the type check in Prepare() in tflite/kernels/space_to_depth.cc
  • The Eval() function already handles 16-bit data via its TfLiteTypeGetSizeBits case 16 branch, so no changes to the execution logic are needed
  • Add INT16 test case to space_to_depth_test.cc

This enables INT16 quantized models (e.g. STATIC_WI8_AI16 from ai-edge-quantizer) to run on the LiteRT runtime.

Related PR: google-ai-edge/ai-edge-quantizer#445

Test plan

  • Add an INT16 test case to space_to_depth_test.cc, following the same pattern as the existing INT8 test

