
Shifted segmentation mask output when converting from keras to onnx models #2366

Open
@kgossage

Description

Describe the bug
I trained a U-Net-style segmentation model using the exact model-generation code found here: https://keras.io/examples/vision/oxford_pets_image_segmentation/
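
For reference, the decoder half of the model in that example is built roughly as follows (paraphrased from the linked page, which should be treated as authoritative; `x` and `previous_block_activation` carry over from the encoder half). The Conv2DTranspose/UpSampling2D pairs here are the layers I suspect:

```python
from tensorflow.keras import layers

# Decoder blocks, paraphrased from the linked Keras example.
# `x` and `previous_block_activation` come from the encoder half.
for filters in [256, 128, 64, 32]:
    x = layers.Activation("relu")(x)
    x = layers.Conv2DTranspose(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)

    x = layers.Activation("relu")(x)
    x = layers.Conv2DTranspose(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)

    x = layers.UpSampling2D(2)(x)

    # Project the residual and add it back in
    residual = layers.UpSampling2D(2)(previous_block_activation)
    residual = layers.Conv2D(filters, 1, padding="same")(residual)
    x = layers.add([x, residual])
    previous_block_activation = x
```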

When run with the Keras model, the segmentation mask lines up correctly with the input RGB image (640x640x3 input, 2 output classes). With the converted ONNX model, the mask is shifted by 15 pixels in each dimension (ONNX coordinates are smaller than the Keras coordinates by 15 pixels). I've converted the model both with the tf2onnx.convert console command and with tf2onnx.convert.from_keras in Python; both produce the same shifted output, and opsets 12 through 18 all behave identically. This U-Net-style model takes the image from 640x640 down to 40x40 before scaling back up to 640x640, i.e. a factor of 16 in each dimension across four 2x upsampling stages. I suspect one of the upsampling layers is erroneously shifting (or, more likely, failing to shift) the feature map at each step; a one-pixel offset introduced at each of the four 2x stages would compound to 1 + 2 + 4 + 8 = 15 pixels, matching the observed shift.
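
Below is a minimal sketch of the conversion-and-comparison path (the model path is a placeholder, a random tensor stands in for a real image, and opset 17 is just one of the 12-18 range tried):

```python
import numpy as np
import tensorflow as tf
import tf2onnx
import onnxruntime as ort

model = tf.keras.models.load_model("unet_pets.keras")  # placeholder path

# In-process conversion; the console route
# (python -m tf2onnx.convert --saved-model ... --output unet_pets.onnx)
# produced the same shifted output.
spec = (tf.TensorSpec((1, 640, 640, 3), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, opset=17,
                           output_path="unet_pets.onnx")

x = np.random.rand(1, 640, 640, 3).astype(np.float32)
keras_mask = np.argmax(model.predict(x), axis=-1)[0]           # (640, 640)

sess = ort.InferenceSession("unet_pets.onnx",
                            providers=["CPUExecutionProvider"])
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]   # NHWC output
onnx_mask = np.argmax(onnx_out, axis=-1)[0]

# Brute-force the integer (dy, dx) shift that best aligns the two masks;
# with the behavior described above this lands near (15, 15).
best = max(((dy, dx,
             (np.roll(onnx_mask, (dy, dx), axis=(0, 1)) == keras_mask).mean())
            for dy in range(-20, 21) for dx in range(-20, 21)),
           key=lambda t: t[2])
print("best (dy, dx, agreement):", best)
```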

Urgency
ASAP

System information

  • OS Platform and Distribution: Ubuntu 22.04.4 LTS
  • TensorFlow version: 2.15.1
  • Python version: 3.9.19
  • ONNX version: 1.17.0
  • tf2onnx version: 1.16.1/15c810

Labels: bug