Description
I am trying to convert a toy model containing a tensorflow.keras.layers.ConvLSTM2D layer to ONNX with tf2onnx. The TF 2 model definition is the following:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class EncoderCell(keras.Model):
    def __init__(self):
        super(EncoderCell, self).__init__()
        self.rnn1 = layers.ConvLSTM2D(
            filters=64,
            kernel_size=(3, 3),
            strides=(2, 2),
            padding='same',
            return_state=True)

    def call(self, inputs):
        # inputs[0]: image sequence, inputs[1]: stacked initial hidden/cell states
        x = self.rnn1(inputs[0], initial_state=[inputs[1][:, 0], inputs[1][:, 1]])
        return x

model = EncoderCell()
img = tf.random.uniform(shape=[1, 1, 256, 384, 32])
state1 = tf.zeros((1, 2, 512 // 4, 768 // 4, 64))
y = model.predict([img, state1])
model.save("conv2DLSTM_toy_model")
After running the script with TF 2.6 and saving the model, I run the following on the command line:
python -m tf2onnx.convert --saved-model conv2DLSTM_toy_model --opset 10 --output onnx_models/conv2DLSTM_toy_model_opset10.onnx
With tf2onnx v1.9.2 I receive the error message ValueError: graph output encoder_cell/conv_lst_m2d/while/Identity_4:0 not exist.
With v1.10.0 from GitHub, the error no longer occurs. However, with both versions the resulting ONNX graph looks like the following:
Problem: input_1 is disconnected from the ConvLSTM2D layer / Loop operator, and therefore I cannot compile the model with TensorRT. It does not matter whether ONNX opset 10, 11, 12, or 13 is used.
Apparently, the conversion of the ConvLSTM2D layer to ONNX does not work properly. Do you plan to fix this issue in the near future?
The test scripts in the tests folder of the GitHub repository also do not include a test for the TF 2 ConvLSTM2D layer.