Description
Running into the following RuntimeError when running the adanet_tpu tutorial with TF 2.2 and AdaNet 0.9.0:

```
RuntimeError: All tensors outfed from TPU should preserve batch size dimension, but got scalar Tensor("OutfeedDequeueTuple:1", shape=(), dtype=int64, device=/job:tpu_worker/task:0/device:CPU:0)
```
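For context, this error is usually raised by `TPUEstimator` when a `host_call` (or eval metric) function receives a scalar tensor: every tensor outfed from the TPU must keep a leading batch dimension, and the common workaround is to reshape scalars such as the global step with `tf.reshape(t, [1])` before passing them to the outfeed. The shape logic is sketched below with NumPy so it runs without a TPU; `ensure_batch_dim` is a hypothetical helper, not part of AdaNet or TensorFlow.

```python
import numpy as np

# Hypothetical helper (not part of AdaNet): give scalar values a leading
# batch dimension of 1, mirroring the usual tf.reshape(t, [1]) workaround
# applied inside a TPUEstimator host_call before outfeeding.
def ensure_batch_dim(value):
    arr = np.asarray(value)
    if arr.ndim == 0:          # scalar -> would trigger the RuntimeError
        return arr.reshape(1)  # shape (1,) satisfies the outfeed check
    return arr                 # already batched, leave untouched

print(ensure_batch_dim(np.int64(1234)).shape)    # (1,)
print(ensure_batch_dim(np.zeros((8, 4))).shape)  # (8, 4)
```

In TF terms the equivalent one-liner inside the `host_call` would be `gs = tf.reshape(global_step, [1])`, assuming the offending scalar is the global step (the `int64` dtype in the traceback is consistent with that, but this is an assumption).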
I have made only minor changes to the original tutorial code, i.e. replacing `tf.contrib` modules with their `tf.compat.v1` equivalents where applicable, as shown in this Google Colab: https://colab.research.google.com/drive/1IVwzPL50KcxkNczaEXBQwCFdZE2kDEde
I hit the same issue on TF 2 with the previous AdaNet 0.8.0 release when training my own models on Cloud TPUs in a GCP project. Further details are on Stack Overflow here: https://stackoverflow.com/questions/62266321/tensorflow-2-1-using-tpuestimator-runtimeerror-all-tensors-outfed-from-tpu-sho
I am looking to establish whether I am missing something in the migration to TF 2 with AdaNet.