Hi there,
Trying to use TextVectorization (see the Colab experiment) on the hardware mentioned above fails with this error:
InternalError: {{function_node __wrapped__Cast_device_/job:localhost/replica:0/task:0/device:GPU:0}} 'cuLaunchKernel(function, gridX, gridY, gridZ, blockX, blockY, blockZ, 0, reinterpret_cast<CUstream>(stream), params, nullptr)' failed with 'CUDA_ERROR_INVALID_HANDLE' [Op:Cast] name:
A possible workaround is to hedge one's bets and always run tokenization on the CPU, but since older code worked without this caveat, I assume you'd want to keep it that way?
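For reference, this is the CPU-pinning workaround I have in mind, sketched with a made-up toy corpus (the layer name and `tf.device` usage are standard TensorFlow; everything else here is illustrative, not my actual Colab code):

```python
import tensorflow as tf

# Toy corpus standing in for the real data in the Colab experiment.
texts = tf.constant(["the quick brown fox", "jumped over the lazy dog"])

# Pin both vocabulary building and tokenization to the CPU so the string
# ops never dispatch to the GPU kernel that raises CUDA_ERROR_INVALID_HANDLE.
with tf.device("/CPU:0"):
    vectorizer = tf.keras.layers.TextVectorization(output_mode="int")
    vectorizer.adapt(texts)
    tokens = vectorizer(texts)

# Batch of 2 sentences, padded to the longest sequence (5 tokens).
print(tokens.shape)
```

Downstream layers can still run on the GPU; only the text-to-int step is forced onto the CPU here.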
Thanks in advance!