I am trying to train a TFDistilBertForTokenClassification model on a TPU in Colab. I worked around the problem of reading a TF dataset from GCS, but the model still fails to train. The key part of the error is:
```
(0) INVALID_ARGUMENT: {{function_node __inference_train_function_26618}} Detected unsupported operations when trying to compile graph tf_distil_bert_for_token_classification_cond_true_22902 on XLA_TPU_JIT: PrintV2 (No registered 'PrintV2' OpKernel for XLA_TPU_JIT devices compatible with node {{node tf_distil_bert_for_token_classification/cond/PrintV2}})
	 {{node tf_distil_bert_for_token_classification/cond/PrintV2}}
```
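For context, my setup looks roughly like this (a simplified sketch, not my exact notebook; the bucket path, label count, and parsing step are placeholders):

```python
import tensorflow as tf
from transformers import TFDistilBertForTokenClassification

# Connect to the Colab TPU and build a distribution strategy
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# TPUs can only stream input data from GCS, hence the bucket path
train_ds = tf.data.TFRecordDataset("gs://my-bucket/train.tfrecord")  # placeholder path
# ... parse the records into (features, labels) and batch them here ...

with strategy.scope():
    model = TFDistilBertForTokenClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=9  # placeholder label count
    )
    # HF TF models can compute the token-classification loss internally
    # when the dataset yields labels, so no explicit loss is passed here.
    model.compile(optimizer="adam")

model.fit(train_ds, epochs=3)  # fails with the PrintV2 error above
```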
One approach is to outside-compile the unsupported ops so they run on the CPU, by enabling soft device placement with `tf.config.set_soft_device_placement(True)`, though this comes with a potential performance penalty.
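That is, something like the following (a sketch; I believe it has to run before any computation is dispatched to the TPU):

```python
import tensorflow as tf

# Let TF place ops the TPU can't compile (e.g. PrintV2) on the CPU
# instead of failing compilation. The CPU round-trip may slow training.
tf.config.set_soft_device_placement(True)
```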
The model seems to hit an op (PrintV2) that the TPU doesn't support. Any idea how to hack around this?