I get an error about unmatched signatures when trying to predict with a model that has been loaded from a TF SavedModel.

Here is a code example that demonstrates this unintuitive behavior.

Save and reload the model from a TF SavedModel:
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
model.save_pretrained("model_test", saved_model=True)
model = tf.keras.models.load_model("model_test/saved_model/1")
```
Try to predict:

```python
example_texts = ["Hello, World"] * 32
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
features = dict(tokenizer.batch_encode_plus(example_texts, return_tensors="tf"))
test_ds = tf.data.Dataset.from_tensor_slices(features).batch(16)
preds = model.predict(test_ds, verbose=1)
```
Resulting error:
```
ValueError: Could not find matching function to call loaded from the SavedModel. Got:
  Positional arguments (11 total):
    * {'input_ids': <tf.Tensor 'input_ids_1:0' shape=(None, 5) dtype=int32>, 'token_type_ids': <tf.Tensor 'input_ids_2:0' shape=(None, 5) dtype=int32>, 'attention_mask': <tf.Tensor 'input_ids:0' shape=(None, 5) dtype=int32>}
    * None
    * None
    * None
    * None
    * None
    * None
    * None
    * None
    * None
    * False
  Keyword arguments: {}

Expected these arguments to match one of the following 2 option(s):

Option 1:
  Positional arguments (11 total):
    * {'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids/input_ids')}
    * None
    * None
    * None
    * None
    * None
    * None
    * None
    * None
    * None
    * True
  Keyword arguments: {}

Option 2:
  Positional arguments (11 total):
    * {'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids/input_ids')}
    * None
    * None
    * None
    * None
    * None
    * None
    * None
    * None
    * None
    * False
  Keyword arguments: {}
```
This can be solved by performing inference the following way:

```python
preds = model.signatures["serving_default"](**features)
```
But I would still like to understand the error above! It would be great if you could point me in the right direction, @sgugger.
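In case it helps narrow things down: I believe the same mismatch can be reproduced without `transformers` at all. A minimal sketch (the `Toy` module and its dict keys are my own invention, not part of the HF model): a `tf.function` inside a SavedModel only records the input structures it was actually traced with before saving, so the restored function rejects a dict containing keys it never saw, even if the Python body would have ignored them.

```python
import tempfile

import tensorflow as tf


# Hypothetical stand-in for the HF model: a tf.function taking a dict of features.
class Toy(tf.Module):
    @tf.function
    def __call__(self, inputs):
        # The Python body only reads input_ids, but the SavedModel records
        # the exact dict structure of each trace.
        return tf.cast(inputs["input_ids"], tf.float32) * 2.0


path = tempfile.mkdtemp()
model = Toy()
model({"input_ids": tf.ones((1, 5), tf.int32)})  # trace with input_ids only
tf.saved_model.save(model, path)

reloaded = tf.saved_model.load(path)

# Matching the traced structure works:
out = reloaded({"input_ids": tf.ones((1, 5), tf.int32)})

# A dict with an extra key was never traced, so no saved signature matches:
try:
    reloaded({"input_ids": tf.ones((1, 5), tf.int32),
              "attention_mask": tf.ones((1, 5), tf.int32)})
    matched = True
except (ValueError, TypeError):
    matched = False
print(matched)
```

If that is the mechanism, it would explain why the traceback lists only `input_ids` in the expected options while my call passed all three tokenizer outputs.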