Using Trainer at inference time

Normally, the Trainer saves your trained model to a directory. You specify this directory with the output_dir argument when instantiating TrainingArguments.

You can then instantiate your trained model using the .from_pretrained() method. Suppose you have fine-tuned a BertForSequenceClassification model; you can then instantiate it as follows:

from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("path_to_the_directory")

You can then make batched predictions as follows (this assumes the tokenizer was also saved to the same directory, e.g. with tokenizer.save_pretrained):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("path_to_the_directory")

import torch

text = ["this is one sentence", "this is another sentence"]
# padding/truncation are needed so sentences of different lengths can be batched
encoding = tokenizer(text, padding=True, truncation=True, return_tensors="pt")

# forward pass (no gradients needed at inference time)
with torch.no_grad():
    outputs = model(**encoding)
predictions = outputs.logits.argmax(-1)
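If you want human-readable labels rather than class indices, you can map the predicted ids through the model's id2label config. A minimal sketch with made-up logits (at inference time you would use outputs.logits and model.config.id2label instead of the stand-ins below):

```python
import torch

# stand-in for outputs.logits from the forward pass above
logits = torch.tensor([[-1.2, 2.3], [0.5, -0.7]])
predictions = logits.argmax(-1)

# stand-in for model.config.id2label, which is set during fine-tuning
id2label = {0: "NEGATIVE", 1: "POSITIVE"}
labels = [id2label[i] for i in predictions.tolist()]
# labels is now ["POSITIVE", "NEGATIVE"]
```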