Inference error when loading a previously trained and saved model

I’m trying to find a solution to an unusual error that occurs during inference on my trained model. My process begins by creating a model as follows:

from transformers import AutoConfig, AutoModelForSequenceClassification

# Map my two classes to ids and build the inverse mapping
label2id = {"not subject": 0, "entail subject": 1}
id2label = {y: x for x, y in label2id.items()}

config = AutoConfig.from_pretrained("bert-large-uncased")
config.label2id = label2id
config.id2label = id2label
config.num_labels = len(label2id)
model = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased", config=config)

I won’t replicate all of the code here, but it then tokenizes the dataset, defines a metrics function and training arguments, builds a trainer, and trains the model. My training data is a collection of text lines, each labeled 0 or 1. Roughly, with placeholder field names and paths, it looks like this:
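from transformers import AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")

def tokenize_function(examples):
    # "text" stands in for my actual text field name
    return tokenizer(examples["text"], truncation=True)

tokenized_train = train_dataset.map(tokenize_function, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="/path/to/checkpoints"),
    train_dataset=tokenized_train,
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
    compute_metrics=compute_metrics,  # my metrics function, omitted here
)
trainer.train()

When the training is completed, I save the model using the following: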

model.save_pretrained('/path/to/save/folder')

Then I load my validation dataset and create a classifier with the following:

from transformers import pipeline

classifier = pipeline('zero-shot-classification', tokenizer=tokenizer, model=model)

and collect predictions for each line of my validation dataset with:

from transformers.pipelines.pt_utils import KeyDataset

for out in classifier(KeyDataset(validation_dataset, 'field_with_text_for_inference'), candidate_labels=["not subject", "entail subject"]):
    ...  # process each prediction (body omitted)

So here’s the weirdness that I can’t seem to figure out. If I perform all of the above steps in my notebook in sequence (load and tokenize the datasets, create the training arguments and trainer, train the model, then run inference), everything works perfectly and I get the results I expect. If, however, I skip the training steps and simply load my trained model from the save directory and run the validation data through it, I get KeyError: 'logits' every time. I’ve tried everything I can think of, searched previous questions, and Googled the error, but I can’t find an answer. It feels like my model backup isn’t saving all the details required for inference; I can’t think of any other reason why everything would work when the model in memory was just trained but fail when it’s restored from disk.
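When restoring from disk, I load the model like this (the only step that differs from the working run):

from transformers import BertModel

model = BertModel.from_pretrained('/path/to/save/folder')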

I do also get the following warnings when loading my model from disk:

Some weights of the model checkpoint at /path/to/save/folder were not used when initializing BertModel: ['classifier.weight', 'classifier.bias']
- This is expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This is NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

Any help would be greatly appreciated.

Closing out this issue. For anyone who runs into this problem themselves: the issue is restoring the model with the proper model class. Depending on what you trained your model for, you cannot simply use the BertModel.from_pretrained() method; you need to restore it with the class specific to your model. In my case the zero-shot classifier is a BertForSequenceClassification instance (you can see which architecture yours is in your config.json), so I have to use the BertForSequenceClassification.from_pretrained() method to restore my model from a backup. In hindsight it’s pretty obvious, but if you are new to the API it can be somewhat less so.
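Concretely, the working reload looks like the sketch below. The tokenizer wasn’t fine-tuned, so re-creating it from the base checkpoint works here; saving it alongside the model with tokenizer.save_pretrained() would do equally well:

from transformers import AutoTokenizer, BertForSequenceClassification, pipeline

# Restore with the task-specific class listed under "architectures" in config.json
model = BertForSequenceClassification.from_pretrained('/path/to/save/folder')
tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased')

classifier = pipeline('zero-shot-classification', tokenizer=tokenizer, model=model)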