How do I change the classification head of a model?

I would like to change the number of labels of a trained model. I am loading a model that was trained on 17 classes and want to adapt it to my own task. Now if I simply change the number of labels like this:

model_checkpoint = "vblagoje/bert-english-uncased-finetuned-pos"
model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=2)

I get an error saying:

RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
        size mismatch for classifier.weight: copying a param with shape torch.Size([17, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).
        size mismatch for classifier.bias: copying a param with shape torch.Size([17]) from checkpoint, the shape in current model is torch.Size([2]).

My question is: How do I replace the classification head?

Thanks a lot :hugs:

The reason is: you are trying to use a model that has already been pretrained on a particular classification task. You have to remove the last part (the classification head) of the model.

This is actually a kind of design fault too. In practice

(BERT base uncased + classification head) = new model

is your model. Now, if you want to reuse it on a different task, either use BERT base uncased directly or extract that base part from the new model.

I wanted to test whether training on the POS task results in better scores compared to using just the plain BERT base.

So my question is: How do I extract the “base” part from a trained model and add a new head?

I am not sure :slight_smile:

Hi, from what I noticed the weights you're using are already fine-tuned for token classification (the classifier has been trained for said task). I recommend you fine-tune from bert-base-uncased like this:

from transformers import AutoModelForTokenClassification
model = AutoModelForTokenClassification.from_pretrained('bert-base-uncased', num_labels=2)
# Start your own training

or, if you want to write your own as requested, with a custom classifier head:

import torch.nn as nn
from transformers import AutoModel

class PosModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.base_model = AutoModel.from_pretrained('bert-base-uncased')
        self.dropout = nn.Dropout(0.5)
        self.linear = nn.Linear(768, 2)  # BERT base outputs 768 features; 2 is your number of labels

    def forward(self, input_ids, attn_mask):
        outputs = self.base_model(input_ids, attention_mask=attn_mask)
        # You write your new head here
        outputs = self.dropout(outputs[0])  # outputs[0]: last hidden states, (batch, seq_len, 768)
        outputs = self.linear(outputs)
        return outputs

model = PosModel()
model.to('cuda')

I am assuming you're using PyTorch for this.
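A quick shape check for that head, with a dummy tensor standing in for the encoder output so nothing has to be downloaded (the 768 hidden size and 2 labels are taken from the code above):

```python
import torch
import torch.nn as nn

# Dropout + linear head as in PosModel, applied to a stand-in for outputs[0]
head = nn.Sequential(nn.Dropout(0.5), nn.Linear(768, 2))
dummy_hidden = torch.randn(4, 16, 768)  # (batch, seq_len, hidden), like BERT's last hidden states
logits = head(dummy_hidden)             # one score per token per label: (4, 16, 2)
```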

This is a good solution if you train a new model from scratch on top of an LM like bert-base-uncased. I am trying to replace the classification head of an already trained model. Running your first code with a pretrained token-classification model results in an error message (see my sample above).
Running your second code with a POS model like “vblagoje/bert-english-uncased-finetuned-pos” will add a new classification layer on top of the existing one.

Your error is basically a mismatch in the final layer, which is the classifier part:

vblagoje/bert-english-uncased-finetuned-pos - these weights are already fine-tuned for 17 classes

bert-base-uncased - this one isn't fine-tuned for anything yet, so this is what you want to use for fine-tuning on your 2 classes

Unfortunately the author of the model did not specify what he fine-tuned it on, so I have no idea myself; but this model has already been fine-tuned, therefore you cannot simply continue fine-tuning it.


Technically it should not be an issue to remove a classification head. This is the main idea of transfer learning, and people do this all the time with CNNs. I would like to check whether this works for NLP as well.

I mean, for this case here, when you load the weights it expects the linear layer to have a size of 17 but you specified 2; this is where the error comes from:

size mismatch for classifier.weight: copying a param with shape torch.Size([17, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).

Well, technically you can do that by deleting the last couple of layers after loading it, and then fine-tuning it again:

# did not test this out
import torch.nn as nn
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=17)
model.classifier = nn.Linear(768, 2)
# Run training

Well, I tried that and added two lines to set the correct number of labels:

model.classifier = nn.Linear(786,1)
model.num_labels = 2
model.config.num_labels = 2

printing the model shows that this worked.

  (dropout): Dropout(p=0.1, inplace=False)
  (classifier): Linear(in_features=786, out_features=1, bias=True)

However, running this with the trainer class results in this error:

mat1 dim 1 must match mat2 dim 0

I suspect that I missed something or broke some clever auto-configuration. That's why I wrote my question here.

model.classifier = nn.Linear(786, 1)  # you have 1 class? I think you should change it to 2
model.num_labels = 2  # while here you specify 2 classes, so it's a bit confusing

Unless you are aiming for a sigmoid on your last layer and that's why you're using 1 output, in which case I think you need to change your loss function to BCEWithLogitsLoss.
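Side note, in case it helps with that second error: the `786` in `nn.Linear(786, 1)` also looks like a typo for 768, BERT's hidden size, and that mismatch alone reproduces a `mat1`/`mat2` dimension error. A plain-torch sketch:

```python
import torch
import torch.nn as nn

hidden = torch.randn(1, 8, 768)   # BERT hidden states: last dim is the hidden size, 768
bad_head = nn.Linear(786, 2)      # 786 != 768 -> matmul shape mismatch
try:
    bad_head(hidden)
    raised = False
except RuntimeError:
    raised = True                 # the mat1/mat2 dimension error, as seen in the trainer

good_head = nn.Linear(768, 2)
logits = good_head(hidden)        # works: shape (1, 8, 2)
```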


Check this. It's neatly implemented here, and a pretty neat architecture as well.

model —> downstream_task. Well defined.

Thanks for your help! It was way too late when I wrote this :see_no_evil:

It turned out that the easiest way to solve my initial question is this:

from transformers import AutoConfig, AutoModelForTokenClassification

config = AutoConfig.from_pretrained(model_checkpoint)
config.num_labels = 2
model = AutoModelForTokenClassification.from_config(config)
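One caveat: `from_config` builds the right architecture but initializes all weights randomly, so the pretrained encoder is not actually loaded. If you want to keep the pretrained weights and only swap the head, newer `transformers` versions (assumption: 4.9 or later) accept `ignore_mismatched_sizes` in `from_pretrained`. A sketch using a tiny randomly initialized BERT saved locally, so nothing needs to be downloaded:

```python
import tempfile
from transformers import BertConfig, BertForTokenClassification

# Tiny random BERT as a stand-in for the 17-class checkpoint:
cfg = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=1,
                 num_attention_heads=2, intermediate_size=64, num_labels=17)
src = BertForTokenClassification(cfg)

with tempfile.TemporaryDirectory() as d:
    src.save_pretrained(d)
    # Reload with a 2-class head: the mismatched classifier is re-initialized,
    # everything else keeps the saved ("pretrained") weights.
    model = BertForTokenClassification.from_pretrained(
        d, num_labels=2, ignore_mismatched_sizes=True)
```

The same pattern should work with `AutoModelForTokenClassification.from_pretrained("vblagoje/bert-english-uncased-finetuned-pos", num_labels=2, ignore_mismatched_sizes=True)` when downloading the real checkpoint.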