Target size (torch.Size([16])) must be the same as input size (torch.Size([16, 9]))

I am new to Hugging Face models and deep learning in general. I searched online and found that the above error occurs because I am getting 9 columns of predictions whereas I should have just one. The exact error is:

File "train.py", line 107, in <module>
    trainer.train()
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1272, in train
    tr_loss += self.training_step(model, inputs)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1732, in training_step
    loss = self.compute_loss(model, inputs)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1766, in compute_loss
    outputs = model(**inputs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 756, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/transformers/models/bert/modeling_bert.py", line 1540, in forward
    loss = loss_fct(logits, labels)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 756, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/smdebug.py", line 72, in run
    return_value = function(*args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 632, in forward
    reduction=self.reduction)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 2580, in binary_cross_entropy_with_logits
    raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
ValueError: Target size (torch.Size([16])) must be the same as input size (torch.Size([16, 9]))
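
For what it is worth, the shape mismatch can be reproduced outside the Trainer with plain PyTorch (just a minimal sketch; the 16 and 9 mirror my batch size and number of classes):

import torch
import torch.nn.functional as F

logits = torch.randn(16, 9)                   # model output: batch of 16, 9 classes
labels = torch.randint(0, 9, (16,)).float()   # shape [16], one label per example

# binary_cross_entropy_with_logits requires the target to have the same shape
# as the input, so this raises the same ValueError as in the traceback above.
F.binary_cross_entropy_with_logits(logits, labels)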

I am following the tutorial in lab_1_default_training.ipynb from the philschmid/huggingface-sagemaker-workshop-series repository on GitHub.

The model I am using is microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract; in the tutorial they used a BERT model.

Everything else is the same except the way the dataset is prepared. Their dataset has the following format:

train_dataset.features
{'text': Value(dtype='string', id=None),
 'label': ClassLabel(num_classes=6, names=['sadness', 'joy', 'love', 'anger', 'fear', 'surprise'], names_file=None, id=None)}

Whereas in my case I have:

train_dataset.features
{'labels': Value(dtype='float64', id=None),
 'text': Value(dtype='string', id=None)}

Is the labels column causing the issue? I have 9 unique classes, which is why I am getting a tensor of shape [16, 9] (the tutorial has 6 unique classes). My training batch size is 16. The classes are just the strings '1' to '9'.
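
If the float64 labels are what is pushing the model into the multi-label loss (BCEWithLogitsLoss), I am guessing something like the following would turn them into integer class ids. This is only a sketch assuming the label values really are 1.0 to 9.0 and a reasonably recent datasets version; the names list is my own:

from datasets import ClassLabel

# Guess at a fix: shift the float labels 1.0 ... 9.0 down to integer ids 0 ... 8,
# then cast the column to ClassLabel so the model treats this as single-label
# classification (CrossEntropyLoss) rather than multi-label (BCEWithLogitsLoss).
train_dataset = train_dataset.map(lambda example: {"labels": int(example["labels"]) - 1})
train_dataset = train_dataset.cast_column(
    "labels", ClassLabel(names=[str(i) for i in range(1, 10)])
)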

I am having the same error. Did you manage to solve it?