Unable to apply transfer learning to certain models

Hello Hugging Face community - as an NLP researcher I am impressed with Hugging Face. I am currently looking at sentiment analysis of tweets. I began with a bert-base-cased model and fine-tuned it on a labelled dataset of 5,000 tweets to classify sentiment as positive, negative, or neutral. However, when I tried to fine-tune NLPtown's cased BERT sentiment model instead, I received the following error:

Epoch 1 / 20

ValueError                                Traceback (most recent call last)
in <module>()
     18
     19 #train model
---> 20 train_loss, _ = train()
     21
     22 #evaluate model

2 frames

in forward(self, sent_id, mask)
     33
     34 #pass the inputs to the model
---> 35 _, cls_hs = self.bert(sent_id, attention_mask=mask)
     36
     37 #print("Output width of this transformer is", cls_hs.shape[1])

ValueError: not enough values to unpack (expected 2, got 1)
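
For context, the forward method in the traceback belongs to a small custom classification head wrapped around the pretrained model. Below is a minimal sketch of the pattern (class and variable names are illustrative rather than my exact code); it trains without complaint when self.bert is a plain bert-base-cased AutoModel:

import torch.nn as nn
from transformers import AutoModel

class BERT_Arch(nn.Module):          # illustrative name, not my exact code
    def __init__(self, bert):
        super().__init__()
        self.bert = bert             # pretrained transformer backbone
        self.fc = nn.Linear(768, 3)  # positive / negative / neutral
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, sent_id, mask):
        # pass the inputs to the model (line 35 of the traceback)
        _, cls_hs = self.bert(sent_id, attention_mask=mask)
        return self.softmax(self.fc(cls_hs))

model = BERT_Arch(AutoModel.from_pretrained('bert-base-cased'))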

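One observation, in case it helps with diagnosis: the number of values the checkpoint returns seems to depend on which Auto class loads it. A quick hypothetical check along these lines (the model id is my guess at NLPtown's checkpoint - substitute whichever one is correct):

import torch
from transformers import (AutoModel, AutoModelForSequenceClassification,
                          AutoTokenizer)

# model id is my guess at the NLPtown checkpoint, not confirmed
name = 'nlptown/bert-base-multilingual-uncased-sentiment'
tok = AutoTokenizer.from_pretrained(name)
enc = tok('what a great day', return_tensors='pt')

for loader in (AutoModel, AutoModelForSequenceClassification):
    model = loader.from_pretrained(name)
    with torch.no_grad():
        out = model(enc['input_ids'], attention_mask=enc['attention_mask'])
    # AutoModel returns (last_hidden_state, pooler_output);
    # AutoModelForSequenceClassification returns only the logits
    print(loader.__name__, '->', len(out), 'value(s)')

If the sentiment checkpoint only hands back one value, that would explain the failed unpacking, but I am not sure what the correct way to fine-tune it further would be.
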
The Python code which previously ran seamlessly on the base BERT model no longer functions. I would appreciate some guidance on this issue and am happy to provide more details as necessary. For the moment I shall simply state that the code is Python 3.8 running in a Colab notebook from a Windows 10 laptop - thanks in advance! Mark