I am currently working on Kaggle’s Contradictory, My Dear Watson competition, in which the task is to classify the relationship between two sentences, a premise and a hypothesis, as entailment, neutral, or contradiction.
For this task, I’ve decided to use both XLMRobertaTokenizer and TFXLMRobertaForSequenceClassification from the Transformers library.
The problem is that, when fine-tuning the model, I’m stuck at around 33% accuracy on both the training and validation data, which is chance level for the three classes. And as soon as I finish training and predict on the test set, I can see that the model is assigning every single sample to the same class.
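This is how I confirmed the collapse: I checked the distribution of predicted classes over the test set. Here is a minimal sketch of that check, with dummy logits standing in for my model’s actual output:

```python
import numpy as np

# Dummy logits standing in for the model's output on the test set;
# every row happens to favor class 0, mimicking the collapse I see.
logits = np.array([
    [2.1, 0.3, 0.1],
    [1.8, 0.5, 0.2],
    [2.4, 0.1, 0.4],
])

# Predicted class per sample, then a count per class.
preds = np.argmax(logits, axis=1)
classes, counts = np.unique(preds, return_counts=True)
print(dict(zip(classes.tolist(), counts.tolist())))  # → {0: 3}: one class only
```

On my real predictions the result looks just like this: a single class soaking up 100% of the samples.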
I have tried a lot of things to make this work! I’ve tried switching to PyTorch, different preprocessing techniques, different batch sizes, etc. Nothing has helped!
I’ve noticed others on Kaggle with the same issue, but they don’t seem to have found a solution either. I have also tried asking around on Discord, with no success so far.
For this reason, I’m posting on this forum for the first time to see if someone can help me find out what is wrong with my model.
You can see the full modeling code in this Kaggle notebook: [NLP] Contradictory, My Dear Watson🕵️ | Kaggle
I’m also attaching the loss-per-epoch plots so you can see how the model behaves during training.
Does anybody have any idea what might be wrong?
Thank you in advance!
Luis Fernando Torres