Fine-Tuning bart-large-mnli on Only Entailments

Hi everyone! This is my first question on the forum, so please excuse any mistakes with formatting.

So, I’m currently working on a project which requires me to fine-tune bart-large-mnli on a custom dataset, and then use the fine-tuned model for zero-shot classification.

My custom dataset has only entailments (125 per class for 5 classes = 625 entailments). I’m fairly sure my fine-tuning code is correct, but I’m getting very poor results after fine-tuning: the un-fine-tuned BART was both more confident and more frequently accurate than my fine-tuned BART. I’m very new to working with LLMs, and I have a couple of suspicions.
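For context, here’s roughly how I’m building the entailment-only training pairs (a minimal sketch: the class names are placeholders for my real labels, the hypothesis template mirrors the zero-shot pipeline’s default, and I’m assuming bart-large-mnli’s label mapping of 0 = contradiction, 1 = neutral, 2 = entailment):

```python
# Assumed bart-large-mnli label ids: 0 = contradiction, 1 = neutral, 2 = entailment
ENTAILMENT_ID = 2

# Placeholder class names standing in for my 5 real classes
CLASSES = ["class_1", "class_2", "class_3", "class_4", "class_5"]

def make_entailment_example(text, true_label):
    """Pair the text (premise) with a hypothesis naming its true class."""
    return {
        "premise": text,
        "hypothesis": f"This example is {true_label}.",
        "label": ENTAILMENT_ID,
    }

example = make_entailment_example("some document text", CLASSES[0])
```

125 of these per class across 5 classes gives the 625 rows mentioned above.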

  1. Is it a problem to fine-tune only on entailments? Is that the issue? Do I also need to include an equal number of contradictions to see better performance after fine-tuning?

  2. Do I need to expand my dataset? Is it too small? I am working in a few-shot setting, after all.

  3. What should the num_labels parameter be set to? Since I’m fine-tuning only on entailments in my current setting, do I need to change its value?
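On (1): if contradictions turn out to be necessary, I assume they’d be built by pairing each text with a hypothesis naming a *wrong* class, something like this (the helper name and template are my own, not from any library):

```python
import random

# Assumed bart-large-mnli label ids: 0 = contradiction, 1 = neutral, 2 = entailment
CONTRADICTION_ID = 0

def make_contradiction_example(text, true_label, all_labels, rng=random):
    """Pair the text with a hypothesis naming a class it does NOT belong to."""
    wrong_label = rng.choice([lab for lab in all_labels if lab != true_label])
    return {
        "premise": text,
        "hypothesis": f"This example is {wrong_label}.",
        "label": CONTRADICTION_ID,
    }

# With only one wrong option available, the choice is deterministic
ex = make_contradiction_example("some document text", "sports", ["sports", "politics"])
```

Is sampling one contradiction per entailment like this the usual approach, or is there a better recipe?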
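On (3): my current understanding (please correct me if this is wrong) is that num_labels should stay at 3, because the zero-shot pipeline reads scores off the 3-way NLI head. In the multi-label setting, for example, I believe it softmaxes the entailment logit against the contradiction logit and drops neutral, roughly:

```python
import math

def zero_shot_score(nli_logits):
    """nli_logits = [contradiction, neutral, entailment] for one hypothesis.

    Sketch of how (I believe) the zero-shot pipeline turns the 3-way NLI
    logits into a per-label score in the multi-label setting: softmax of
    entailment vs. contradiction, ignoring the neutral logit.
    """
    contradiction, _neutral, entailment = nli_logits
    return math.exp(entailment) / (math.exp(entailment) + math.exp(contradiction))
```

A head fine-tuned with a different num_labels would no longer expose those three logits, which is why I suspect changing it would break zero-shot inference — but I’d appreciate confirmation.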

I’d really appreciate any insights here!