Fine-tuning Zero-shot models

I am using facebook/bart-large-mnli for my text classification task. The labels used during inference will be a subset of a larger list of labels, so I want to fine-tune the model on a custom dataset of ~1000 examples.

I understand that @joeddav has explained the approach in this comment, but I am facing difficulties implementing it. Can anyone please share the snippet they used, or point me to any source that has implemented the fine-tuning?

What part are you having trouble with?

@anwarika I don’t understand how to set ‘entailment’ or ‘contradiction’ as a target.
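
Not an official recipe, but here is a minimal sketch of one way the NLI framing can be set up, in case it helps. Each labelled example becomes two premise/hypothesis pairs: one where the hypothesis names the true label (entailment target) and one with a randomly sampled wrong label (contradiction target). The `examples` list, `all_labels`, and the hypothesis template are placeholders for your own data:

```python
# A minimal sketch (not an official recipe) of recasting labelled examples as NLI pairs.
# `examples` and `all_labels` are placeholders for your own data; the hypothesis
# template mirrors the default used by the zero-shot pipeline.
import random
from transformers import AutoConfig, AutoTokenizer

model_name = "facebook/bart-large-mnli"
config = AutoConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Read the NLI class ids from the checkpoint's config instead of hard-coding them.
label2id = {name.lower(): idx for name, idx in config.label2id.items()}
ENTAILMENT_ID = label2id["entailment"]
CONTRADICTION_ID = label2id["contradiction"]

def build_nli_pairs(examples, all_labels, template="This example is {}."):
    premises, hypotheses, targets = [], [], []
    for text, label in examples:
        # One pair where the hypothesis names the true label -> entailment target.
        premises.append(text)
        hypotheses.append(template.format(label))
        targets.append(ENTAILMENT_ID)
        # One pair with a randomly sampled wrong label -> contradiction target.
        wrong = random.choice([l for l in all_labels if l != label])
        premises.append(text)
        hypotheses.append(template.format(wrong))
        targets.append(CONTRADICTION_ID)
    return premises, hypotheses, targets

# Toy data just to show the shapes; replace with your ~1000 examples.
examples = [("the stock market crashed today", "business")]
all_labels = ["business", "sports", "politics"]

premises, hypotheses, targets = build_nli_pairs(examples, all_labels)
encodings = tokenizer(premises, hypotheses, truncation=True, padding=True)
```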

So, just to understand: you have 1000 examples that are not properly labelled? If they are properly labelled, you could just fine-tune the model. There are some examples in the Transformers GitHub repo.

Could anyone please share some links to these examples? I haven’t found them yet. Thanks for your help.
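
At the time of writing, the Transformers repo has text-classification scripts under examples/pytorch/text-classification (e.g. run_glue.py) that show the general Trainer setup. Here is a rough sketch of the same idea applied to the NLI pairs from the snippet above, reusing `encodings` and `targets` from it; the `NLIDataset` wrapper and the hyperparameters are placeholders I made up, not anything official:

```python
# A rough sketch of fine-tuning on the pairs built above, reusing `encodings` and
# `targets` from the earlier snippet. Hyperparameters and paths are placeholders.
import torch
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

class NLIDataset(torch.utils.data.Dataset):
    """Wraps the tokenized premise/hypothesis pairs and their NLI target ids."""
    def __init__(self, encodings, targets):
        self.encodings = encodings
        self.targets = targets

    def __len__(self):
        return len(self.targets)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.targets[idx])
        return item

model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
train_dataset = NLIDataset(encodings, targets)

training_args = TrainingArguments(
    output_dir="./bart-mnli-finetuned",   # placeholder output path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
    logging_steps=50,
)

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```

After training, the saved checkpoint should still work with the zero-shot-classification pipeline, restricted to your subset of candidate labels at inference time.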