How to finetune the facebook/bart-large-mnli model using HF Trainer?

The peculiar thing about this model is that it takes both a premise (text) and a hypothesis (text), plus a label.
In the tutorials available, there is usually only one text field, so where should I put the second one?
Also, what kind of labels should I use: ‘1’ for entailment and ‘0’ for contradiction?
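To make the two-text-field part concrete, this is the kind of call I have in mind (just a sketch, assuming the standard sentence-pair form of the tokenizer call; the premise/hypothesis sentences are made up):

```python
from transformers import AutoTokenizer

# Sentence-pair encoding: the tokenizer accepts two positional text
# arguments (premise, hypothesis) and joins them with the model's
# separator tokens, so both fields end up in a single input_ids sequence.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")

premise = "A soccer game with multiple males playing."  # made-up example
hypothesis = "Some men are playing a sport."            # made-up example

enc = tokenizer(premise, hypothesis)
print(tokenizer.decode(enc["input_ids"]))
```

If that is the right pattern, then the second text field just goes in as the second argument and no extra column wrangling is needed at tokenization time.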

Also, can I use a simple pandas DataFrame with columns of the form Index(['input_ids', 'attention_mask', 'label'], dtype='object') as the training data for the HF Trainer, or do I need to add some more ‘sauce’?

I’ve just found this: glue · Datasets at Hugging Face