Error fine-tuning XLM-RoBERTa-Large during training

I followed the guide in the documentation HERE while fine-tuning a zero-shot classifier. Below are screenshots of my tokenization, compute_metrics, training arguments, and Trainer setup.
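Since the screenshots are not available as text, here is a minimal sketch of the compute_metrics piece for sequence classification, assuming the Hugging Face Trainer hands it a `(logits, labels)` pair; the function name and the accuracy metric are illustrative choices, not taken from the original post:

```python
import numpy as np

def compute_metrics(eval_pred):
    # The Trainer passes an EvalPrediction, which unpacks to (logits, labels).
    logits, labels = eval_pred
    # Some model heads return a tuple of outputs; the logits are the first element.
    if isinstance(logits, tuple):
        logits = logits[0]
    # Take the highest-scoring class per example.
    predictions = np.argmax(logits, axis=-1)
    # Fraction of predictions matching the gold labels.
    accuracy = float((predictions == labels).mean())
    return {"accuracy": accuracy}
```

A mismatch between the shape of the logits and the labels (or forgetting `axis=-1` in the argmax) is a common source of errors at evaluation time, so checking this function first is often worthwhile.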