Fine-tuning T5 on SQuAD v2 with Seq2SeqTrainer fails

Hi,
As the topic name says, I'm trying to fine-tune T5 with the built-in HF Seq2SeqTrainer, following the example here: transformers/examples/pytorch/question-answering/README.md at main · huggingface/transformers · GitHub
However, evaluation fails with a KeyError:
[screenshot of the KeyError traceback, 2023-06-19 13:15:48]
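
If I'm reading run_seq2seq_qa.py correctly, with --version_2_with_negative it evaluates using the squad_v2 metric from the evaluate library, which expects each prediction to carry a no_answer_probability key on top of id and prediction_text. Here is a minimal sketch of the format that metric accepts (the ids and texts are made up for illustration):

```python
# Minimal sketch of the prediction/reference format the squad_v2 metric
# expects; the example id and answer text below are made up.
import evaluate

metric = evaluate.load("squad_v2")

predictions = [{
    "id": "example-0",                # must match a reference id
    "prediction_text": "Denver Broncos",
    "no_answer_probability": 0.0,     # required by squad_v2, unlike squad
}]
references = [{
    "id": "example-0",
    "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
}]

print(metric.compute(predictions=predictions, references=references))
```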

I'm running the training process with the command below:
python run_seq2seq_qa.py \
  --model_name_or_path bigscience/mt0-large \
  --dataset_name squad_v2 \
  --context_column context \
  --question_column question \
  --answer_column answers \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 4 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ~/tmp \
  --pad_to_max_length False \
  --max_train_samples 100 \
  --max_eval_samples 100 \
  --version_2_with_negative True \
  --n_best_size 5 \
  --overwrite_output_dir True \
  --evaluation_strategy steps \
  --prediction_loss_only False \
  --log_level info \
  --logging_dir ~/mt0_logs \
  --report_to tensorboard \
  --logging_strategy steps \
  --logging_steps 50 \
  --save_strategy no \
  --disable_tqdm False \
  --predict_with_generate True
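
Outside the example script, the checkpoint itself seems to load and generate without problems. Here is the standalone sanity check I tried (assuming only transformers and datasets are installed; the prompt format is my own guess, not necessarily what the script builds internally):

```python
# Standalone sanity check: load the same checkpoint and generate an
# answer for one squad_v2 validation example.
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-large")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")

sample = load_dataset("squad_v2", split="validation[:1]")[0]
# Ad-hoc prompt format for illustration only.
prompt = f"question: {sample['question']} context: {sample['context']}"

inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=384)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```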

How can I resolve this issue?

@sgugger @valhalla Could you possibly help me with this?