Problem fine-tuning a model with Seq2SeqTrainer

Hi, I’m following the Summarization tutorial to fine-tune a BART-like model on the text summarization task.

training_args = Seq2SeqTrainingArguments(
   output_dir="./results",
   evaluation_strategy="epoch",
   learning_rate=2e-5,
   per_device_train_batch_size=8,
   per_device_eval_batch_size=8,
   weight_decay=0.01,
   save_total_limit=3,
   num_train_epochs=1,
   remove_unused_columns=False
)

trainer = Seq2SeqTrainer(
   model=model,
   args=training_args,
   train_dataset=tokenized_train_df,
   eval_dataset=tokenized_val_df,
   tokenizer=tokenizer,
   data_collator=data_collator,
)

trainer.train()

I have around 53,000 rows in my training dataset and 17,000 rows in my validation dataset. Even though I’m following every step from the tutorial, everything crashes without even an error message. Can anyone help me understand this?

We need to see how you instantiate the model.
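
For reference, a minimal sketch of what a typical instantiation looks like for a BART-like model, assuming the standard transformers Auto classes (the checkpoint name here is only a placeholder, not necessarily the one you are using):

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq

# Example checkpoint only; substitute the actual checkpoint you fine-tune.
checkpoint = "facebook/bart-base"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Pads inputs and labels dynamically per batch for seq2seq training.
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)

If your setup differs from this (for example a custom model class or a manually built config), please post that code, since that is often where the crash originates.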