I am getting this error. I am using a BART model that I have already fine-tuned with Seq2SeqTrainer, with the output directory set to a path on my Google Drive. Now I am trying to resume from the last checkpoint using the `resume_from_checkpoint` argument, but I get this error. Here is my code; the dataset I used is an IterableDataset.
tokenized_datasets = tokenized_datasets.with_format("torch")
training_args = Seq2SeqTrainingArguments(
output_dir="/content/gdrive/My Drive/Colab Notebooks/Code/models",
evaluation_strategy="epoch",
learning_rate=3e-5,
per_device_train_batch_size=4,
per_device_eval_batch_size=2,
weight_decay=0.01,
save_total_limit=1,
num_train_epochs=5,
predict_with_generate=True,
fp16=True,
save_strategy="epoch",
metric_for_best_model="eval_rouge1",
greater_is_better=True,
seed=41,
generation_max_length=max_target_length,
max_steps=10000,
load_best_model_at_end=True,
resume_from_checkpoint="/content/gdrive/My Drive/Colab Notebooks/Code/models"
)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
callbacks=[EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=0.0)]
)
trainer.train()
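For context on what I expect `resume_from_checkpoint` to pick up: as I understand it, the Trainer looks for `checkpoint-<step>` subfolders inside the output directory and resumes from the one with the highest step (what `transformers.trainer_utils.get_last_checkpoint` does). Below is a minimal, dependency-free sketch of that lookup, just to illustrate the directory layout I have on Drive; `latest_checkpoint` is my own illustrative helper, not part of the library.

```python
import os
import re
import tempfile

def latest_checkpoint(output_dir):
    """Return the path of the newest checkpoint-<step> subfolder, or None.

    A simplified stand-in for transformers.trainer_utils.get_last_checkpoint:
    it scans output_dir for directories named 'checkpoint-<number>' and picks
    the one with the largest step number.
    """
    pattern = re.compile(r"^checkpoint-(\d+)$")
    candidates = []
    for name in os.listdir(output_dir):
        match = pattern.match(name)
        if match and os.path.isdir(os.path.join(output_dir, name)):
            candidates.append((int(match.group(1)), name))
    if not candidates:
        return None
    _, newest = max(candidates)
    return os.path.join(output_dir, newest)

# Demo with a throwaway directory mimicking a Trainer output_dir
with tempfile.TemporaryDirectory() as d:
    for step in (500, 1000):
        os.makedirs(os.path.join(d, f"checkpoint-{step}"))
    print(os.path.basename(latest_checkpoint(d)))  # checkpoint-1000
```

So my output_dir on Drive does contain folders of that shape; the question is why resuming still fails.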