I’m trying to fine-tune a model without an evaluation dataset.
For that, I’m using the following code:
from sklearn.metrics import accuracy_score, f1_score
from transformers import EvalPrediction, Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir=resume_from_checkpoint,
    evaluation_strategy="epoch",
    per_device_train_batch_size=1,
)

def compute_metrics(pred: EvalPrediction):
    # Compute weighted F1 and accuracy from the model's logits.
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    f1 = f1_score(labels, preds, average="weighted")
    acc = accuracy_score(labels, preds)  # accuracy_score takes no `average` argument
    return {"accuracy": acc, "f1": f1}

trainer = Trainer(
    model=self.nli_model,
    args=training_args,
    train_dataset=tokenized_datasets,
    compute_metrics=compute_metrics,
)
However, I get ValueError: Trainer: evaluation requires an eval_dataset. I thought that by default, Trainer does no evaluation; at least, that's the impression I got from the docs.
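For reference, this is a minimal sketch of the training-only setup I was expecting to work, assuming evaluation_strategy defaults to "no" and reusing the same resume_from_checkpoint, self.nli_model, and tokenized_datasets as above:

from transformers import Trainer, TrainingArguments

# Training-only configuration: no eval_dataset and no compute_metrics.
training_args = TrainingArguments(
    output_dir=resume_from_checkpoint,
    evaluation_strategy="no",  # "no" is the documented default, so this could also be omitted
    per_device_train_batch_size=1,
)

trainer = Trainer(
    model=self.nli_model,
    args=training_args,
    train_dataset=tokenized_datasets,
)
trainer.train()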