I upgraded transformers from v4.9 to v4.32 and now the Trainer never reaches my compute_metrics function. Nothing in the code has changed besides the upgrade.
Debugging led me to the place where compute_metrics is being skipped, in Trainer.evaluation_loop() (line 3255):
# Metrics!
if self.compute_metrics is not None and all_preds is not None and all_labels is not None:
    if args.include_inputs_for_metrics:
        metrics = self.compute_metrics(
            EvalPrediction(predictions=all_preds, label_ids=all_labels, inputs=all_inputs)
        )
    else:
        metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
else:
    metrics = {}
The if condition is False because all_labels is None. I think this is caused by the prediction_step() call that happens before, which returns labels=None and loss=None.
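For reference, this is the kind of compute_metrics function that code path expects; a minimal sketch assuming a standard classification setup (the argmax over logits and the accuracy metric are my assumptions). It only ever runs when both all_preds and all_labels are populated:

```python
import numpy as np
from transformers import EvalPrediction

# Minimal compute_metrics sketch for a classification task (assumed setup).
# The Trainer only calls this when all_preds and all_labels are both non-None.
def compute_metrics(eval_pred: EvalPrediction):
    logits, labels = eval_pred.predictions, eval_pred.label_ids
    preds = np.argmax(logits, axis=-1)  # predicted class per example
    return {"accuracy": float((preds == labels).mean())}
```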
@SantiagoCorley @almoggu for me, the problem occurred in this line in the Trainer's prediction_step:
has_labels = False if len(self.label_names) == 0 else all(inputs.get(k) is not None for k in self.label_names)
My problem was that I had set label_names in the TrainingArguments to the keys of my label2id dictionary (the actual names of the labels I am trying to predict). However, the documentation says the following about label_names:
label_names (List[str], optional): The list of keys in your dictionary of inputs that correspond to the labels.
So, my problem was that I misunderstood the label_names argument.
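To make the mix-up concrete, here is a small sketch (the batch contents and the label2id mapping are hypothetical) of how that has_labels check evaluates in each case:

```python
# Hypothetical batch as produced by a tokenizer/data collator for classification.
inputs = {"input_ids": [[101, 2023, 102]], "attention_mask": [[1, 1, 1]], "labels": [1]}
label2id = {"negative": 0, "positive": 1}  # class names, not input keys

# Default behaviour (label_names not set): the Trainer looks for "labels".
label_names = ["labels"]
print(all(inputs.get(k) is not None for k in label_names))  # True -> labels are kept

# The mistake: passing the class names from label2id as label_names.
label_names = list(label2id)  # ["negative", "positive"]
print(all(inputs.get(k) is not None for k in label_names))  # False -> labels come back as None
```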
Simply not setting label_names in the TrainingArguments solved the problem for me.
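In other words (a sketch; output_dir and the label column name are placeholders for your own setup), either omit label_names or point it at the actual key in your input dictionary:

```python
from transformers import TrainingArguments

# Works: omit label_names so the Trainer falls back to its default (typically "labels").
training_args = TrainingArguments(output_dir="out")

# Also works, if your dataset/collator really produces a "labels" key:
# training_args = TrainingArguments(output_dir="out", label_names=["labels"])

# What caused the problem: class names from label2id are not input keys.
# training_args = TrainingArguments(output_dir="out", label_names=["negative", "positive"])
```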