Trainer doesn't get to compute_metrics after upgrading to v4.32

Hi,

I upgraded transformers from v4.9 to v4.32 and now the Trainer no longer calls my compute_metrics function. Nothing in the code has changed besides the upgrade.

Debugging led me to the place where compute_metrics gets skipped, in trainer.evaluation_loop() at line 3255:

    # Metrics!
    if self.compute_metrics is not None and all_preds is not None and all_labels is not None:
        if args.include_inputs_for_metrics:
            metrics = self.compute_metrics(
                EvalPrediction(predictions=all_preds, label_ids=all_labels, inputs=all_inputs)
            )
        else:
            metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
    else:
        metrics = {}

The if condition is False because all_labels is None. I think that's because of prediction_step(), which runs just before and returns labels=None and loss=None.
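For context, a quick way to compare what the evaluation loop will look for against what the dataset actually contains (a minimal sketch, assuming a trainer object and a tokenized eval_dataset like in my script):

    print(trainer.label_names)        # keys prediction_step() looks up in each batch, typically ["labels"]
    print(eval_dataset.column_names)  # should contain a matching label column ("labels" or "label")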

Would appreciate any help with that!
Thanks :slight_smile:


Hello,
I appear to have the same problem. Any updates?

I have the same issue :sob: I am tuning a very basic AutoModelForSequenceClassification for a multi-class problem with 10 labels.

@SantiagoCorley @almoggu For me, the problem occurred on this line in the Trainer's prediction_step():

has_labels = False if len(self.label_names) == 0 else all(inputs.get(k) is not None for k in self.label_names)
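In other words, has_labels is only True when every entry in label_names is an actual key of the collated batch. A tiny sketch with made-up values, just to illustrate the check:

    # A hypothetical collated batch, roughly what prediction_step() receives:
    inputs = {"input_ids": [[101, 2023, 102]], "attention_mask": [[1, 1, 1]], "labels": [3]}

    label_names = ["labels"]                                     # the usual default
    print(all(inputs.get(k) is not None for k in label_names))   # True -> labels are kept

    label_names = ["sentiment"]                                  # not a key in the batch
    print(all(inputs.get(k) is not None for k in label_names))   # False -> labels come back as None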

My problem was that I had set label_names in the TrainingArguments to the keys of my label2id dictionary (the actual names of the labels I am trying to predict). However, the documentation says the following about label_names:

label_names (List[str], optional) — The list of keys in your dictionary of inputs that correspond to the labels.

So, my problem was that I misunderstood the label_names argument.

Simply not setting label_names in the TrainingArguments solved the problem for me :muscle:
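In case it helps anyone else, here is a minimal sketch of the corrected setup (the model name, toy data, and accuracy metric are placeholders I made up, not from the posts above):

    import numpy as np
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    def compute_metrics(eval_pred):
        # eval_pred is an EvalPrediction(predictions=..., label_ids=...)
        preds = np.argmax(eval_pred.predictions, axis=-1)
        return {"accuracy": float((preds == eval_pred.label_ids).mean())}

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=10
    )

    # Toy data just to keep the example self-contained.
    raw = Dataset.from_dict({"text": ["a", "b", "c", "d"], "label": [0, 1, 2, 3]})
    tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

    args = TrainingArguments(
        output_dir="out",
        # Do NOT set label_names to the class names from label2id.
        # Left unset, it defaults to ["labels"], the key the data collator
        # produces from the dataset's "label" column.
    )

    trainer = Trainer(
        model=model,
        args=args,
        tokenizer=tokenizer,
        train_dataset=tokenized,
        eval_dataset=tokenized,
        compute_metrics=compute_metrics,
    )
    print(trainer.evaluate())  # the metrics dict now includes "eval_accuracy"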