Logits to probability conversion for compute_metrics() during fine-tuning with the Trainer class

I am fine-tuning RoBERTa-base for a binary text classification task, and I want to use the 'roc_auc' metric when evaluating between epochs. How do I correctly convert logits to probabilities for the compute_metrics function?

I have seen code for 'accuracy', but I want to check whether this code will work for 'roc_auc':

metric = load_metric("roc_auc")
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = logits.softmax(dim=-1)[0]
    # or
    return metric.compute(predictions=predictions, references=labels)
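For context, this is the direction I was thinking of: apply a softmax to the logits (they arrive as numpy arrays, not tensors) and pass only the positive-class probability as the prediction score. I am assuming the roc_auc metric wants a prediction_scores argument rather than predictions, so please correct me if that is wrong:

from scipy.special import softmax
from datasets import load_metric

metric = load_metric("roc_auc")

def compute_metrics(eval_pred):
    logits, labels = eval_pred            # numpy arrays: (n_examples, 2) logits and (n_examples,) labels
    probs = softmax(logits, axis=-1)      # convert logits to class probabilities
    positive_probs = probs[:, 1]          # probability of the positive class for each example
    return metric.compute(prediction_scores=positive_probs, references=labels)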

and then I want to feed it to the Trainer:

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    compute_metrics=compute_metrics,
)
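In case it matters, these are roughly the training arguments I am using so that evaluation (and compute_metrics) runs at the end of every epoch; the values here are placeholders, not my exact config:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-binary-cls",   # placeholder output directory
    evaluation_strategy="epoch",       # run evaluation and compute_metrics every epoch
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    num_train_epochs=3,
)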

Is the compute_metrics function correct?
