Adding accuracy, precision, recall and f1 score metrics during training

Hello!
I am fine-tuning the herbert-base model for token classification of named entities.

Basically I am going through this tutorial with minor changes to data preprocessing, pretrained base model and datasets.
I would love to see the evaluation F1 score and accuracy throughout training. What would be the most :hugs: way to add those metrics after every epoch? I see an option to load metrics from :hugs: datasets, but I cannot find where I should put the loaded objects.

I know I could override the Trainer class, write my own Callback, etc. But which way would be best? What would integrate seamlessly with TensorBoard?

Thank you in advance for your help!


Hi, you can define your own metric-computing function and pass it to the Trainer. Here is an example of computing metrics.

Define the accuracy metrics function:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(pred):
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted')
    acc = accuracy_score(labels, preds)
    return {
        'accuracy': acc,
        'f1': f1,
        'precision': precision,
        'recall': recall
    }
```
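One caveat for token classification: the predictions and labels are 2-D (batch × sequence length), and positions without a real label (special tokens, padding, sub-word continuations) are conventionally set to the ignore index -100 by the :hugs: data collator. A minimal sketch of adapting the function above to flatten the arrays and mask those positions (the shapes and the -100 convention are assumptions based on the standard token-classification setup; `compute_token_metrics` is just an illustrative name):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_token_metrics(pred):
    # labels: (batch_size, seq_len); predictions: (batch_size, seq_len, num_labels)
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    # keep only positions that carry a real label (ignore index is -100)
    mask = labels != -100
    labels, preds = labels[mask], preds[mask]
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average='weighted', zero_division=0
    )
    return {
        'accuracy': accuracy_score(labels, preds),
        'f1': f1,
        'precision': precision,
        'recall': recall,
    }
```

Either version is wired up the same way, e.g. `Trainer(..., compute_metrics=compute_token_metrics)`; the returned dict is logged at every evaluation, so the TensorBoard callback picks the metrics up without any extra work.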