I have this:
But I need something like this:
class  precision
0      0.962
1      0.516
Hello! I want to follow up on this thread - I have been trying to do the same thing and I am unable to figure out how to get accuracy and precision for each class in a classification task. Please help!
One approach might be to create your own metric function for that. I'll write one for illustration only; I haven't tested it, and it's just based on my own (very possibly flawed) understanding, but hopefully it's a good starting point.
import numpy as np

def compute_metrics(eval_preds):
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)
    classes = set(labels)  # here I'm assuming predictions & labels are just simple python lists/arrays of class ids
    # per-class precision: correct predictions of class c / total predictions of class c
    return {"precision": {c: len([0 for i in range(len(predictions)) if predictions[i] == labels[i] == c])
                             / len([0 for i in range(len(predictions)) if predictions[i] == c]) for c in classes}}
The weird counting at the end is super inefficient, but it should make it very obvious what I'm doing. Similar code can be written for accuracy/recall/f1 etc.
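If you'd rather not hand-roll the counting, scikit-learn can do the per-class math for you. Here's a rough, untested sketch along the same lines - it assumes the same logits/labels layout as above and that you have scikit-learn installed, and the fake_logits/fake_labels at the end are just made-up example inputs:

import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_preds):
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)
    classes = sorted(set(labels))
    # average=None (the default) returns one value per class instead of a single aggregate
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, predictions, labels=classes, zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, predictions),
        "precision": dict(zip(classes, precision)),
        "recall": dict(zip(classes, recall)),
        "f1": dict(zip(classes, f1)),
    }

# tiny made-up example: 3 predictions for a 2-class problem
fake_logits = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
fake_labels = np.array([0, 1, 1])
print(compute_metrics((fake_logits, fake_labels)))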
Here's a nice article that walks through the calculations behind this: Accuracy, Precision, Recall or F1? | by Koo Ping Shung | Towards Data Science
I hope this helps!