I’ve started playing with the `evaluate` library, following the quick tour, on a multiclass classification problem, and I have a few doubts. I’d like to use, for example,
`metrics = evaluate.combine(['precision', 'recall'])`, but when calling
`metrics.compute(references=[2, 2, 1, 0], predictions=[2, 1, 1, 2], average='weighted')` the `average` argument does not seem to reach the underlying metrics, and I get:
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
It’s not a big concern, since I can compute the metrics individually, but I was wondering whether there’s a way to make the combined call work.
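For reference, here is what the individual computation should give on that toy example. This is a cross-check written against scikit-learn rather than `evaluate` (my assumption being that the two agree on these definitions); `zero_division=0` silences the warning for class 0, which is never predicted:

```python
from sklearn.metrics import precision_score, recall_score

references = [2, 2, 1, 0]
predictions = [2, 1, 1, 2]

# average='weighted': per-class scores weighted by each class's support.
p = precision_score(references, predictions, average="weighted", zero_division=0)
r = recall_score(references, predictions, average="weighted", zero_division=0)
print(p, r)  # 0.375 0.5
```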
On the same topic, but regarding the
evaluator approach: is it possible to combine metrics when calling, for example,
`evaluate.evaluator('text-classification').compute(...)`, and/or to pass the required `average` strategy for multiclass problems?
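In the meantime, a tiny hand-rolled combiner gives similar ergonomics to `combine` while forwarding shared kwargs such as `average` to every metric. This is just a sketch of mine, not an `evaluate` API, built on scikit-learn's metric callables (`functools.partial` pins per-metric arguments up front):

```python
from functools import partial
from sklearn.metrics import precision_score, recall_score

def combine_metrics(metric_fns):
    """Build compute(references, predictions, **kwargs) over named metric callables.

    Shared kwargs such as average= are forwarded to every metric.
    """
    def compute(references, predictions, **kwargs):
        return {name: fn(references, predictions, **kwargs)
                for name, fn in metric_fns.items()}
    return compute

metrics = combine_metrics({
    "precision": partial(precision_score, zero_division=0),
    "recall": partial(recall_score, zero_division=0),
})

# Same toy data as above: weighted precision 0.375, weighted recall 0.5.
scores = metrics([2, 2, 1, 0], [2, 1, 1, 2], average="weighted")
```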
Best regards, and thanks for the great work.