I’m working on creating a custom evaluation metric using the Hugging Face evaluate library, following this guide:
My custom metric doesn’t rely on the default predictions and references parameters, but the compute() method from the EvaluationModule class seems to require them to be passed, even though they are irrelevant for my use case.
The issue I’m facing is that calling compute() without predictions and references raises errors, because the method expects these inputs to have been added (via add() or add_batch()) beforehand. For my custom metric, however, these arguments are unnecessary.
Does anyone know how I can work around this to use compute() without needing predictions and references?
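To make it concrete, a stripped-down version of what I’m trying to do looks roughly like this (the class name, the declared features, and model_id are just placeholders, not my real metric):

import datasets
import evaluate


class MyPostTrainingMetric(evaluate.Metric):
    """Hypothetical metric meant to evaluate a model after training,
    not to compare predictions against references."""

    def _info(self):
        return evaluate.MetricInfo(
            description="Evaluates a model post-training.",
            citation="",
            inputs_description="The real input is a model identifier, not predictions/references.",
            # The interface still makes me declare input features here,
            # even though my metric never uses them.
            features=datasets.Features(
                {
                    "predictions": datasets.Value("int64"),
                    "references": datasets.Value("int64"),
                }
            ),
        )

    def _compute(self, predictions, references, model_id=None):
        # Only model_id matters here; predictions/references are ignored.
        return {"my_score": 0.0}


metric = MyPostTrainingMetric()
# This is where it breaks for me: compute() expects predictions/references
# to have been provided (directly or via add()/add_batch()) before it
# ever reaches _compute().
results = metric.compute(model_id="some-model-id")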
Thanks for the response, but it’s not quite what I’m looking for.
I want to add a new metric to Hugging Face Spaces so it can be accessible to everyone. The metric is designed to evaluate models post-training.
However, my metric doesn’t use the predictions and references parameters in the compute method, as shown in the guide. I’m asking if there’s a solution to this issue.
Hmm, so you really do need to inherit from EvaluationModule (or evaluate.Metric). It’s hard to change the required arguments themselves in this case, so the only workaround I can think of is to accept predictions and references as dummy arguments and simply discard them, for example:
def _compute(self, predictions=None, references=None):
    # Accept predictions/references only to satisfy the interface,
    # then ignore them entirely.
    # em = sum(r == p for r, p in zip(references, predictions)) / len(references)
    em = True  # placeholder result; put your real post-training logic here
    return {"exact_match": em}
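A rough usage sketch (MyMetric is just a placeholder name for whichever subclass holds the _compute above):

metric = MyMetric()
# Pass throwaway values just to satisfy the predictions/references
# requirement; they never influence the result.
results = metric.compute(predictions=[0], references=[0])
print(results)  # {'exact_match': True}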