Add a new evaluation metric without using the evaluate-cli

Hi, I am working on developing an evaluation framework for my deep learning project. I want to know if there is an easy way of adding/registering custom evaluation metrics so that I can invoke them as follows:

custom_metric.py

@register_metric("custom_metric")
def custom_metric(references, predictions):
    ...
    return results

evaluation.py

import evaluate

metrics_to_use = [
    evaluate.load("accuracy"),
    evaluate.load("f1"),
    evaluate.load("custom_metric")
]

...

results = [metric.compute(references=references, predictions=predictions) for metric in metrics_to_use]

Thanks!