Custom metrics with extra data?

I have a multi-class model that I want to evaluate (during training, on the eval set) using additional data that is contained in the dataset. The extra data is a column that provides a grouping: a document id.

These are the options I am considering:

  • The compute_metrics function you can pass into the Trainer is only given an EvalPrediction, which contains the predictions and labels but not the extra data, so this is not possible.
  • I can subclass the Trainer and override evaluation_loop. It would be very much a copy-paste effort, with the call to compute_metrics overridden to pass in the extra data. I also don’t know whether this would affect training time.
  • I can use a callback at the end of each epoch and calculate the metrics I want outside of the evaluation loop. I’m not sure whether I will have access to all the data there, though.

Any other options that I am missing? What is the best way of accessing extra data in the evaluation metrics?


cc @lvwerra do you have an idea?

Thanks for calling attention to it. I ended up choosing the last option and implemented an end-of-epoch callback. It worked well; unfortunately, it also meant I had to implement saving the best model myself, since it’s the same “metric” that is used for that.
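For anyone following the same route, here is a minimal sketch of what such an end-of-epoch callback could look like. It assumes a doc_id column in the eval dataset and a hypothetical grouped_score helper; trainer.predict is used to get per-example predictions outside the normal evaluation loop, and the back-reference to the trainer is wired up by hand after the Trainer is built.

from transformers import Trainer, TrainerCallback

class GroupedMetricCallback(TrainerCallback):
    """Recompute a metric grouped by document id after every evaluation."""

    def __init__(self, eval_dataset):
        self.eval_dataset = eval_dataset  # keeps the extra `doc_id` column
        self.trainer = None               # set after the Trainer is built

    def on_evaluate(self, args, state, control, **kwargs):
        # Run a prediction pass to get per-example predictions and labels.
        output = self.trainer.predict(self.eval_dataset)
        preds = output.predictions.argmax(-1)
        doc_ids = self.eval_dataset["doc_id"]
        # grouped_score is a hypothetical helper that aggregates per document id.
        score = grouped_score(preds, output.label_ids, doc_ids)
        print(f"epoch {state.epoch}: grouped score = {score:.4f}")

callback = GroupedMetricCallback(eval_dataset)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset,
                  eval_dataset=eval_dataset, callbacks=[callback])
callback.trainer = trainer  # the callback needs the trainer to call predict()

Note that this runs a second prediction pass over the eval set on top of the Trainer’s own evaluation, and, as mentioned above, the score never reaches the Trainer’s metric_for_best_model logic, so best-model saving has to be handled separately.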


An alternative way might be to either define a global variable or create something like a compute_metric factory:

def compute_metric_with_extra(extra_data):
    def compute_metric(eval_pred):
        preds, refs = eval_pred.predictions, eval_pred.label_ids
        # do something with preds, refs and extra_data to get `score`
        return {"my_metric": score}
    return compute_metric

compute_metric = compute_metric_with_extra(extra_data)

You should be able to pass this function to the trainer. Sorry I was too late :slight_smile:
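For completeness, a minimal sketch of wiring this up; apart from the compute_metrics argument, the variable names here are just illustrative:

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metric_with_extra(extra_data),
)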


@lvwerra Thanks for the workaround. I have a related follow-up. In my use case, the metric depends on each individual datapoint (say each datapoint has a unique “datapoint ID”), and hence, while calculating the metric for a particular result in “preds”, I also need to know which datapoint ID it corresponds to. So, alongside the predictions, I would ideally like to have “input_ids”, “labels” and “datapoint_id” available as well. Is there any way to achieve this?


@lvwerra’s suggestion can be made to work the way you want by passing the datapoint_id values aligned with the labels, as sketched below.
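A minimal sketch of that alignment, assuming the evaluation order matches the dataset order (the usual behaviour, since the eval dataloader is not shuffled) and a datapoint_id column in the eval dataset; the metric itself is just a placeholder accuracy:

import numpy as np

def compute_metric_with_ids(datapoint_ids):
    datapoint_ids = np.asarray(datapoint_ids)  # same order as the eval dataset
    def compute_metric(eval_pred):
        preds = eval_pred.predictions.argmax(-1)
        refs = eval_pred.label_ids
        # preds[i], refs[i] and datapoint_ids[i] now describe the same example
        per_example = {d: float(p == r) for d, p, r in zip(datapoint_ids, preds, refs)}
        return {"accuracy": float(np.mean(list(per_example.values())))}
    return compute_metric

compute_metrics = compute_metric_with_ids(eval_dataset["datapoint_id"])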

Hi @anmolagarwal999 @Agniva, I wonder if you could share more details on how you implemented this? In my case, aligning either the predictions or the input_ids to a “datapoint ID” (with some sort of dictionary look-up?) would be too expensive for a large set of long sequences. Alternatively, I am thinking I could “hack” EvalPrediction.inputs (i.e. the input_ids, if I’m not mistaken) or somehow take advantage of the index order. However, I am not confident that either of the two alternatives would work. May I get your opinions on this? Many thanks!
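On the first alternative: depending on your transformers version, EvalPrediction.inputs is only populated when you opt in via the include_inputs_for_metrics training argument. A rough sketch of that route (how you map the input_ids back to datapoint ids is up to your data and is left as a placeholder here):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    include_inputs_for_metrics=True,  # makes the Trainer fill EvalPrediction.inputs
)

def compute_metric(eval_pred):
    input_ids = eval_pred.inputs   # tokenized inputs, in the same order as the predictions
    preds = eval_pred.predictions.argmax(-1)
    # map rows of input_ids (or their positions) back to datapoint ids here
    return {"n_examples": len(preds)}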

Is there a way to add an external variable that changes during training (like the model, for example)?
I want to evaluate a CLIP model on zero-shot classification, i.e. on labels that are not in EvalPrediction.label_ids.
I can pass these labels as extra_data as explained in the answers above, but their embeddings will change as the model trains.
Is there a way to do this?
Thanks!
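One hedged sketch for that case: the factory’s inner function closes over whatever you pass in, so you can pass the live model itself, and the label embeddings are then recomputed with the current weights at every evaluation instead of being frozen when the function is created. The names below (processor, candidate_labels) and the assumptions that eval_pred.predictions holds image embeddings and that label_ids index into candidate_labels are illustrative, not part of any API:

import torch

def zero_shot_metric_with(model, processor, candidate_labels):
    def compute_metric(eval_pred):
        # Re-encode the candidate labels with the *current* model weights.
        with torch.no_grad():
            text_inputs = processor(text=candidate_labels, return_tensors="pt", padding=True)
            text_emb = model.get_text_features(**text_inputs.to(model.device))
            text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
        # Assumes the eval loop is set up so that predictions are image embeddings.
        image_emb = torch.tensor(eval_pred.predictions).to(text_emb.device)
        image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
        preds = (image_emb @ text_emb.T).argmax(dim=-1).cpu().numpy()
        # Assumes label_ids index into candidate_labels.
        acc = float((preds == eval_pred.label_ids).mean())
        return {"zero_shot_accuracy": acc}
    return compute_metric

compute_metrics = zero_shot_metric_with(model, processor, class_names)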

Is the order of the EvalPrediction that goes into compute_metrics the same as the order of the evaluation dataset?