Log Perplexity using Trainer

Hi there,
I am wondering what the best way would be to also report and log perplexity during the training loop via the Trainer API. What would the corresponding compute_metrics function look like? So far I have tried without success, since I am not 100% sure what the EvalPrediction output looks like.
Thanks in advance :slight_smile:
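One possible sketch: for a causal LM, EvalPrediction carries the model's logits in `.predictions` and the token ids in `.label_ids` (with `-100` marking ignored positions), so perplexity can be computed as `exp` of the mean shifted cross-entropy. Note this assumes the Trainer accumulates all logits, which can be memory-heavy for large vocabularies; the shapes and the `-100` convention below are the standard ones, but double-check them against your setup.

```python
import numpy as np

def compute_metrics(eval_pred):
    # eval_pred.predictions: logits of shape (batch, seq_len, vocab_size)
    # eval_pred.label_ids:   token ids of shape (batch, seq_len), -100 = ignore
    logits, labels = eval_pred.predictions, eval_pred.label_ids

    # Shift for causal LM: position t predicts token t+1.
    logits = logits[:, :-1, :]
    labels = labels[:, 1:]
    mask = labels != -100

    # Numerically stable log-softmax over the vocabulary axis.
    log_probs = logits - logits.max(axis=-1, keepdims=True)
    log_probs = log_probs - np.log(np.exp(log_probs).sum(axis=-1, keepdims=True))

    # Gather the log-probability of each target token (clip -100 to a valid index;
    # those positions are masked out below anyway).
    token_log_probs = np.take_along_axis(
        log_probs, np.clip(labels, 0, None)[..., None], axis=-1
    ).squeeze(-1)

    # Mean negative log-likelihood over non-ignored tokens, then exponentiate.
    nll = -(token_log_probs * mask).sum() / mask.sum()
    return {"perplexity": float(np.exp(nll))}
```

Passed as `Trainer(..., compute_metrics=compute_metrics)`, this would show up as `eval_perplexity` in the logs at every evaluation step.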


This brings me to an adjacent question: what would a compute_metrics function look like that can also report the relative change in perplexity (or train_loss, respectively)? I would be super grateful if anyone could provide a little guidance!
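One way to sketch this: since compute_metrics is called once per evaluation, a closure can remember the previous value and add its relative change on each subsequent call. The wrapper and the metric name `rel_change_<key>` below are my own invention, not a Trainer API; it wraps any existing compute_metrics that returns the metric of interest.

```python
def with_relative_change(compute_metrics, key="perplexity"):
    """Wrap a compute_metrics function so it also reports the relative
    change of `key` between successive evaluations.

    The first call reports only the base metrics (there is no previous
    value to compare against yet)."""
    prev = {"value": None}

    def wrapped(eval_pred):
        metrics = compute_metrics(eval_pred)
        current = metrics[key]
        if prev["value"] is not None:
            # (new - old) / old, e.g. -0.25 means a 25% drop since last eval.
            metrics[f"rel_change_{key}"] = (current - prev["value"]) / prev["value"]
        prev["value"] = current
        return metrics

    return wrapped
```

You would then pass `compute_metrics=with_relative_change(compute_metrics)` to the Trainer. For the relative change of the training loss itself, a `TrainerCallback` watching `on_log` would be the analogous place to keep the previous value, since train_loss never reaches compute_metrics.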