I am wondering what the best way would be to also compute and log perplexity during the training loop via the Trainer API. What would the corresponding compute_metrics function look like? So far I have tried without success, since I am not entirely sure what the EvalPrediction output looks like.

Thanks in advance!
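From what I can tell from the docs, something along these lines might work for a causal LM. This is only an untested sketch, and it assumes `eval_pred.predictions` contains just the logits (if the model returns additional outputs, `predictions` is a tuple):

```python
import math

import torch
from transformers import EvalPrediction


def compute_metrics(eval_pred: EvalPrediction):
    # By default, eval_pred.predictions holds the logits for the whole eval
    # set as a NumPy array of shape (num_samples, seq_len, vocab_size), and
    # eval_pred.label_ids holds the labels, with -100 marking ignored tokens.
    logits = torch.from_numpy(eval_pred.predictions).float()
    labels = torch.from_numpy(eval_pred.label_ids).long()

    # Causal LM: shift so that tokens < n are used to predict token n.
    shift_logits = logits[..., :-1, :].contiguous()
    shift_labels = labels[..., 1:].contiguous()

    loss = torch.nn.functional.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,
    )
    # Perplexity is the exponential of the mean cross-entropy per token.
    return {"perplexity": math.exp(loss.item())}
```

One caveat I am aware of: `eval_pred.predictions` accumulates the full logits over the entire eval set, which can be very memory-hungry for large vocabularies. The simpler route from the docs is to skip compute_metrics entirely and compute `math.exp(trainer.evaluate()["eval_loss"])` after evaluation, or to reduce the logits before accumulation via the `preprocess_logits_for_metrics` argument of Trainer.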
This brings me to an adjacent question: what would a compute_metrics function look like that can also report the relative change of the train_loss? I would be super grateful if anyone could provide a little guidance!
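As far as I understand, compute_metrics only sees the evaluation predictions, so the training loss is not available inside it; a TrainerCallback might be the better fit. A rough, untested sketch (the `"loss"` key is where the Trainer reports its running training loss in the logs dict):

```python
from transformers import TrainerCallback


class RelativeTrainLossChange(TrainerCallback):
    """Prints the relative change of the training loss between logging steps."""

    def __init__(self):
        self.prev_loss = None

    def on_log(self, args, state, control, logs=None, **kwargs):
        # The Trainer reports its running training loss under the "loss" key.
        if logs is None or "loss" not in logs:
            return
        loss = logs["loss"]
        if self.prev_loss is not None and self.prev_loss != 0:
            rel_change = (loss - self.prev_loss) / self.prev_loss
            print(f"step {state.global_step}: relative train-loss change = {rel_change:+.2%}")
        self.prev_loss = loss
```

It would be registered via `trainer = Trainer(..., callbacks=[RelativeTrainLossChange()])`. Note that on_log fires after the Trainer has already written its logs, so this sketch only prints the value rather than merging it into the Trainer's own log output.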
Hi, I am also very much looking forward to seeing how perplexity can be computed inside compute_metrics() while fine-tuning pretrained models for MLM or CLM.
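In case it helps: for MLM the logits and labels should already be aligned (only masked positions carry a label, the rest are set to -100 by the data collator), so presumably the shifting step from the causal-LM sketch above can simply be dropped, e.g.:

```python
# MLM variant (untested): logits and labels are aligned, so no shift is needed;
# cross_entropy with ignore_index=-100 averages only over the masked positions.
loss = torch.nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)),
    labels.reshape(-1),
    ignore_index=-100,
)
```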