Hi there,
I am wondering what the best way would be to also report and log perplexity during the training loop via the Trainer API. What would the corresponding compute_metrics function look like? So far I have tried without success, since I am not 100% sure what the EvalPrediction output looks like.
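For concreteness, here is roughly the kind of thing I am after: an untested sketch that assumes EvalPrediction carries the raw logits in .predictions and the -100-masked labels in .label_ids, for a causal LM:

```python
import math

import torch
import torch.nn.functional as F


def compute_metrics(eval_pred):
    # Assumption: .predictions holds the raw logits as a NumPy array of shape
    # (num_samples, seq_len, vocab_size) and .label_ids the token ids of shape
    # (num_samples, seq_len), with -100 marking positions to ignore.
    logits = torch.from_numpy(eval_pred.predictions)
    labels = torch.from_numpy(eval_pred.label_ids)

    # Causal LM: position t predicts token t+1, so shift by one.
    shift_logits = logits[..., :-1, :].contiguous()
    shift_labels = labels[..., 1:].contiguous()

    # Mean cross-entropy over all non-ignored tokens; its exponential
    # is the perplexity.
    loss = F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,
    )
    return {"perplexity": math.exp(loss.item())}
```

One thing I am unsure about is memory: as far as I understand, Trainer accumulates all logits before calling compute_metrics, which can get large for a big vocabulary.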
Thanks in advance
Simon
This brings me to an adjacent question: what would a compute_metrics function look like that can also report the relative change of perplexity, or of train_loss respectively? I would be super grateful if anyone could provide a little guidance!
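The direction I am currently exploring (untested, and the callback name is my own invention) is to keep the previous value around in a TrainerCallback and compare it against the fresh eval_loss, rather than doing this inside compute_metrics:

```python
import math

from transformers import TrainerCallback


class PerplexityChangeCallback(TrainerCallback):
    """Hypothetical callback: after each evaluation, derive perplexity from
    eval_loss and report its relative change versus the previous evaluation."""

    def __init__(self):
        self.prev_perplexity = None

    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        if not metrics or "eval_loss" not in metrics:
            return
        perplexity = math.exp(metrics["eval_loss"])
        if self.prev_perplexity is not None:
            rel_change = (perplexity - self.prev_perplexity) / self.prev_perplexity
            print(f"step {state.global_step}: perplexity={perplexity:.2f} "
                  f"({rel_change:+.2%} vs. previous eval)")
        self.prev_perplexity = perplexity
```

It would be passed via Trainer(..., callbacks=[PerplexityChangeCallback()]); I imagine the same pattern could track train_loss from the logged history, but I have not tried that yet.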
Hi, I am also very much looking forward to seeing how perplexity can be used during the execution of compute_metrics() while fine-tuning pretrained models for MLM or CLM.
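In case it helps in the meantime: if I read the official run_clm.py / run_mlm.py example scripts correctly, they sidestep compute_metrics entirely and just take the exponential of the evaluation loss, which should work for both objectives because the model computes the appropriate cross-entropy internally:

```python
import math

# Sketch: `trainer` is an already-configured transformers.Trainer.
# eval_loss is the mean cross-entropy over the evaluation set, so its
# exponential is the perplexity -- for MLM as well as CLM.
eval_results = trainer.evaluate()
print(f"perplexity = {math.exp(eval_results['eval_loss']):.2f}")
```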