Trainer: log my custom metrics at training step

If you don’t use gradient accumulation, I usually just hack it by overriding Trainer.compute_loss and tucking in one line like self.log({"my_metric": compute_my_metric(outputs)}) — note that Trainer.log expects a dict of floats.
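Here is a minimal sketch of what I mean. compute_my_metric is a hypothetical helper (here it just averages the absolute logits), and the model is assumed to return an output object carrying loss and logits; recent transformers versions pass extra arguments to compute_loss, which the **kwargs absorbs:

```python
from transformers import Trainer


def compute_my_metric(outputs) -> float:
    # Hypothetical stand-in metric: mean absolute logit, as a plain float.
    return outputs.logits.abs().mean().item()


class MyTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        outputs = model(**inputs)
        loss = outputs.loss
        # Trainer.log expects a dict of scalar floats.
        self.log({"my_metric": compute_my_metric(outputs)})
        return (loss, outputs) if return_outputs else loss
```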

If you do use gradient accumulation, one alternative is to trigger a custom callback, as described in Metrics for Training Set in Trainer - #7 by Kaveri. For example, you can run one forward pass over the entire train set in on_epoch_end or on_evaluate, as sketched below. It repeats work, so it will be slow and coarse-grained.
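A rough sketch of that callback approach, assuming your train_dataset and data_collator produce batches of tensors the model accepts, and reusing the hypothetical compute_my_metric from above:

```python
import torch
from torch.utils.data import DataLoader
from transformers import TrainerCallback


def compute_my_metric(outputs) -> float:
    # Same hypothetical helper as in the compute_loss sketch.
    return outputs.logits.abs().mean().item()


class TrainMetricCallback(TrainerCallback):
    def __init__(self, train_dataset, data_collator, batch_size=32):
        self.train_dataset = train_dataset
        self.data_collator = data_collator
        self.batch_size = batch_size

    def on_epoch_end(self, args, state, control, model=None, **kwargs):
        loader = DataLoader(
            self.train_dataset,
            batch_size=self.batch_size,
            collate_fn=self.data_collator,
        )
        model.eval()
        values = []
        with torch.no_grad():
            for batch in loader:
                batch = {k: v.to(model.device) for k, v in batch.items()}
                values.append(compute_my_metric(model(**batch)))
        model.train()
        # Callbacks get no trainer handle here, so just print; you could
        # instead stash the value and log it from your own code.
        print(f"epoch {state.epoch}: my_metric = {sum(values) / len(values):.4f}")
```

You would register it with trainer.add_callback(TrainMetricCallback(train_dataset, data_collator)) before calling trainer.train().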

And let me know if you’ve figured out an easier way to log a custom loss!
