Logs of training and validation loss

I know this is late, but I'll leave this here for future reference in case anyone comes across this post. I think an even less cowboy way would be to use a callback:

class LogCallback(transformers.TrainerCallback):
    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        # `metrics` holds eval_loss plus anything returned by compute_metrics
        print(metrics)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=valid_dataset,
    compute_metrics=compute_metrics,
    callbacks=[LogCallback()],
)
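For `on_evaluate` to fire periodically during training, evaluation has to be scheduled in `TrainingArguments`. A minimal sketch (the argument names are from `transformers`; the step counts and output path are placeholder values):

```python
from transformers import TrainingArguments

# Hypothetical values; tune eval_steps/logging_steps to your run length.
training_args = TrainingArguments(
    output_dir="./results",        # placeholder path
    evaluation_strategy="steps",   # run evaluation every eval_steps
    eval_steps=500,
    logging_steps=100,             # log training loss every 100 steps
)
```

With this, training loss is logged every `logging_steps` and the callback's `on_evaluate` runs every `eval_steps`.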

Another even less cowboy way (without implementing anything) is to rely on the `logging_steps` argument and friends: you can access the accumulated logs after training is complete:

trainer.state.log_history

It should contain the metrics and losses from every logged step of training. Hope this helps someone in the future.
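`log_history` is a list of dicts, where training steps carry a `loss` key and evaluation steps carry `eval_loss`. A quick sketch of pulling the two curves apart (the entries below are made-up placeholder values, not real training output):

```python
# Hypothetical stand-in for trainer.state.log_history
log_history = [
    {"loss": 0.9, "step": 10},
    {"loss": 0.7, "step": 20},
    {"eval_loss": 0.8, "step": 20},
]

# Separate training-loss entries from eval-loss entries
train_loss = [(e["step"], e["loss"]) for e in log_history if "loss" in e]
eval_loss = [(e["step"], e["eval_loss"]) for e in log_history if "eval_loss" in e]

print(train_loss)  # [(10, 0.9), (20, 0.7)]
print(eval_loss)   # [(20, 0.8)]
```

These pairs can then go straight into a plot of training vs. validation loss.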
