Hey there.
I’m using the Hugging Face Trainer to finetune my model, and I use TensorBoard to display the metrics.
I find that the Trainer only logs the train loss returned by the model output. However, I wonder if there is a way for me to log more information during the train step, such as my own loss, which is one component of the train loss.
I checked the Trainer code:
def _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval):
    if self.control.should_log:
        if is_torch_tpu_available():
            xm.mark_step()

        logs: Dict[str, float] = {}

        # all_gather + mean() to get average loss over all processes
        tr_loss_scalar = self._nested_gather(tr_loss).mean().item()

        # reset tr_loss to zero
        tr_loss -= tr_loss

        logs["loss"] = round(tr_loss_scalar / (self.state.global_step - self._globalstep_last_logged), 4)
        logs["learning_rate"] = self._get_learning_rate()

        self._total_loss_scalar += tr_loss_scalar
        self._globalstep_last_logged = self.state.global_step
        self.store_flos()

        self.log(logs)
It looks like this code fixes which metrics get logged, so I don't have the chance to add my own metric during the training step.
I would like to know if there are some ways to log custom metrics.
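One approach I've been considering is subclassing `Trainer` and overriding `compute_loss`, calling `self.log()` there. This is only a sketch of the idea, and it assumes the model's forward returns an extra field (here called `my_loss`, a hypothetical name) carrying the component loss:

```python
from transformers import Trainer


class MyTrainer(Trainer):
    """Trainer subclass that logs an extra loss term alongside the main loss."""

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        outputs = model(**inputs)
        loss = outputs.loss  # the combined train loss the Trainer already logs

        # `my_loss` is a hypothetical extra field the model's forward would
        # need to return; log it only on logging steps to avoid flooding
        # TensorBoard with one entry per step
        my_loss = getattr(outputs, "my_loss", None)
        if my_loss is not None and self.state.global_step % self.args.logging_steps == 0:
            self.log({"my_loss": my_loss.detach().item()})

        return (loss, outputs) if return_outputs else loss
```

Note that, unlike the built-in `loss` entry, this logs the raw per-step value rather than an average over the steps since the last log, so the curve will be noisier.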
Cheers & thanks for any tips!