I have a basic problem, but I can’t find a solution. I looked into some older threads saying it has something to do with eval_steps and gradient accumulation, but that doesn’t seem to help.
I want to see the train_loss and eval_loss every X steps.
Here’s my code:
training_args = TrainingArguments(
    # ... other arguments omitted ...
    lr_scheduler_type='cosine_with_restarts',
    load_best_model_at_end=True,
)

trainer = Trainer(
    # ... model, args, and datasets omitted ...
    data_collator=data_collator,
    callbacks=[transformers.EarlyStoppingCallback(early_stopping_patience=10)],
)
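From those threads, my understanding is that something like the following should make both losses show up every X steps (argument names are from the TrainingArguments docs; X and output_dir are placeholders I picked for the sketch):

from transformers import TrainingArguments

X = 100  # placeholder: log/eval interval in steps

training_args = TrainingArguments(
    output_dir='out',             # placeholder
    logging_strategy='steps',     # log training loss to console/history
    logging_steps=X,              # every X optimizer steps
    evaluation_strategy='steps',  # 'eval_strategy' in newer transformers releases
    eval_steps=X,                 # evaluate (and log eval_loss) every X steps
)

As far as I understand, these intervals are counted in optimizer steps, so with gradient accumulation the logs appear less often than the raw batch count would suggest.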
If I run log_history = trainer.state.log_history at the end of the run, the log history shows up just fine. I don’t know why the loss isn’t being printed to the console during training. The only thing I see is when it goes through eval, but not the actual eval loss (or train loss).
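For completeness, this is roughly how I inspect the history afterwards (each entry in log_history is a plain dict, so I just filter on the loss keys):

# Training logs carry 'loss', eval logs carry 'eval_loss'.
for entry in trainer.state.log_history:
    if 'loss' in entry:
        print(f"step {entry['step']}: train loss {entry['loss']:.4f}")
    elif 'eval_loss' in entry:
        print(f"step {entry['step']}: eval loss {entry['eval_loss']:.4f}")

Both kinds of entries are there, so the values are being logged; they just never reach the console.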