Trainer does not print the loss (train and eval) to the console

Hello,

I have a basic problem, but I can’t find a solution. I looked into some older threads suggesting it has something to do with the number of eval_steps and gradient accumulation, but that doesn’t seem to help here.

I want to see the train_loss and eval_loss every X steps.

Here’s my code:

import transformers
from transformers import Trainer, TrainingArguments

transformers.logging.set_verbosity_debug()

training_args = TrainingArguments(
    "temp",
    evaluation_strategy="steps",
    do_train=True,
    learning_rate=LEARNING_RATE,
    gradient_accumulation_steps=2,
    auto_find_batch_size=True,
    num_train_epochs=3,
    save_steps=100,
    eval_steps=100,
    lr_scheduler_type="cosine_with_restarts",
    save_total_limit=8,
    max_steps=10000,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    load_best_model_at_end=True,
    logging_strategy="steps",
    logging_dir="./logs",
    logging_steps=100,
    overwrite_output_dir=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=datasets["train"],
    eval_dataset=datasets["validation"],
    data_collator=data_collator,
    callbacks=[transformers.EarlyStoppingCallback(early_stopping_patience=10)],
)

trainer.train()

If I read log_history = trainer.state.log_history at the end of the run, the history shows the losses just fine. I don’t know why it isn’t printing the loss to the console. The only thing I see is when it goes through eval, but not the actual eval loss (or train loss).
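
As a workaround, I can at least print the losses from log_history myself after training. A minimal sketch (assuming the "loss", "eval_loss", and "step" keys I see in the log_history dicts):

# Each entry in trainer.state.log_history is a dict: "loss" shows up on
# logging steps, "eval_loss" on evaluation steps.
for entry in trainer.state.log_history:
    step = entry.get("step")
    if "loss" in entry:
        print(f"step {step}: train loss = {entry['loss']:.4f}")
    if "eval_loss" in entry:
        print(f"step {step}: eval loss = {entry['eval_loss']:.4f}")

But I’d still like the Trainer to print these during the run, not only after it finishes.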
