How do I get training and validation loss during fine-tuning?

Hi,

The Trainer API doesn't seem to report the training loss.
It only shows the validation loss.
Could someone explain why this is the case?

Here is my code snippet.

import torch
from transformers import TrainingArguments, Trainer, DataCollatorWithPadding

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = predictions[:, 0]
    # `metric` is assumed to be loaded earlier (e.g. with the evaluate library)
    return metric.compute(predictions=predictions, references=labels)

# Training arguments
args = TrainingArguments(
    output_dir="/Content/mod",
    evaluation_strategy="epoch",  # Can be "epoch" or "steps"
    learning_rate=2e-5,  # Per the original BERT paper
    per_device_train_batch_size=32,  # Per the original BERT paper
    per_device_eval_batch_size=32,
    num_train_epochs=3,  # Should be between 2 and 4 according to the paper
    weight_decay=0.01,
    # Note: prediction_loss_only=True makes evaluation skip compute_metrics
    prediction_loss_only=True,
)


# Initialize the data collator: this handles dynamic padding of tokens,
# which speeds up the training loop according to the Hugging Face documentation.
data_collator_ = DataCollatorWithPadding(tokenizer=tokenizer)

# The Trainer itself.
trainer = Trainer(
    model,
    args,
    train_dataset=tokenized_datasets_train,
    eval_dataset=tokenized_datasets_val,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    data_collator=data_collator_,
)

torch.cuda.empty_cache()  # Not strictly necessary, but clears cached GPU memory before training
trainer.train()

For example, if you use evaluation_strategy="steps" and eval_steps=2000 in TrainingArguments, you will get the training and validation loss every 2000 steps. If you want to do this at the epoch level, I think you need to set evaluation_strategy="epoch" and logging_strategy="epoch" in the TrainingArguments class.
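Here is a minimal sketch of that steps-based setup, reusing the model, tokenizer, datasets, and data collator from the snippet above (the 2000-step interval is just the example figure from this reply, not a recommended value):

from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="/Content/mod",
    evaluation_strategy="steps",  # Run evaluation every eval_steps
    eval_steps=2000,              # Validation loss every 2000 steps
    logging_strategy="steps",     # Log the training loss on the same schedule
    logging_steps=2000,
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=3,
    weight_decay=0.01,
)

trainer = Trainer(
    model,
    args,
    train_dataset=tokenized_datasets_train,
    eval_dataset=tokenized_datasets_val,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    data_collator=data_collator_,
)
trainer.train()

# Both losses also end up in the Trainer's log history, so you can
# collect them programmatically after training:
for entry in trainer.state.log_history:
    if "loss" in entry or "eval_loss" in entry:
        print(entry)

Note that I dropped prediction_loss_only=True here, since otherwise evaluation only computes the loss and your compute_metrics function is skipped.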


Thank you, that worked!
