Problem with EarlyStoppingCallback

I set the early stopping callback in my trainer as follows:

trainer = MyTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        compute_metrics=compute_metrics,
        callbacks=[EarlyStoppingCallback(3, 0.0)]
    )
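For reference, the two positional arguments map to the callback's keyword parameters, so the same call can be written more explicitly (same behavior, just spelled out):

callbacks=[EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=0.0)]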

the values for this callback in the TrainingArguments are as follows:

load_best_model_at_end=True, 
metric_for_best_model="eval_loss", 
greater_is_better=False
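
For context, these settings would look roughly like this if the TrainingArguments were built directly in Python (the output_dir here is a placeholder, not from the original post; evaluation_strategy="epoch" matches the shell script further down and is needed so the callback has a metric to check each epoch):

training_args = TrainingArguments(
    output_dir="./output",              # placeholder path
    evaluation_strategy="epoch",        # run an evaluation every epoch so the callback can check eval_loss
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)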

What I expect is that training will continue as long as the eval_loss metric keeps dropping, that it will stop only once the eval_loss has not improved for more than 3 epochs, and that the best model will then be loaded.
During the training I get these values for the eval_loss:

epoch1: 'eval_loss': 0.8832499384880066
epoch2: 'eval_loss': 0.6109879612922668
epoch3: 'eval_loss': 0.52149897813797
epoch4: 'eval_loss': 0.48024266958236694

Since the loss always drops, I would expect the training to continue. Instead, training stopped after 4 epochs, and for the final evaluation it loaded the model from the first epoch, where the eval_loss had the highest value, as you can see here:

01/26/2021 11:08:57 - INFO - __main__ -  ***** Eval results *****
01/26/2021 11:08:57 - INFO - __main__ -    eval_loss = 0.8832499384880066

Did I set some parameters incorrectly?
Thanks! :slight_smile:

EDIT: to clarify, I also printed the TrainerState values at the end of the training:

log_history=[
{'eval_loss': 0.837020993232727, 'eval_accuracy_score': 0.8039973127309372, 'eval_precision': 0.7904381747255738, 'eval_recall': 0.7808047316067748, 'eval_f1': 0.7855919213776935, 'eval_runtime': 8.375, 'eval_samples_per_second': 67.343, 'epoch': 1.0, 'step': 411}, {'loss': 1.5377, 'learning_rate': 4.6958980235865466e-05, 'epoch': 1.22, 'step': 500}, 
{'eval_loss': 0.6051444411277771, 'eval_accuracy_score': 0.8406953308700034, 'eval_precision': 0.8297104717236403, 'eval_recall': 0.8243570212384622, 'eval_f1': 0.8270250831610176, 'eval_runtime': 8.3919, 'eval_samples_per_second': 67.208, 'epoch': 2.0, 'step': 822}, {'loss': 0.6285, 'learning_rate': 4.3917595505563304e-05, 'epoch': 2.43, 'step': 1000}, 
{'eval_loss': 0.5184187889099121, 'eval_accuracy_score': 0.856567013772254, 'eval_precision': 0.8464932024849194, 'eval_recall': 0.8425486154673358, 'eval_f1': 0.8445163028833199, 'eval_runtime': 8.4159, 'eval_samples_per_second': 67.016, 'epoch': 3.0, 'step': 1233}, {'loss': 0.4561, 'learning_rate': 4.087621077526113e-05, 'epoch': 3.65, 'step': 1500}, 
{'eval_loss': 0.46523478627204895, 'eval_accuracy_score': 0.868743701713134, 'eval_precision': 0.8599369085173502, 'eval_recall': 0.8550049287570571, 'eval_f1': 0.8574638267277793, 'eval_runtime': 8.3682, 'eval_samples_per_second': 67.398, 'epoch': 4.0, 'step': 1644}, {'train_runtime': 1783.4323, 'train_samples_per_second': 4.609, 'epoch': 4.0, 'step': 1644}
], 
best_metric=0.837020993232727

As you can also see here, best_metric is the eval_loss value of the first epoch and not the lowest value across the epochs that ran (which are still few, because the value is always decreasing and the training should not even have stopped…).

I'm trying to reproduce your issue, but on my side, the best_metric is correct and decreasing. Could you check you are using the latest version of Transformers and post the way you are creating your TrainingArguments?

I'm using version 4.2.0 of Transformers.

For the TrainingArguments, I'm using run_ner.py as a starting script, where the arguments are parsed like this:

parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()

The TrainingArguments values that I modify are passed in through the .sh script like this:

export MAX_LENGTH=200
export BERT_MODEL=bert-base-uncased
export OUTPUT_DIR=transformers
export BATCH_SIZE=32
export NUM_EPOCHS=3
export SAVE_STEPS=500
export SEED=1

python3 run_ner.py \
--task_type POS \
--data_dir . \
--model_name_or_path $BERT_MODEL \
--output_dir $OUTPUT_DIR \
--max_seq_length  $MAX_LENGTH \
--num_train_epochs $NUM_EPOCHS \
--per_device_train_batch_size $BATCH_SIZE \
--save_steps $SAVE_STEPS \
--seed $SEED \
--load_best_model_at_end \
--evaluation_strategy epoch \
--metric_for_best_model eval_loss \
--greater_is_better False \
--disable_tqdm False \
--save_total_limit 2 \
--do_train \
--do_eval \
--do_predict

Ah yes, this comes from a bug in the argument parser that will be fixed by this PR. Basically greater_is_better is stored as a string and not a bool, so the tests using it don't give the right results.

Since you are using eval_loss, it will default to False if you don't say anything, so a workaround while you wait for the fix is to remove --greater_is_better False \ from your command.
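
To illustrate why the string breaks things (a minimal sketch, not the actual Trainer code): any non-empty string is truthy in Python, so a "False" coming from the command line flips the comparison direction.

import numpy as np

greater_is_better = "False"            # what the buggy argument parser stored: a string, not a bool
comparison = np.greater if greater_is_better else np.less
print(bool(greater_is_better))         # True -- any non-empty string is truthy in Python
print(comparison)                      # <ufunc 'greater'>, so a *larger* eval_loss looks "better"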

Ah, ok thanks a lot! I'll keep an eye on the PR.

So, vice versa, if the metric was eval_accuracy should I use --greater_is_better with the string true or with the boolean True? Or would it not work either way until the PR is approved?

It won't work until the PR is merged either. But it will also default to the right value, so you won't need to set it :wink:
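
Roughly, the defaulting rule is that greater_is_better is derived from the metric name when you leave it unset (a sketch of the idea, not the exact library source):

# paraphrased defaulting rule, applied only when greater_is_better is not set explicitly
metric_for_best_model = "eval_accuracy"                        # example metric
greater_is_better = metric_for_best_model not in ("loss", "eval_loss")
print(greater_is_better)   # True for eval_accuracy; the same rule gives False for eval_loss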

Perfect, thank you so much for the help and great work you are doing for the community! :grin:

Hello, I have a similar problem. I used metric_for_best_model='eval_f1' for the model, but EarlyStopping stops training even though the F1 score is increasing. I did not include greater_is_better in my training arguments. Should I include it or not?

training_args = TrainingArguments(
    f'./clickbait_identification/{dir_path}',
    evaluation_strategy="steps",
    eval_steps=1000,
    save_strategy='steps',
    save_steps=1000,
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=20,
    weight_decay=0.01,
    lr_scheduler_type='linear',
    load_best_model_at_end=True,
    metric_for_best_model='eval_f1',
    logging_strategy='epoch',
    group_by_length=True,
    seed=42
)

@Motahar, actually, your F1 was not increasing: since logging steps == 3000, it could not increase for 3 epochs. Hence, since you (probably) set early_stopping_patience=3, the training was interrupted.
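
To make the counting explicit, the patience logic boils down to a counter that is checked on every evaluation, roughly like this (a simplified paraphrase of EarlyStoppingCallback for illustration, not the exact source):

class PatienceSketch:
    """Simplified paraphrase of the EarlyStoppingCallback bookkeeping (illustration only)."""

    def __init__(self, patience=3, threshold=0.0, greater_is_better=True):
        self.patience = patience
        self.threshold = threshold
        self.greater_is_better = greater_is_better
        self.best = None
        self.counter = 0

    def should_stop(self, metric_value):
        # an evaluation counts as an improvement only if it beats the best value by more than the threshold
        if self.greater_is_better:
            improved = self.best is None or metric_value > self.best + self.threshold
        else:
            improved = self.best is None or metric_value < self.best - self.threshold
        if improved:
            self.best = metric_value
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience


# With evaluation every 3000 steps, "3 epochs" really means 3 evaluations in a row without improvement.
stopper = PatienceSketch(patience=3, greater_is_better=True)
for f1 in [0.80, 0.79, 0.79, 0.79]:   # hypothetical F1 values at successive evaluations
    print(stopper.should_stop(f1))    # False, False, False, True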

But at step 6000 the F1 score improved. It went down at steps 4000 and 5000 but increased at step 6000. It consecutively went down for 2 epochs (here I am assuming 2 epochs means 2 logging steps), but not 3.

Looking at EarlyStoppingCallback, I've found some quirks that might be a feature/bug where the patience only kicks in after the first save state is met, as documented in [Maybe Bug] When using EarlyStopping Callbacks with Seq2SeqTraininer, training didn't stop

Posting the comment here just in case anyone else finds this post and has run into similar quirks when using early stopping.

Hi,

I am facing a similar problem right now. Even though the eval_loss stops dropping, the trainer does not stop training. Here is the trainer:

trainer = Trainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    args=TrainingArguments(
        num_train_epochs=5,
        per_device_train_batch_size=8,
        gradient_accumulation_steps=32,
        warmup_steps=2,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=1,
        output_dir="outputs",
        optim="paged_adamw_8bit",
        load_best_model_at_end=True,
        evaluation_strategy='steps',
        metric_for_best_model='eval_loss',
        save_strategy='steps',
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    callbacks=[EarlyStoppingCallback(early_stopping_patience=1, early_stopping_threshold=0.0)]
)

And here is the loss values during some part of the training:

{'loss': 0.0488, 'learning_rate': 9.150326797385621e-06, 'epoch': 4.74}
{'eval_loss': 0.2025810033082962, 'eval_runtime': 1.6612, 'eval_samples_per_second': 8.428, 'eval_steps_per_second': 1.204, 'epoch': 4.74}

{'loss': 0.0564, 'learning_rate': 7.84313725490196e-06, 'epoch': 4.77}
{'eval_loss': 0.20260831713676453, 'eval_runtime': 1.6612, 'eval_samples_per_second': 8.427, 'eval_steps_per_second': 1.204, 'epoch': 4.77}

{'loss': 0.052, 'learning_rate': 6.535947712418301e-06, 'epoch': 4.8}
{'eval_loss': 0.20264378190040588, 'eval_runtime': 1.6613, 'eval_samples_per_second': 8.427, 'eval_steps_per_second': 1.204, 'epoch': 4.8}

I was expecting the trainer to stop when 'eval_loss': 0.20260831713676453.

Could you tell me what I am missing?

Thank you

I am facing similar issues where my training doesn't stop after early_stopping_patience number of steps.

A bit of a late post, but I faced this error and found a fix. The documentation states that you have to set save_steps to the same value as eval_steps in your TrainingArguments. I set my model to save at the number of steps corresponding to the end of every epoch with save_steps=len(dataset)//batch_size, which gives the number of steps per epoch. This number has to be the same as eval_steps, which is the number of steps at which an evaluation on the evaluation dataset happens. That way the early stopping kicks in at the right time. Otherwise the model will keep running until it reaches the next specified save_steps point, and it continues running if no evaluation takes place at that point. Docs link here: Callbacks
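
As a concrete illustration of that fix (the dataset size, batch size, and output path below are placeholders, not values from the posts above), the point is to make the save and eval schedules line up:

from transformers import TrainingArguments

dataset_size = 10_000                          # placeholder for len(dataset)
batch_size = 32                                # placeholder batch size
steps_per_epoch = dataset_size // batch_size   # one evaluation/save per epoch

training_args = TrainingArguments(
    output_dir="./output",                     # placeholder path
    evaluation_strategy="steps",
    eval_steps=steps_per_epoch,                # evaluate at the end of each epoch...
    save_strategy="steps",
    save_steps=steps_per_epoch,                # ...and save at exactly the same steps
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
)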
