Trainer saving checkpoints even when `save_strategy` is set to `"no"`

When I run `trainer.hyperparameter_search()` with the training arguments `evaluation_strategy="epoch"` and `save_strategy="no"`, no checkpoints are saved, as expected. But when I change the evaluation strategy to `"steps"`, a `ModelCheckpoint` callback gets invoked and saves a checkpoint even with `save_strategy="no"`.

I may be confused about how evaluation works, but why would evaluating every N steps need a saved checkpoint when evaluating per epoch doesn't?

Update: I was being silly. My `eval_steps` value was larger than the total number of training steps possible. :confused:
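For anyone else running into this, here is a tiny plain-Python sanity check (the helper name and the numbers are mine, not part of `transformers`) showing why an `eval_steps` value larger than the total step count means a steps-based evaluation never fires:

```python
def eval_points(total_steps: int, eval_steps: int) -> list[int]:
    """Steps at which a steps-based evaluation would trigger."""
    return list(range(eval_steps, total_steps + 1, eval_steps))

# eval_steps smaller than the run length: evaluations happen.
print(eval_points(total_steps=100, eval_steps=40))   # [40, 80]

# eval_steps larger than the run length: no evaluation ever fires,
# which is the situation described above.
print(eval_points(total_steps=100, eval_steps=500))  # []
```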