When I run `trainer.hyperparameter_search()` with the training arguments `evaluation_strategy="epoch"` and `save_strategy="no"`, no checkpoints are saved, as expected. But when I change the evaluation strategy to `"steps"`, a model-checkpoint callback is invoked and saves a checkpoint even though `save_strategy="no"`.
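For reference, here is a minimal sketch of the two argument combinations I mean (the `output_dir` and `eval_steps` values are just placeholders for illustration):

```python
from transformers import TrainingArguments

# Combination 1: no checkpoints are written, as expected.
args_epoch = TrainingArguments(
    output_dir="hp_search",       # placeholder path
    evaluation_strategy="epoch",  # evaluate once per epoch
    save_strategy="no",           # no checkpoints expected
)

# Combination 2: only the evaluation strategy changes,
# yet a checkpoint gets saved during the search.
args_steps = TrainingArguments(
    output_dir="hp_search",       # placeholder path
    evaluation_strategy="steps",  # evaluate every `eval_steps` steps
    eval_steps=500,               # illustrative value
    save_strategy="no",           # still "no", but a checkpoint appears
)
```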
I'm not sure if I'm just confused about how evaluation works, but why does evaluating by steps require a saved checkpoint when evaluating by epochs does not?