Disable checkpointing in Trainer

Hi folks,

When I'm running a lot of quick-and-dirty experiments, I don't want or need to save any models, since I'm usually relying on the metrics from the Trainer to guide my next decision.

One thing that slows down my iteration is that the Trainer saves a checkpoint every so many steps, as defined by the save_steps parameter in TrainingArguments.

To disable checkpointing, what I currently do is set save_steps to some very large number, but is there a more elegant way? For example, is there a Trainer argument I can set that disables checkpointing altogether?


There is none for now. We could definitely add a save_strategy argument, like the existing evaluation_strategy, that takes the values no/steps/epoch.
If you want to tackle this in a PR that would be a welcome contribution!


Thanks for the info! Sure, I'm happy to tackle this in a PR; I'll report back when it's ready.

Has this been solved in the meantime? Is there now a way to disable checkpointing, or do we still have to set save_steps to a large number?

You can use save_strategy="no".
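For anyone landing here later, a minimal sketch of what that looks like (assuming a transformers version where save_strategy has landed; output_dir and the evaluation settings are just illustrative placeholders):

```python
from transformers import TrainingArguments

# save_strategy="no" disables checkpoint saving entirely:
# no model weights are written to disk during training.
args = TrainingArguments(
    output_dir="tmp_trainer",     # still required, but nothing gets checkpointed
    save_strategy="no",           # accepted values: "no" | "steps" | "epoch"
    evaluation_strategy="epoch",  # metrics are still computed and logged as usual
)
```

You then pass these arguments to the Trainer as usual; since save_strategy is "no", the save_steps value is simply ignored.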
