Hi folks,
When I'm running a lot of quick-and-dirty experiments, I don't want or need to save any models, since I'm usually relying on the metrics from the Trainer to guide my next decision.
One thing that slows down my iteration speed is that the Trainer saves a checkpoint after some number of steps, defined by the save_steps parameter in TrainingArguments.
To disable checkpointing, what I currently do is set save_steps to some large number, but is there a more elegant way to do this? For example, is there a Trainer argument I can set that disables checkpointing altogether?
Thanks!