Impact of resuming from a checkpoint vs. fine-tuning/training from the start

I have a question about the impact that resuming fine-tuning/training from a checkpoint might have.

When training resumes from a checkpoint, does anything get reset, such as the optimizer state or some other parameter, compared to a run that fine-tunes continuously without interruption? In other words, is resuming equivalent to an uninterrupted run, or does it change the model updates in some way?

I’m using the Trainer from transformers and setting the save_steps argument so that checkpoints are saved at fixed intervals.
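
For reference, here is roughly how my run is set up (`model`, `train_dataset`, and the `output_dir` value are placeholders, not my exact configuration):

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",          # checkpoints are written here as checkpoint-<step>/
    num_train_epochs=3,
    per_device_train_batch_size=8,
    save_strategy="steps",
    save_steps=500,                  # save a checkpoint every 500 optimizer steps
    logging_steps=100,
)

trainer = Trainer(
    model=model,                     # placeholder: my pretrained model
    args=training_args,
    train_dataset=train_dataset,     # placeholder: my tokenized dataset
)

# Fresh run:
# trainer.train()

# Resumed run: continues from the latest checkpoint found in output_dir.
# As far as I can tell, the checkpoint directory also contains optimizer,
# scheduler, and RNG state files, but I'm unsure whether all of them are
# actually restored or whether anything is reset on resume.
trainer.train(resume_from_checkpoint=True)
```
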