Learning rate and checkpoints

I’m using the run_translation script to train T5-base, saving a checkpoint at the end of each epoch. However, when I resume training from a checkpoint, the learning rate does not continue from the value it had at the end of the previous epoch; instead it restarts from a higher value, which leads to an initially higher training loss. Why does this happen? Am I doing something wrong?
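To illustrate what I mean, here is a minimal sketch (assuming the default linear-decay schedule with no warmup; the numbers are made up) of why the learning rate jumps back up when the scheduler’s step counter is not restored from the checkpoint:

```python
def lr_at(step, base_lr=5e-5, total_steps=100):
    # Linear decay from base_lr to 0 over total_steps
    # (illustrative stand-in for the Trainer's default schedule).
    return base_lr * max(0.0, 1.0 - step / total_steps)

# First run: train 50 steps, then "save a checkpoint" of the step count.
checkpoint = {"global_step": 50}

# Resuming WITHOUT restoring the step counter: the schedule restarts
# from the top, so the LR jumps back up to base_lr.
lr_restarted = lr_at(0)

# Resuming WITH the saved step: the LR continues where it left off.
lr_resumed = lr_at(checkpoint["global_step"])

print(lr_restarted, lr_resumed)
```

For what it’s worth, when resuming I believe the optimizer and scheduler state saved in the checkpoint directory have to be loaded too (e.g. by passing `resume_from_checkpoint` when launching training), otherwise only the model weights are restored and the schedule starts over, as above.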