Does "resume_from_checkpoint" work?

From the documentation it seems that resume_from_checkpoint will continue training the model from the last checkpoint. But when I call trainer.train(), it instead seems to delete the last checkpoint and start saving a new series:

Saving model checkpoint to ./results_distilbert-base-uncased/checkpoint-500
...
Deleting older checkpoint [results_distilbert-base-uncased/checkpoint-5000] due to args.save_total_limit
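My understanding is that resuming is requested explicitly, e.g. trainer.train(resume_from_checkpoint=True), which should pick up the newest checkpoint-&lt;step&gt; folder in the output directory. A minimal pure-Python sketch of that lookup (my own illustration, not the actual Transformers implementation):

```python
import os
import re
import tempfile

def get_last_checkpoint(folder):
    """Return the sub-folder "checkpoint-<step>" with the highest step
    number, or None if no checkpoint exists (illustrative sketch)."""
    pattern = re.compile(r"^checkpoint-(\d+)$")
    steps = []
    for name in os.listdir(folder):
        match = pattern.match(name)
        if match and os.path.isdir(os.path.join(folder, name)):
            steps.append((int(match.group(1)), name))
    if not steps:
        return None
    return os.path.join(folder, max(steps)[1])

# Demo: a results dir holding two checkpoints from an earlier run.
with tempfile.TemporaryDirectory() as results_dir:
    for step in (500, 5000):
        os.makedirs(os.path.join(results_dir, f"checkpoint-{step}"))
    last = get_last_checkpoint(results_dir)
    print(os.path.basename(last))  # checkpoint-5000
```

So if resuming worked, I would expect training to continue from step 5000, not from step 0.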

Does it really continue training from the last checkpoint (i.e., step 5000) and just restart the checkpoint numbering at 0 (saving the first new checkpoint after 500 steps as "checkpoint-500"), or does it not resume training at all?
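To pin down what I'm seeing: the deletion message comes from save_total_limit, which drops older checkpoints once the limit is exceeded. A toy sketch of that rotation, assuming oldest-first eviction by save order (my assumption for illustration, not the library's actual code):

```python
def rotate_checkpoints(on_disk, new_checkpoint, save_total_limit):
    """Toy model of checkpoint rotation: append the newly saved
    checkpoint, then delete the oldest ones beyond the limit.
    (Assumption for illustration: rotation is by save order.)"""
    all_ckpts = list(on_disk) + [new_checkpoint]
    deleted = all_ckpts[:-save_total_limit]
    kept = all_ckpts[-save_total_limit:]
    return kept, deleted

# checkpoint-5000 is left over from the previous run; the new run saves
# checkpoint-500, and a limit of 1 evicts the old checkpoint.
kept, deleted = rotate_checkpoints(["checkpoint-5000"], "checkpoint-500",
                                   save_total_limit=1)
print(kept, deleted)  # ['checkpoint-500'] ['checkpoint-5000']
```

That matches my log, but it doesn't tell me whether the optimizer state and global step were actually restored from checkpoint-5000 before the new run began saving.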