When using `auto_find_batch_size` and a new batch size is chosen, the training output seems to indicate that training is continuing from the examples seen before. Is that not the case?

I’m looking at the `find_executable_batch_size` code in accelerate, which is what the HF `Trainer` uses when `auto_find_batch_size=True`.

It seems that if a batch size fails with an out-of-memory error, the decorator halves the batch size and calls `_inner_training_loop` again from scratch with the new value, and thus a new dataloader is instantiated.
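
Here is a simplified sketch of the retry pattern as I understand it (my own reconstruction, not the actual accelerate implementation; the real `find_executable_batch_size` also inspects the error type more carefully):

```python
import functools
import gc

import torch


def find_executable_batch_size_sketch(function, starting_batch_size=128):
    """Simplified stand-in for accelerate's find_executable_batch_size."""
    batch_size = starting_batch_size

    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        nonlocal batch_size
        while True:
            if batch_size == 0:
                raise RuntimeError("No executable batch size found.")
            try:
                # Each retry re-enters the wrapped function from the top;
                # nothing about the previous attempt's progress is passed in.
                return function(batch_size, *args, **kwargs)
            except RuntimeError as e:
                if "out of memory" in str(e).lower():
                    gc.collect()
                    torch.cuda.empty_cache()
                    batch_size //= 2  # halve and restart the whole function
                else:
                    raise

    return wrapper
```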

From the dataloader instantiation, it seems that nothing is passed in to indicate that training should resume from the samples already consumed in the failed attempt.
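
As far as I can tell, the construction on each retry reduces to something like the following (`build_train_dataloader` is a hypothetical stand-in for `Trainer.get_train_dataloader`, not the real method body); note that nothing here encodes how many samples the failed attempt already consumed:

```python
from torch.utils.data import DataLoader, RandomSampler


def build_train_dataloader(train_dataset, batch_size, collate_fn=None):
    # A brand-new sampler is created on every call, so iteration order
    # restarts from the beginning of the dataset (subject to seeding).
    # There is no skip/offset argument for samples already seen.
    sampler = RandomSampler(train_dataset)
    return DataLoader(
        train_dataset,
        batch_size=batch_size,
        sampler=sampler,
        collate_fn=collate_fn,
    )
```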

But the training output seems to indicate that it is continuing from where it left off.
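
One way I thought of to check (just a debugging idea, not something from the Trainer docs) would be to wrap the dataset so every `__getitem__` records the index it served, then see whether indices from the failed attempt repeat after the batch size drops:

```python
from torch.utils.data import Dataset


class IndexLoggingDataset(Dataset):
    """Pass-through wrapper that records every index served."""

    def __init__(self, dataset):
        self.dataset = dataset
        self.served_indices = []  # accumulates across failed + retried runs

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        self.served_indices.append(idx)
        return self.dataset[idx]


# Usage (hypothetical): pass the wrapped dataset to the Trainer, e.g.
#   trainer = Trainer(..., train_dataset=IndexLoggingDataset(train_dataset))
# If indices served during the failed attempt show up again right after the
# batch size drops, the retried loop really did restart from the beginning.
```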