I am using the Seq2SeqTrainer and pass a datasets.arrow_dataset.Dataset as train_dataset when instantiating the object. Is the dataset shuffled per epoch by default? If not, how can I make it shuffled?
The Seq2SeqTrainer (as well as the standard Trainer) uses a PyTorch Sampler to shuffle the dataset. It reshuffles the dataset at every epoch, and it can also group samples of roughly the same length together (when group_by_length=True). You can find the Sampler definition here.
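To make this concrete, here is a minimal plain-PyTorch sketch (a toy stand-in, not the Trainer's actual code): a DataLoader built on a RandomSampler draws a fresh permutation every time it is iterated, which happens once per epoch inside the training loop.

import torch
from torch.utils.data import DataLoader, RandomSampler

# Toy stand-in for the tokenized train_dataset.
data = torch.arange(10)

# RandomSampler yields a new permutation each time the DataLoader is iterated,
# i.e. once per epoch in the training loop.
loader = DataLoader(data, batch_size=5, sampler=RandomSampler(data))

for epoch in range(3):
    order = [idx.item() for batch in loader for idx in batch]
    print(f"epoch {epoch}: {order}")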
Hi, is there a parameter that controls whether or not the data gets reshuffled before each epoch, and whether or not it is grouped by length? Thanks!
Additionally, if training is aborted and I restart from a checkpoint, does the checkpoint store the shuffling order for the current epoch and which data points have not yet been seen in this epoch? Thanks!
Hi Sgugger, why is it a bad practice to reshuffle the dataset at every epoch?
I thought reshuffling the dataset at every epoch can reduce overfitting and improve the generalization performance of the model. By shuffling the dataset, we ensure that the model is exposed to a different sequence of samples in each epoch, which helps prevent it from memorizing the order of the training data and overfitting to specific patterns.
Shuffling the dataset also helps to improve the diversity of the mini-batches during training, which can improve the robustness of the model and make it more resistant to outliers or noise in the data.
But why? I thought this would print the dataset order for each epoch, but it prints the same samples every time (the shuffled dataset from the first epoch). What did I do wrong?
from transformers import TrainerCallback

class LogFirstSamplesCallback(TrainerCallback):
    def on_epoch_begin(self, args, state, control, **kwargs):
        # The Trainer passes its train dataloader to callbacks via kwargs.
        train_dataloader = kwargs.get("train_dataloader")
        # This grabs the underlying dataset object, not the sampler's order.
        dataset = train_dataloader.dataset
        print(f"\n🌀 Epoch {int(state.epoch) + 1} starts – showing first 2 samples:")
        for i in range(2):
            # Index the dataset directly at positions 0 and 1 and decode them.
            sample = dataset[i]
            text = tokenizer.decode(sample['input_ids'], skip_special_tokens=True)
            print(f"\nSample {i + 1}:\n{text}\n")

trainer.add_callback(LogFirstSamplesCallback())
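For comparison, here is a small plain-PyTorch sketch of the distinction at play (again a toy example, not the Trainer's internals): indexing dataloader.dataset directly bypasses the sampler and always returns the same rows, while the per-epoch shuffled order only shows up when iterating the DataLoader itself.

import torch
from torch.utils.data import DataLoader, RandomSampler

data = torch.arange(6)
loader = DataLoader(data, batch_size=6, sampler=RandomSampler(data))

for epoch in range(2):
    # Direct indexing bypasses the sampler, so these never change.
    print("dataset[0], dataset[1]:", loader.dataset[0].item(), loader.dataset[1].item())
    # Iterating the DataLoader goes through a fresh permutation from the sampler.
    print(f"epoch {epoch} order:", next(iter(loader)).tolist())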
At the same time, I noticed that when I resume from my previously trained model and start a new round of training, the results are actually better than simply setting a higher number of epochs or applying various warmup strategies.
I'm not sure if this is because the new run starts with a larger learning rate, allowing it to find a lower region of the loss surface again, or because the data is reshuffled each time training is restarted.