Hello!
I am using gradient accumulation to simulate bigger batches when fine-tuning. However, I remember seeing some notebooks in the documentation where they make N copies of the data, where N is the number of gradient accumulation steps. I do not understand why this should be done. Is this good practice? Why?
Thank you
Could you link to the exact notebook where you have seen this?
Hey @BramVanroy, thank you for your reply. I have found the notebook, sorry for not being very precise. It’s Reformer - Pushing the Limits of Language Modeling. Around box 7 it says:
We then expand the same sample to 8 training samples so that we can accumulate gradients during training.
In the code:
# duplicate data 8 times to have 8 examples in dataset
for key in input_ids_dict.keys():
input_ids_dict[key] = [8 * [x] for x in input_ids_dict[key]][0]
And gradient_accumulation_steps is actually 4, not 8 as I would expect, with a batch size of 1.
Hm, good question. Tagging @patrickvonplaten who created that notebook.
Yes, it would be helpful to have an update from you, @patrickvonplaten.
Ideally, gradient accumulation has nothing to do with the data. It basically accumulates the gradients of a few mini-batches in memory and then does a single gradient update, which has the effect of a larger batch size.
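Roughly, this is what it looks like in a plain PyTorch training loop (just a minimal sketch with a toy model and random data standing in for the real setup; accum_steps plays the role of gradient_accumulation_steps):
import torch
from torch import nn
model = nn.Linear(10, 2)                        # toy model as a stand-in
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
accum_steps = 4                                 # effective batch size = 4 * per-step batch size
optimizer.zero_grad()
for step in range(8):                           # pretend dataloader of 8 mini-batches
    inputs = torch.randn(1, 10)                 # batch size 1, as in the notebook
    labels = torch.randint(0, 2, (1,))
    loss = loss_fn(model(inputs), labels)
    (loss / accum_steps).backward()             # scale so gradients average over accum_steps
    if (step + 1) % accum_steps == 0:
        optimizer.step()                        # one update after every 4 mini-batches
        optimizer.zero_grad()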
Yeah, this might have been a bit imprecise in the notebook. The reason I'm expanding the training data from 1 to 8 samples is a super edge case: since Reformer processes the whole train dataset in 1 batch, there is only one data sample in the whole dataset. If one then uses gradient accumulation (which, as pointed out correctly, has nothing to do with data replication), there is a bug when the dataset is of size 1, because the training script rightfully expects the dataset to have more than 1 training sample when gradient accumulation is used. So my solution of expanding the dataset is more of a hack than the recommended way of doing it (actually, one should never copy samples in the dataset).
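For completeness, with a normal dataset (more than one sample) the recommended way is to just set gradient_accumulation_steps in the TrainingArguments and not touch the data at all, e.g. (the values and output directory here are only illustrative):
from transformers import TrainingArguments
# gradient accumulation is purely a Trainer setting; the dataset stays as-is
training_args = TrainingArguments(
    output_dir="./reformer-demo",       # hypothetical output directory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,      # effective batch size = 1 * 4
    num_train_epochs=3,
)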
I doubt there is any real application of having a dataset that fits in a single batch; this notebook was more of a showcase that Reformer can process the whole dataset in 1 batch, so it's not super relevant for real scenarios.
This is clear now, thank you!