<torch.utils.data.dataloader.DataLoader object at 0x7f987fbc4e50> <utils.imagenet_dataloader.RASampler object at 0x7f987fbc4e20>
<accelerate.data_loader.DataLoaderShard object at 0x7f98518b6da0> <torch.utils.data.sampler.SequentialSampler object at 0x7f98518b6b60>
This means that my RASampler got replaced by a SequentialSampler.
Is this normal behaviour? Since it seems I can't manually restore my sampler afterwards, this is quite a problem.
Could you tell me how to solve it?
I wanted to use a custom sampler with my dataloader. Will the sampler's behaviour remain the same after passing it through accelerator.prepare(), or will it be changed to a SequentialSampler()?
As mentioned, the custom sampler is still used. The new sampler simply distributes each batch across your GPUs, so the flow is: old sampler generates the indices → new sampler dispatches them to the processes.
To test this, you can include a print statement in your custom sampler and iterate over the dataloader after prepare().
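A minimal, framework-free sketch of that idea (the class names here are stand-ins, and accelerate's real dispatching machinery is more involved): the wrapper that replaces the visible sampler still consumes the original sampler internally, so a print inside the original sampler fires when you iterate the prepared loader, and each process only sees its slice of every batch.

```python
class NoisySampler:
    """Stand-in for a custom sampler (e.g. RASampler) with a print
    that proves the sampler is still being used."""
    def __init__(self, indices):
        self.indices = list(indices)

    def __iter__(self):
        print("NoisySampler.__iter__ called")
        return iter(self.indices)


class DispatchingWrapper:
    """Stand-in for the new sampler: pulls whole batches from the
    inner (custom) sampler, then hands each process its slice."""
    def __init__(self, sampler, batch_size, num_processes, process_index):
        self.sampler = sampler
        self.batch_size = batch_size
        self.num_processes = num_processes
        self.process_index = process_index

    def __iter__(self):
        batch = []
        for idx in self.sampler:  # the inner custom sampler drives the order
            batch.append(idx)
            if len(batch) == self.batch_size * self.num_processes:
                lo = self.process_index * self.batch_size
                yield batch[lo:lo + self.batch_size]
                batch = []


# "prepare" replaces the visible sampler but keeps the old one inside:
inner = NoisySampler(range(8))
proc0 = DispatchingWrapper(inner, batch_size=2, num_processes=2, process_index=0)
proc1 = DispatchingWrapper(inner, batch_size=2, num_processes=2, process_index=1)
print(list(proc0))  # [[0, 1], [4, 5]]
print(list(proc1))  # [[2, 3], [6, 7]]
```

Iterating either wrapper prints the message from NoisySampler, which is exactly the test suggested above.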
Hi! Is there a way to get the state of the custom sampler after the .prepare() method has been used?
I would like to save the sampler's state, but I am not sure whether that is possible.
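One possible workaround, sketched below under the assumption that you keep your own reference to the sampler object you created: since the prepared loader consumes your sampler by reference, you can give the sampler its own (hypothetical) state_dict/load_state_dict methods and save that state through your original variable. These method names are an illustrative convention, not part of the accelerate API.

```python
import random


class StatefulSampler:
    """Illustrative sampler whose state (seed + epoch) fully determines
    its ordering, so saving that state is enough to restore it.
    state_dict/load_state_dict are this sketch's own convention."""
    def __init__(self, num_samples, seed=0):
        self.num_samples = num_samples
        self.seed = seed
        self.epoch = 0

    def set_epoch(self, epoch):
        self.epoch = epoch

    def __iter__(self):
        # Ordering depends only on (seed, epoch), so it is reproducible.
        rng = random.Random(self.seed + self.epoch)
        order = list(range(self.num_samples))
        rng.shuffle(order)
        return iter(order)

    def state_dict(self):
        return {"seed": self.seed, "epoch": self.epoch}

    def load_state_dict(self, state):
        self.seed = state["seed"]
        self.epoch = state["epoch"]


# Keep your own handle on the sampler you passed to the DataLoader;
# save and restore through that handle, not through the prepared loader.
sampler = StatefulSampler(10, seed=42)
sampler.set_epoch(3)
saved = sampler.state_dict()

restored = StatefulSampler(10)
restored.load_state_dict(saved)
assert list(restored) == list(sampler)  # same ordering after restore
```

The design choice here is to make the sampler's ordering a pure function of a small serializable state, so losing direct access to the sampler inside the prepared dataloader does not matter.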