`torch.distributed.all_gather_object` raises an error when using 1 GPU but works fine with multiple GPUs

`accelerator.gather()` is guaranteed to work even on a single node/GPU, since it handles the non-distributed case internally — see the sketch below.
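
For reference, a minimal sketch (untested) of the suggested workaround. The tensor `preds` is just a placeholder for whatever per-process data you want to collect; `torch.distributed.all_gather_object` needs an initialized process group (e.g. via `torchrun`), which is why a plain single-GPU run fails, whereas `accelerator.gather()` works in both cases:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Placeholder per-process tensor, e.g. a batch of predictions.
preds = torch.arange(4, device=accelerator.device) + accelerator.process_index * 4

# Works with 1 GPU or many: with a single process it simply returns `preds`,
# with N processes it returns the concatenation across all of them.
all_preds = accelerator.gather(preds)

if accelerator.is_main_process:
    print(all_preds)
```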
