Hugging Face Forums
Using `torch.distributed.all_gather_object` returns error when using 1 GPU but works fine for multiple GPUs
🤗Accelerate
muellerzr
July 5, 2023, 2:17am
3
`accelerator.gather()` is guaranteed to always work, even on a single node.