Hugging Face Forums
Proper way to gather output from accelerate multi-gpu inference
Beginners
xxhasacat
November 7, 2023, 2:53pm
I also ran into this issue. Could anyone share a working solution?
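For context on what the question is asking: with Accelerate, each process runs inference on its own shard of the data, and to divide the data evenly the sampler pads the last shard with duplicated samples; after calling `accelerator.gather_for_metrics(...)` those duplicates are dropped so the gathered result matches the original dataset. Below is a plain-Python sketch of that pad/gather/truncate pattern, with no GPUs required. The function names `shard` and `gather_for_metrics` here are illustrative stand-ins, not the Accelerate API, and the sharding is contiguous for simplicity (the real sampler interleaves batches across processes):

```python
def shard(dataset, num_procs):
    """Split a dataset across processes, padding the tail by repeating
    the last sample so every shard has equal length (mirroring what
    Accelerate's distributed sampler does to keep processes in sync)."""
    per_proc = -(-len(dataset) // num_procs)  # ceiling division
    padded = dataset + [dataset[-1]] * (per_proc * num_procs - len(dataset))
    return [padded[i * per_proc:(i + 1) * per_proc] for i in range(num_procs)]

def gather_for_metrics(shard_outputs, dataset_len):
    """Concatenate per-process outputs in rank order, then drop the
    padded duplicates so the result lines up with the dataset."""
    gathered = [x for rank_out in shard_outputs for x in rank_out]
    return gathered[:dataset_len]

data = list(range(10))                           # 10 samples, 4 "GPUs"
shards = shard(data, 4)                          # 3 samples each, 2 are padding
outputs = [[x * x for x in s] for s in shards]   # per-process "inference"
result = gather_for_metrics(outputs, len(data))
assert result == [x * x for x in data]           # padding dropped, order restored
```

In real Accelerate code the equivalent is: prepare the model and dataloader with `accelerator.prepare(...)`, run inference per process, then call `accelerator.gather_for_metrics(predictions)` on the main process-visible tensors; plain `accelerator.gather(...)` will instead keep the padded duplicates, which is the usual source of mismatched output lengths.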