Hugging Face Forums
LM example run_clm.py isn't distributing data across multiple GPUs as expected
🤗Transformers
brando
August 17, 2022, 3:03pm
6
Does this solve your question? These earlier threads cover the same setup:
Using Transformers with DistributedDataParallel — any examples?
How to run an end to end example of distributed data parallel with hugging face's trainer api (ideally on a single node multiple gpus)?
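For anyone landing here first, a minimal sketch of the idea those threads walk through: the Trainer enables DistributedDataParallel on its own when the script is launched with torchrun, so each GPU process trains on its own shard of the data. The model and dataset choices below are placeholder assumptions for illustration, not taken from the original thread.

```python
# Minimal sketch (not the full run_clm.py). Launch on a single node with,
# say, 4 GPUs via:
#
#   torchrun --nproc_per_node=4 this_script.py
#
# torchrun sets WORLD_SIZE/RANK/LOCAL_RANK; the Trainer reads them, wraps
# the model in DistributedDataParallel, and gives each process a distinct
# shard of the training data.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = raw.map(tokenize, batched=True, remove_columns=["text"])
train_ds = train_ds.filter(lambda ex: len(ex["input_ids"]) > 0)  # drop blanks

args = TrainingArguments(
    output_dir="clm-ddp",
    per_device_train_batch_size=8,  # per GPU; effective batch = 8 * world_size
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # under torchrun, each process trains on its own data shard
```

The same applies to run_clm.py itself: launching it with `torchrun --nproc_per_node=<num_gpus>` instead of plain `python` is what switches it from single-GPU training to DDP across all the GPUs on the node.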