Hugging Face Forums
LM example run_clm.py isn't distributing data across multiple GPUs as expected
🤗Transformers
brando
August 17, 2022, 3:03pm
6
Does this answer your question:
Using Transformers with DistributedDataParallel — any examples?
How to run an end-to-end example of distributed data parallel with Hugging Face's Trainer API (ideally on a single node with multiple GPUs)?
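In case it helps, here is a minimal sketch of the pattern that thread describes: an ordinary Trainer script launched with torchrun, so each GPU gets its own process and its own shard of the data. The model name, dataset, and hyperparameters below are placeholders for illustration, not anything specific to run_clm.py.

```python
# Minimal sketch: Trainer sets up DDP automatically when the script is
# launched with torchrun, e.g. on a single node with 2 GPUs:
#   torchrun --nproc_per_node=2 train_clm.py
# Model, dataset, and hyperparameters here are illustrative placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # any causal LM checkpoint works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Small slice of a public dataset just to have something to train on.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="clm-ddp-out",
    per_device_train_batch_size=4,  # per GPU; effective batch = 4 * n_gpus
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Launching the same script with plain `python` on a multi-GPU machine keeps everything in a single process (Trainer may then fall back to naive DataParallel rather than DDP), which is often why the data doesn't appear to be distributed across GPUs as expected.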