| Topic | Replies | Views | Activity |
| --- | ---: | ---: | --- |
| I cannot find the code where the Transformers Trainer's `model_wrapped` is wrapped by DeepSpeed. I can find the theory that `model_wrapped` is DDP(DeepSpeed(transformers model)), but I only find the code where the model is wrapped by DDP. Where is the DeepSpeed wrapping? (see the sketch after this table) | 1 | 131 | May 1, 2024 |
| Model Parallelism | 0 | 181 | April 21, 2024 |
| What should I do if I want to use a model from DeepSpeed | 5 | 1616 | April 6, 2024 |
| [Maybe Bug] When using EarlyStopping callbacks with Seq2SeqTrainer, training didn't stop | 3 | 1485 | April 4, 2024 |
| ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error | 0 | 1120 | March 30, 2024 |
| DeepSpeed ZeRO-2 CPU offloading: process killed (-9) error | 1 | 1690 | March 17, 2024 |
| Conceptual question: early loading of the model defeats the purpose of DeepSpeed! | 0 | 157 | March 14, 2024 |
| Struggling to fine-tune flan-t5-xxl using DeepSpeed | 3 | 837 | March 12, 2024 |
| DeepSpeed inference stage 3 + quantization | 0 | 929 | March 8, 2024 |
| Saving checkpoints is too slow with DeepSpeed | 5 | 2638 | March 6, 2024 |
| DeepSpeed Trainer and custom loss weights | 1 | 547 | February 28, 2024 |
| How can I use the Inference API with my model? | 0 | 146 | February 24, 2024 |
| Fine-tuning an LLM with DeepSpeed | 2 | 5050 | February 22, 2024 |
| DeepSpeed integration for HuggingFace Seq2SeqTrainingArguments | 0 | 1419 | February 22, 2024 |
| It says that `bfloat16.enabled` without `auto` needs to be specified when training T5; is anyone aware of how to solve that? | 0 | 250 | February 20, 2024 |
| Exact difference between Transformers' and Accelerate's DeepSpeed integrations? | 5 | 763 | February 13, 2024 |
| How to use the GPU when using transformers.AutoModel | 0 | 1585 | February 3, 2024 |
| Multi-GPU training: model parallelism | 1 | 1836 | February 2, 2024 |
| More processes than GPUs with the DeepSpeed launcher | 0 | 223 | January 25, 2024 |
| Overriding the Trainer's save_model method gives an unexpected pytorch_model.bin file | 0 | 383 | January 8, 2024 |
| Model (pipeline) parallelism on a SLURM cluster | 0 | 239 | January 6, 2024 |
| Mixtral: bad FP16 performance | 0 | 511 | January 3, 2024 |
| DeepSpeed script launcher vs. Accelerate script launcher for TRL | 0 | 356 | December 25, 2023 |
| Best practices for running DeepSpeed | 2 | 1536 | December 25, 2023 |
| Inference time increases when using multiple GPUs | 1 | 879 | November 28, 2023 |
| resume_from_checkpoint does not configure the learning rate scheduler correctly | 3 | 881 | November 28, 2023 |
| What does LoRA do to a model by default? | 0 | 531 | November 21, 2023 |
| The same hyperparameters with DeepSpeed perform worse than without DeepSpeed | 2 | 440 | November 13, 2023 |
| DeepSpeed stage 3 partitioning | 0 | 587 | October 31, 2023 |
| Unable to train model (loss is 0.000000) | 2 | 1069 | October 17, 2023 |
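On the first topic above: as far as I can tell, recent Transformers versions never call `torch.nn.parallel.DistributedDataParallel` around a DeepSpeed model themselves. The Trainer hands the model to Accelerate, which calls `deepspeed.initialize`, and the returned `DeepSpeedEngine` is what ends up assigned to `model_wrapped`; the engine performs the data-parallel gradient reduction internally, which would explain why the only explicit DDP wrap you can find in the Trainer source is on the non-DeepSpeed path. Below is a minimal sketch of the wrapping step itself, outside the Trainer; the toy model and the `ds_config` dict are illustrative assumptions, not the Trainer's actual configuration, so treat this as a sketch rather than the exact call the Trainer makes.

```python
# Run under the DeepSpeed launcher, e.g.:  deepspeed sketch.py
# Minimal sketch of the DeepSpeed wrap; the toy model and `ds_config`
# below are illustrative assumptions, not the Trainer's real setup.
import torch
import deepspeed

model = torch.nn.Linear(16, 2)  # stand-in for a transformers model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "zero_optimization": {"stage": 2},
}

# deepspeed.initialize returns a DeepSpeedEngine (an nn.Module subclass)
# that performs its own gradient all-reduce across ranks, so no extra
# torch.nn.parallel.DistributedDataParallel wrap goes on top of it.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    optimizer=optimizer,
    config=ds_config,
)

print(type(engine))  # <class 'deepspeed.runtime.engine.DeepSpeedEngine'>
```

When launched with the `deepspeed` CLI, the engine sets up the distributed process group on its own. So the DDP(DeepSpeed(model)) description in the Trainer docstring is probably best read as "DeepSpeed handles the DDP part internally" rather than as a second, explicit wrap you can locate in the code.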