Topic | Replies | Views | Activity
--- | --- | --- | ---
About the DeepSpeed category | 1 | 777 | October 30, 2021
DeepSpeed error: a leaf Variable that requires grad is being used in an in-place operation | 1 | 1 | July 26, 2024
Running model.generate() in DeepSpeed training | 2 | 391 | July 25, 2024
RuntimeError: Error building extension 'cpu_adam' | 4 | 4369 | July 23, 2024
Saving checkpoints when using DeepSpeed is taking abnormally long | 0 | 11 | July 22, 2024
DeepSpeed ZeRO-3 with LoRA - Merging adapters | 0 | 100 | July 5, 2024
GPU memory usage of optimizer's states when using LoRA | 4 | 107 | July 5, 2024
Saving weights while fine-tuning is on | 0 | 80 | June 13, 2024
DeepSpeed ZeRO-2, PEFT, bitsandbytes training | 0 | 84 | June 4, 2024
Codellama will not stop generating at EOS | 1 | 487 | June 2, 2024
CUDA OOM error when `ignore_mismatched_sizes` is enabled | 0 | 113 | May 31, 2024
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select) | 4 | 2699 | May 9, 2024
Why activations memory is computed through an experiment rather than formulated analytically for the DeepSpeed autotuner | 0 | 78 | May 6, 2024
Cannot find the code where the Transformers Trainer's model_wrapped is wrapped by DeepSpeed; the docs describe model_wrapped as DDP(Deepspeed(transformer model)), but I only find the DDP wrapping | 1 | 103 | May 1, 2024
Model Parallelism | 0 | 160 | April 21, 2024
What should I do if I want to use a model from DeepSpeed | 5 | 1521 | April 6, 2024
[Maybe Bug] When using EarlyStopping callbacks with Seq2SeqTrainer, training didn't stop | 3 | 1277 | April 4, 2024
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error | 0 | 727 | March 30, 2024
DeepSpeed ZeRO-2 CPU offloading: process killed with error -9 | 1 | 1148 | March 17, 2024
Conceptual question: Early loading of the model defeats the purpose of DeepSpeed! | 0 | 149 | March 14, 2024
Struggling to fine-tune flan-t5-xxl using DeepSpeed | 3 | 746 | March 12, 2024
DeepSpeed inference stage 3 + quantization | 0 | 631 | March 8, 2024
Saving checkpoint is too slow with DeepSpeed | 5 | 1712 | March 6, 2024
DeepSpeed trainer and custom loss weights | 1 | 476 | February 28, 2024
How can I use the Inference API with my model? | 0 | 143 | February 24, 2024
Fine-tune LLM with DeepSpeed | 2 | 4458 | February 22, 2024
DeepSpeed integration for HuggingFace Seq2SeqTrainingArguments | 0 | 826 | February 22, 2024
It says that `bfloat16.enabled` needs to be specified without `auto` when training T5; is anyone aware of how to solve that? | 0 | 207 | February 20, 2024
Exact difference between Transformers' and Accelerate's DeepSpeed integrations? | 5 | 607 | February 13, 2024
How to use GPU when using transformers.AutoModel | 0 | 1065 | February 3, 2024