| Topic | Replies | Views | Activity |
| --- | ---: | ---: | --- |
| About the DeepSpeed category | 1 | 789 | October 30, 2021 |
| Corrupted deepspeed checkpoint | 1 | 17 | March 13, 2025 |
| SFTTrainer Doubling Speed on a Single GPU with DeepSpeed: Proposal for an Update to the Official Documentation and Verification Report | 1 | 21 | March 7, 2025 |
| Accelerator.backward freeze | 1 | 21 | February 24, 2025 |
| Deepspeed ZeRO-3 flattens convolution that causes runtime error | 0 | 50 | February 17, 2025 |
| Is there a way to terminate llm.generate and release the GPU memory for next prompt? | 1 | 59 | February 4, 2025 |
| Timeout Issue with DeepSpeed on Multiple GPUs | 1 | 270 | January 3, 2025 |
| CUDA OOM on first backward pass after evaluation | 0 | 194 | November 20, 2024 |
| Different metrics score between when training and when merge lora adapter testing | 1 | 77 | October 25, 2024 |
| Trainer leaked memory? | 1 | 738 | October 15, 2024 |
| DeepSpeed MII pipeline issue | 1 | 30 | September 30, 2024 |
| Deepspeed mii library issues | 1 | 54 | September 29, 2024 |
| Calculate tokens per second while fine-tuning llm? | 0 | 98 | September 17, 2024 |
| Fitting huge models on multiple nodes | 0 | 106 | September 6, 2024 |
| RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select) | 5 | 3336 | August 26, 2024 |
| AutoTrain Error DeepSpeed Zero-3 | 1 | 223 | August 21, 2024 |
| DeepSpeed Zero 3 with LoRA - Merging adapters | 1 | 481 | August 16, 2024 |
| LoRA training with accelerate / deepspeed | 2 | 1935 | August 8, 2024 |
| DeepSpeed error: a leaf Variable that requires grad is being used in an in-place operation | 1 | 63 | July 26, 2024 |
| Running model.generate() in deep speed training | 2 | 496 | July 25, 2024 |
| RuntimeError: Error building extension 'cpu_adam' | 4 | 5051 | July 23, 2024 |
| Saving checkpoints when using DeepSpeed is taking abnormally long | 0 | 135 | July 22, 2024 |
| GPU memory usage of optimizer's states when using LoRA | 4 | 590 | July 5, 2024 |
| Saving weights while finetuning is on | 0 | 91 | June 13, 2024 |
| Deepspeed ZeRO2, PEFT, bitsnbytes training | 0 | 116 | June 4, 2024 |
| Codellama will not stop generating at EOS | 1 | 557 | June 2, 2024 |
| CUDA OOM error when `ignore_mismatched_sizes` is enabled | 0 | 183 | May 31, 2024 |
| Why activations memory is computed through an experiment rather formulating it for DeepSpeed autotuner | 0 | 81 | May 6, 2024 |
| I cannot find the code that transformers trainer model_wrapped by deepspeed , i can find the theory about model_wrapped was wraped by DDP(Deepspeed(transformer model )) ,but i only find the code transformers model wrapped by ddp, where is the deepspeed wr | 1 | 129 | May 1, 2024 |
| Model Parallism | 0 | 181 | April 21, 2024 |