Topic | Replies | Views | Activity
About the DeepSpeed category | 1 | 789 | October 30, 2021
Corrupted deepspeed checkpoint | 1 | 40 | March 13, 2025
SFTTrainer Doubling Speed on a Single GPU with DeepSpeed: Proposal for an Update to the Official Documentation and Verification Report | 1 | 32 | March 7, 2025
Accelerator.backward freeze | 1 | 24 | February 24, 2025
Deepspeed ZeRO-3 flattens convolution that causes runtime error | 0 | 75 | February 17, 2025
Is there a way to terminate llm.generate and release the GPU memory for next prompt? | 1 | 83 | February 4, 2025
Timeout Issue with DeepSpeed on Multiple GPUs | 1 | 334 | January 3, 2025
CUDA OOM on first backward pass after evaluation | 0 | 207 | November 20, 2024
Different metrics score between when training and when merge lora adapter testing | 1 | 91 | October 25, 2024
Trainer leaked memory? | 1 | 746 | October 15, 2024
DeepSpeed MII pipeline issue | 1 | 31 | September 30, 2024
Deepspeed mii library issues | 1 | 58 | September 29, 2024
Calculate tokens per second while fine-tuning llm? | 0 | 108 | September 17, 2024
Fitting huge models on multiple nodes | 0 | 116 | September 6, 2024
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select) | 5 | 3373 | August 26, 2024
AutoTrain Error DeepSpeed Zero-3 | 1 | 234 | August 21, 2024
DeepSpeed Zero 3 with LoRA - Merging adapters | 1 | 525 | August 16, 2024
LoRA training with accelerate / deepspeed | 2 | 2044 | August 8, 2024
DeepSpeed error: a leaf Variable that requires grad is being used in an in-place operation | 1 | 71 | July 26, 2024
Running model.generate() in deep speed training | 2 | 508 | July 25, 2024
RuntimeError: Error building extension 'cpu_adam' | 4 | 5097 | July 23, 2024
Saving checkpoints when using DeepSpeed is taking abnormally long | 0 | 148 | July 22, 2024
GPU memory usage of optimizer's states when using LoRA | 4 | 619 | July 5, 2024
Saving weights while finetuning is on | 0 | 93 | June 13, 2024
Deepspeed ZeRO2, PEFT, bitsnbytes training | 0 | 117 | June 4, 2024
Codellama will not stop generating at EOS | 1 | 563 | June 2, 2024
CUDA OOM error when `ignore_mismatched_sizes` is enabled | 0 | 188 | May 31, 2024
Why activations memory is computed through an experiment rather formulating it for DeepSpeed autotuner | 0 | 81 | May 6, 2024
I cannot find the code that transformers trainer model_wrapped by deepspeed , i can find the theory about model_wrapped was wraped by DDP(Deepspeed(transformer model )) ,but i only find the code transformers model wrapped by ddp, where is the deepspeed wr | 1 | 130 | May 1, 2024
Model Parallism | 0 | 181 | April 21, 2024