Getting OOM during full fine-tuning on Kaggle T4s. Help please. Beginner here

Is there no way around OOMs other than getting more compute? Are LoRA and QLoRA the only options?
I'm sure many people have run into this. Besides LoRA/QLoRA, DeepSpeed, and mixed-precision training, what other ways are there to avoid OOMs when attempting full fine-tuning?


The first thing that comes to mind is gradient accumulation…
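To make that concrete, here is a minimal PyTorch sketch of gradient accumulation, using a hypothetical tiny `nn.Linear` model and random data in place of your actual fine-tuning setup: you run several small micro-batches, let gradients accumulate in `.grad`, and only call `optimizer.step()` every few batches. This simulates a larger effective batch size without the memory cost of holding it all at once.

```python
import torch
from torch import nn

# Hypothetical tiny model standing in for the real one being fine-tuned.
torch.manual_seed(0)
model = nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

accum_steps = 4   # effective batch size = micro_batch * accum_steps
micro_batch = 2
updates = 0

optimizer.zero_grad()
for step in range(8):
    x = torch.randn(micro_batch, 16)
    y = torch.randint(0, 2, (micro_batch,))
    # Scale the loss so accumulated gradients average over the effective batch.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()  # gradients accumulate into param.grad across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()       # one weight update per accum_steps micro-batches
        optimizer.zero_grad()
        updates += 1

print(updates)
```

If you are using the `transformers` `Trainer`, the same idea is exposed as the `gradient_accumulation_steps` argument in `TrainingArguments`, so you can lower `per_device_train_batch_size` and raise accumulation to keep the effective batch size constant.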

