AutoTrain - Training of Llama 3.1 70B

Last week I trained Llama 3.1 70B Instruct on 4x L40S (192 GB total GPU VRAM, at $8.30 per hour), and the run completed successfully. I used the default parameters AutoTrain suggested: batch size 2, unsloth: false, peft: true. A sketch of the config is below.
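
For reference, this is roughly what the run looks like in AutoTrain Advanced's YAML config format. This is a minimal sketch, not my exact config: the project name, dataset path, and column mapping are placeholders, and values other than the batch size, `peft`, and `unsloth` settings mentioned above are assumptions.

```yaml
# Sketch of an AutoTrain Advanced SFT config for the run described above.
# project_name, data.path, and column_mapping are hypothetical placeholders.
task: llm-sft
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
project_name: llama31-70b-sft      # placeholder
backend: local

data:
  path: my-dataset/                # placeholder
  train_split: train
  column_mapping:
    text_column: text              # placeholder

params:
  batch_size: 2                    # AutoTrain default used in the original run
  peft: true                       # LoRA adapters instead of full fine-tuning
  unsloth: false
  epochs: 1                        # assumption
  lr: 2e-4                         # assumption
  mixed_precision: bf16            # assumption
  quantization: int4               # assumption: 70B on 192 GB VRAM generally
                                   # needs 4-bit base weights plus LoRA

hub:
  username: ${HF_USERNAME}
  token: ${HF_TOKEN}
  push_to_hub: true
```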

However, when I now retry with exactly the same settings, I hit a CUDA memory allocation error. Every setting is identical, yet training fails with the CUDA error. I have tried reducing the batch size and switching from the Instruct model to the base model; neither made a difference.
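
For anyone trying to reproduce or debug this, these are the memory-related knobs in the `params` section that I have been adjusting. The values here are illustrative only, not a verified fix:

```yaml
params:
  batch_size: 1              # reduced from the default of 2
  gradient_accumulation: 8   # keeps the effective batch size while lowering peak memory
  block_size: 1024           # shorter sequences -> smaller activation memory
  quantization: int4         # 4-bit base weights combined with PEFT/LoRA
  mixed_precision: bf16
```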

Usage for last week: [usage screenshot]
