How to fine-tune with Unsloth using multiple GPUs, as I'm getting an out-of-memory error after setting os.environ["CUDA_VISIBLE_DEVICES"]


I was trying to fine-tune Llama 70B on 4 GPUs using Unsloth.

I was able to bypass CUDA's multi-GPU detection by running this command, so only 1 GPU is detected: os.environ["CUDA_VISIBLE_DEVICES"] = "0"
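For reference, roughly how I set it (the variable has to be set before torch/unsloth initialize CUDA, otherwise the mask is ignored):

```python
import os

# Hide all GPUs except GPU 0; must run before torch / unsloth touch CUDA
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.device_count())  # prints 1 once the mask is applied
```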

However, when I tried to run the fine-tuning, "trainer_starts = trainer.train()" threw a CUDA out-of-memory error. It was taking only GPU 0 into the memory estimation, which is not enough for the fine-tuning.
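For context, the setup is roughly the standard Unsloth QLoRA recipe (simplified sketch; the model id, dataset, and trainer arguments below are placeholders rather than my exact config):

```python
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Placeholder dataset so the sketch is self-contained
dataset = Dataset.from_dict({"text": ["Example instruction and response."]})

# 4-bit (QLoRA) loading keeps the 70B weights as small as possible on one GPU;
# the model id here is a placeholder, not necessarily the checkpoint I used
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-70b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        max_steps=10,
        output_dir="outputs",
    ),
)

trainer_starts = trainer.train()  # this call raises the CUDA out-of-memory error
```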

How can we bypass this? Or is there another trick for this?


I’ve never dealt with multi-GPU environments…


Thanks a lot for the references @John6666
