I followed the guide at https://huggingface.co/docs/peft/task_guides/token-classification-lora. It worked fine on my single-GPU PC, but when I ran the same code on my multi-GPU PC, I encountered the CUDA error below:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument index in method wrapper_CUDA__index_select)
I have already checked that all of the model's layers are assigned to cuda:0.
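For reference, this is roughly how I verified the device placement (a minimal sketch with a small stand-in model, not my actual PEFT-wrapped model):

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for the PEFT-wrapped one.
model = nn.Sequential(nn.Embedding(10, 4), nn.Linear(4, 2))

# Collect the set of devices that parameters and buffers live on.
devices = {p.device for p in model.parameters()}
devices |= {b.device for b in model.buffers()}

# A single entry means the model is not split across devices.
print(devices)
```

In my case this reported only one device, yet the error still occurs at runtime.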
I’m wondering whether the peft module does not support multiple GPUs together with the Trainer module, or whether there is a way to fix this issue. Thank you.