LoRA Finetuning RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
🤗Transformers
John6666 · June 16, 2025, 7:00am · #2
If so, it may be an unresolved compatibility issue between accelerate and bitsandbytes.
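A common workaround while that kind of issue is open is to pin the quantized model to a single GPU at load time, so accelerate does not shard it across devices. A minimal sketch, assuming a 4-bit bitsandbytes load; the model name is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model
    quantization_config=bnb_config,
    # Force every module onto the current GPU instead of letting
    # accelerate spread the model over cuda:0 and cuda:1.
    device_map={"": torch.cuda.current_device()},
)
```

Upgrading both packages first (`pip install -U accelerate bitsandbytes`) is also worth trying, since device-placement fixes land in these libraries fairly often.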