LoRA Finetuning RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!

Could this be an unresolved compatibility issue between accelerate and bitsandbytes? The error suggests the model's weights ended up sharded across cuda:0 and cuda:1 while the LoRA forward pass expects everything on one device.