Do you have an answer or clarification for this? I have a similar confusion: I am fine-tuning Llama 3.1 with QLoRA, and I am unable to load the model so that its tensors have dtype=torch.bfloat16.
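For context, here is a minimal sketch of the kind of QLoRA loading code I mean (the checkpoint name and config values are illustrative, not my exact setup). Note that with 4-bit quantization the quantized weights are stored in a packed format, so inspecting their tensor.dtype will not show torch.bfloat16 even when bfloat16 is used for computation:

```python
# Minimal sketch of QLoRA-style loading with a bfloat16 compute dtype.
# Checkpoint name and config values are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    # Matrix multiplications run in bfloat16, but the 4-bit weights
    # themselves remain quantized (packed storage), so their reported
    # tensor.dtype will not be torch.bfloat16.
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",   # illustrative checkpoint name
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,  # dtype for the non-quantized modules (e.g. norms)
    device_map="auto",
)

# Inspect per-parameter dtypes: quantized layers show packed storage,
# while non-quantized modules should report torch.bfloat16.
for name, param in list(model.named_parameters())[:5]:
    print(name, param.dtype)
```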