Issues when fine tuning Llama-3.2-11B-Vision
Beginners
John6666
May 8, 2025, 4:55am
And perhaps: you can use return_full_text=False.
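For context, return_full_text is an option of the Transformers text-generation pipeline: with return_full_text=False, the returned generated_text contains only the model's continuation rather than the prompt echoed back followed by the continuation. A minimal sketch (the model id and prompt are placeholders, not the exact setup from this topic):

```python
# Minimal sketch: return_full_text=False makes the pipeline return only
# the newly generated text, without repeating the input prompt.
from transformers import pipeline

pipe = pipeline("text-generation", model="gpt2")  # example model id

prompt = "Fine-tuning a vision-language model requires"
out = pipe(prompt, max_new_tokens=30, return_full_text=False)
print(out[0]["generated_text"])  # continuation only, prompt not included
```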
Related topics

| Topic | Category | Replies | Views | Activity |
|---|---|---|---|---|
| Fine tune "meta-llama/Llama-2-7b-hf" Bug: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument target in method wrapper_CUDA_nll_loss_forward) | Beginners | 15 | 142 | December 6, 2024 |
| Repetitive Answers From Fine-Tuned LLM | Models | 9 | 844 | March 28, 2025 |
| Fine Tuning LLama 3.2 1B Quantized Memory Requirements | Models | 2 | 1196 | January 23, 2025 |
| Fine tune Meta-Llama-3.1-8B OOM error after the 1st training step | Models | 0 | 155 | September 6, 2024 |
| Fine-tuning don't work / bad results | Beginners | 5 | 1573 | January 15, 2025 |