Hugging Face Forums
Can I load a model fine-tuned with LoRA 4-bit quantization as an 8-bit model?
🤗Hub
supercoolaj
November 27, 2023, 7:02am
1
Or do I have to load it in 4-bit each time? Thanks.
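For context, here is a minimal sketch of what such a load could look like. It assumes the fine-tune saved only the LoRA adapter (not a merged checkpoint), that the adapter directory and base model id below are placeholders, and that the adapter weights themselves are stored in full/half precision independently of the base model's quantization:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# The base model's quantization is chosen at load time, so the base can be
# re-loaded in 8-bit even if training used a 4-bit quantized base.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # placeholder base model id
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter (trained against the 4-bit base) on top of
# the 8-bit base. "my-lora-adapter" is a placeholder path.
model = PeftModel.from_pretrained(base, "my-lora-adapter")
```

One caveat: since the adapter was optimized against the 4-bit quantized weights, outputs under an 8-bit base may differ slightly from what was seen during training.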