Hugging Face Forums
Can I load a model fine-tuned with LoRA 4-bit quantization as an 8-bit model?
🤗Hub
supercoolaj
November 27, 2023, 7:02am
Or do I have to load it in 4-bit each time? Thanks.