After finishing fine-tuning a model with LoRA, I need to merge the adapter into the base model. If I fine-tuned with a 4-bit bitsandbytes configuration, do I merge the adapter into the same 4-bit model, or into the full-precision model?
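For context on what the merge actually does: PEFT's `merge_and_unload()` folds the low-rank update `B @ A` back into the base weight matrices, which is why it is normally applied to the full-precision (or dequantized) base model rather than to 4-bit weights. Here is a minimal numeric sketch of that idea; the shapes, the `alpha`/`r` scaling, and the `fake_quantize` helper are illustrative assumptions, not the actual bitsandbytes algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 8, 8, 2          # weight dims and LoRA rank (illustrative)
alpha = 16                 # LoRA scaling hyperparameter (illustrative)

W = rng.standard_normal((d, k)).astype(np.float32)   # full-precision base weight
A = rng.standard_normal((r, k)).astype(np.float32)   # LoRA down-projection
B = np.zeros((d, r), dtype=np.float32)               # LoRA up-projection
B[0, 0] = 0.5                                        # pretend training updated it

# Merging folds the adapter delta into the base weight:
W_merged = W + (alpha / r) * (B @ A)

# Crude stand-in for 4-bit quantization: snap values to a coarse grid.
def fake_quantize(x, step=0.25):
    return np.round(x / step) * step

# Merging into already-quantized weights bakes the quantization error
# of the base model into the merged result:
W_merged_from_quant = fake_quantize(W) + (alpha / r) * (B @ A)
err_quant = np.abs(W_merged - W_merged_from_quant).max()
print(err_quant)   # nonzero: the two merge orders disagree
```

In practice this is why the common recipe is: reload the base model in fp16/bf16, apply the adapter with `PeftModel.from_pretrained`, then call `merge_and_unload()`, and only quantize afterwards if needed.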