After finishing fine-tuning a model with LoRA, I need to merge the adapter with the model. If I used a 4-bit bitsandbytes configuration during training, do I have to merge the adapter with the same 4-bit model or with the full-precision model?
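For reference, a minimal sketch of the approach commonly used after QLoRA-style fine-tuning, assuming a placeholder base model name and hypothetical paths `./lora-adapter` and `./merged-model`: the base model is reloaded in half precision rather than with the 4-bit BitsAndBytesConfig, the adapter is attached with PEFT, and `merge_and_unload()` folds the LoRA weights into the base weights.

```python
# Sketch only: model name and paths below are placeholders, not from the original post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "meta-llama/Llama-2-7b-hf"   # placeholder base model
adapter_path = "./lora-adapter"                # placeholder adapter directory

# Reload the base model in fp16 (no 4-bit quantization config) so the LoRA
# weights can actually be folded into the base weights.
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the LoRA adapter that was trained with the 4-bit (QLoRA) setup.
model = PeftModel.from_pretrained(base_model, adapter_path)

# Merge the adapter into the base weights and drop the PEFT wrappers.
merged_model = model.merge_and_unload()

# Save the merged model (and tokenizer) as a standalone checkpoint.
merged_model.save_pretrained("./merged-model")
AutoTokenizer.from_pretrained(base_model_name).save_pretrained("./merged-model")
```

The merged checkpoint can then be re-quantized (4-bit, 8-bit, GGUF, etc.) separately if needed.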