After finishing fine-tuning a model with LoRA, I need to merge the adapter back into the model. If I trained with a 4-bit bitsandbytes configuration, should I merge the adapter with the same 4-bit quantized model, or with the full-precision model?
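For reference, this is a minimal sketch of the merge step I mean, assuming the usual PEFT `merge_and_unload()` workflow; the model ID and adapter path are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"  # placeholder: the base model used for QLoRA training
adapter_path = "./lora-adapter"             # placeholder: the saved LoRA adapter

# Load the base model in full precision (fp16 here).
# The question is whether this should instead be loaded with the
# same 4-bit BitsAndBytesConfig used during training.
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the trained LoRA adapter and fold its weights into the base model.
model = PeftModel.from_pretrained(base_model, adapter_path)
merged = model.merge_and_unload()
merged.save_pretrained("./merged-model")
```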