After finishing fine-tuning a model with LoRA, I need to merge the adapter with the base model. If I trained with a 4-bit bitsandbytes configuration, should I merge the adapter into the same 4-bit quantized model or into the full-precision model?
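
For context, here is a minimal sketch of the merge step I mean, using PEFT's `merge_and_unload` (the model id, adapter path, and output directory below are placeholders). This variant loads the base model in full precision rather than with the 4-bit config, which is exactly the part I'm unsure about:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholders -- substitute your own base model id and adapter directory.
BASE_MODEL_ID = "meta-llama/Llama-2-7b-hf"
ADAPTER_DIR = "./qlora-adapter"
OUTPUT_DIR = "./merged-model"

# Load the base model in full precision (here bf16), *without* the 4-bit
# bitsandbytes config that was used during training.
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the trained LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base_model, ADAPTER_DIR)

# Fold the adapter deltas into the base weights and drop the PEFT wrappers.
merged_model = model.merge_and_unload()

# Save the standalone merged model plus the tokenizer alongside it.
merged_model.save_pretrained(OUTPUT_DIR)
AutoTokenizer.from_pretrained(BASE_MODEL_ID).save_pretrained(OUTPUT_DIR)
```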