Merging a LoRA adapter after fine-tuning

After fine-tuning a model with LoRA, I need to merge the adapter into the base model. If I trained with a 4-bit bitsandbytes configuration, should I merge the adapter with the same 4-bit quantized model or with the full-precision model?
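For reference, this is the kind of setup I mean, using the PEFT library; the model name and adapter path are placeholders for my own checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit bitsandbytes configuration used during fine-tuning
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# Base model loaded in 4-bit (placeholder name)
base = AutoModelForCausalLM.from_pretrained(
    "base-model-name",
    quantization_config=bnb_config,
)

# Attach the trained LoRA adapter (placeholder path),
# then try to fold the adapter weights into the base model
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")
```

My doubt is whether `merge_and_unload()` should be called on the model loaded this way (4-bit), or whether I should first reload the base model in full precision and merge there.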