Is there any way to retain the base model's general capabilities after fine-tuning? We're fine-tuning with the PEFT (LoRA adapters) method on a dataset of around 500 chats. However, after fine-tuning, when I tested the model with new instructions alongside the original ones, it ignored the new instructions and stuck to the ones it had learned during training. I want the model to follow new instructions that I give at inference time.
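For context, our training setup looks roughly like this (a minimal sketch with placeholder model name and hyperparameters, not our exact values):

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Placeholder base model; substitute the checkpoint we actually use
model = AutoModelForCausalLM.from_pretrained("base-model-name")

# Keeping r and lora_alpha modest limits how far the adapter
# can pull the model away from its base behavior
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections only
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

My understanding is that with such a small dataset (~500 chats), the adapter can easily overfit to the training instructions, so I'm unsure whether the fix is fewer epochs, a lower learning rate, smaller `r`/`lora_alpha`, or something else entirely.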