Is there a way to upgrade the version of a base model that has been fine-tuned with company-specific data in an on-premise deployment?
Since base models are trained independently and new versions are released over time, what is the recommended way to upgrade a base model that has been fine-tuned?
Would we need to take the new base model and reapply all the training done as part of fine-tuning?
For example, if the model was fine-tuned with PEFT from the beginning, the computational cost of redoing the training on a new base would be low, since only the adapter weights need retraining. However, I don’t know whether there is a reliable way to turn the difference between an already fine-tuned full model and its original base model into a LoRA.
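One common approach to the "turn the difference into a LoRA" idea is a low-rank approximation of the weight delta: for each layer, compute `W_finetuned - W_base` and keep only the top-r singular components via SVD. This is a minimal NumPy sketch of that idea, not a production tool; the shapes, rank, and function name are illustrative, and note the extracted adapter is only exact relative to the *old* base, so applying it to a new base version is an approximation.

```python
# Hedged sketch: approximate a full fine-tune's weight delta as a
# LoRA-style low-rank update (delta ≈ B @ A) via truncated SVD.
import numpy as np

def extract_lora(w_finetuned, w_base, r=8):
    """Approximate delta = w_finetuned - w_base as B @ A with rank r."""
    delta = w_finetuned - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    b = u[:, :r] * s[:r]   # shape (out_features, r)
    a = vt[:r, :]          # shape (r, in_features)
    return b, a

rng = np.random.default_rng(0)
w_base = rng.standard_normal((64, 64))
# Simulate a fine-tune whose update happens to be exactly rank 4:
w_ft = w_base + rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))

b, a = extract_lora(w_ft, w_base, r=4)
rel_err = np.linalg.norm((w_ft - w_base) - b @ a) / np.linalg.norm(w_ft - w_base)
print(rel_err)  # near zero here, because the simulated delta is truly rank 4
```

In practice a real full fine-tune's delta is not exactly low-rank, so the relative error depends on how quickly the singular values decay; the rank `r` trades adapter size against fidelity.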