Continuous training on a fine-tuned model

Thank you for your reply.
I tried this as well. My old dataset and the new dataset contain different texts.
This approach makes the model lean heavily towards the newly provided text. As a result, audio that was transcribed correctly after the first training now produces transcripts biased towards the newer texts, so even previously correct transcriptions become wrong; the model effectively forgets what it learned from the earlier data.