Continual training of a model like Whisper


I fine-tuned the medium Whisper model on my own dataset (a set of transcribed audio files for a specific use case). The results are good, but they could probably be better given the small size of my dataset. I will be able to obtain new transcribed audio recordings in the future. So my questions are: is it possible to do continual learning with Whisper? And is there a risk of catastrophic forgetting, and how can I avoid it?
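To make the question concrete: one mitigation I have read about is rehearsal (replay), i.e. mixing a random sample of the earlier training data into each new fine-tuning round instead of training on the new recordings alone. A minimal, framework-free sketch of just that mixing step (the 10% replay ratio and the example names are my own assumptions, not anything Whisper-specific):

```python
import random

def build_round_dataset(old_examples, new_examples, replay_ratio=0.1, seed=0):
    """Combine new data with a replay sample of old data.

    replay_ratio sets how many old examples are mixed in,
    relative to the size of the new set, so the model keeps
    seeing a slice of what it was previously fine-tuned on.
    """
    rng = random.Random(seed)
    n_replay = min(len(old_examples), max(1, int(replay_ratio * len(new_examples))))
    replay = rng.sample(old_examples, n_replay)
    mixed = new_examples + replay
    rng.shuffle(mixed)
    return mixed

# Hypothetical placeholders for (audio, transcript) pairs:
old = [f"old_{i}" for i in range(100)]  # data from the first fine-tuning run
new = [f"new_{i}" for i in range(50)]   # freshly transcribed recordings
round_data = build_round_dataset(old, new)
print(len(round_data))  # 50 new + 5 replayed old examples -> 55
```

In a real setup, `old` and `new` would be the actual feature/label pairs fed to the fine-tuning loop; the point is only that each round trains on a mixture rather than on new data in isolation. Would this kind of approach be sensible here, or is there a better-established recipe for Whisper?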

Thanks in advance.