Continuous training on Fine-tuned Model

:rocket: Feature request

How can I continue training on a Fine-tuned Model?
I have a fine-tuned model trained on OpenSLR data, and I want to continue training it as I gain more transcribed audio data over time. Can I treat the fine-tuned model as a checkpoint and resume training from it?

Motivation

I am aiming to build a model for the Nepali language. I have a way to collect data continuously over time, so I want a way to keep training the model as new data comes in.

In PyTorch, you can further train a model just by putting it in training mode (model.train()), and then train as usual. This will update the parameters of the model.
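
For example, here is a minimal sketch of resuming training from the fine-tuned weights in plain PyTorch. The model class, checkpoint path, learning rate, and data loader name are placeholders for your own setup:

import torch

# MyASRModel, "finetuned_checkpoint.pt" and new_data_loader are hypothetical
# placeholders; substitute your own model class, checkpoint path and DataLoader.
model = MyASRModel()
model.load_state_dict(torch.load("finetuned_checkpoint.pt"))  # start from the fine-tuned weights
model.train()  # training mode: enables dropout, batch-norm updates, etc.

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # small LR limits drift from the old weights

for epoch in range(3):
    for batch in new_data_loader:  # batches of the newly collected, transcribed audio
        optimizer.zero_grad()
        loss = model(**batch)  # assumes the forward pass returns the training loss
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "finetuned_checkpoint.pt")  # save so the next round can resume from here

Saving the updated weights at the end means you can repeat the same loop whenever a new batch of data arrives.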

Thank you for your reply.
I tried this as well, but my old dataset and the new dataset contain different texts.
This method makes the model lean heavily towards the newly provided text: audio that was transcribed correctly after the first round of training now produces transcripts biased towards the newer texts, so even previously correct transcriptions become wrong.

Perhaps you can try freezing some layers and only fine-tuning a specific layer.

In PyTorch, this can be done as follows:

for name, param in model.named_parameters():
    if name == ...:
        param.requires_grad == False
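
Concretely, here is a sketch that freezes every parameter except those in one final layer and gives the optimizer only the trainable parameters. The "final_layer" name prefix is an assumption; print model.named_parameters() to see the actual names in your model:

import torch

# Freeze everything except parameters whose names start with "final_layer"
# ("final_layer" is a hypothetical prefix; use the real names from your model).
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("final_layer")

# Give the optimizer only the parameters that are still trainable.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)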

Will try this. Thank you!

This should be param.requires_grad = False. But I don’t see how freezing would help OP here. Can you clarify that?

@noskid The best approach is to simply always fine-tune on the WHOLE dataset (old + new, preferably shuffled) so that the model is not biased towards any specific subset.
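
If you go that route, here is a small sketch of building the combined, shuffled loader; old_dataset and new_dataset are placeholders for your own Dataset objects:

from torch.utils.data import ConcatDataset, DataLoader

# Combine old and new data and shuffle so every batch mixes both.
combined = ConcatDataset([old_dataset, new_dataset])
loader = DataLoader(combined, batch_size=8, shuffle=True)
# ...then run the usual fine-tuning loop over loader.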

I also thought about this as an alternative.
But since my datasets are very large, the training time would accumulate to a very large amount, so I wanted to know whether any other methods were available.