Training wav2vec2 requires a lot of compute power

I am trying to fine-tune a wav2vec2 model for my national language. I have 15k data points, but during training my system can only handle about 1k of them; if I increase the number of data points, my system either crashes or I get a CUDA out-of-memory error. So I'm wondering whether there are other options.
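
For context, these are the kinds of memory-saving settings I have been reading about. This is just a sketch assuming the Hugging Face `TrainingArguments` API; the output path and all the numbers are placeholders, not my real configuration:

```python
from transformers import TrainingArguments

# Memory-saving knobs; all values below are placeholders, not my real config.
training_args = TrainingArguments(
    output_dir="./wav2vec2-finetuned",   # placeholder output path
    per_device_train_batch_size=4,       # keep the per-step batch small
    gradient_accumulation_steps=8,       # effective batch size of 4 * 8 = 32
    fp16=True,                           # mixed precision to reduce activation memory
    gradient_checkpointing=True,         # recompute activations: less memory, more compute
    num_train_epochs=3,
    save_steps=500,
    logging_steps=100,
)
```

Is this the right direction, or are there better options for a 15k-example dataset?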

Second, can I first train on 1k data points, save the model locally, and then load it and continue training on another 1k new data points to improve the model? Would that actually work?
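
To make the second question concrete, here is roughly the workflow I mean, sketched with the Hugging Face API. The checkpoint name, paths, and dataset variables are placeholders, not my actual setup:

```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor, Trainer, TrainingArguments

first_1k_dataset = ...   # placeholder: my first prepared chunk of ~1k examples
next_1k_dataset = ...    # placeholder: the next chunk of ~1k examples
save_dir = "./wav2vec2-after-chunk-1"   # placeholder local path

# First run: fine-tune on the first chunk, then save the weights locally.
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")   # placeholder checkpoint
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
args = TrainingArguments(output_dir=save_dir, per_device_train_batch_size=4, num_train_epochs=3)
trainer = Trainer(model=model, args=args, train_dataset=first_1k_dataset)
trainer.train()
model.save_pretrained(save_dir)
processor.save_pretrained(save_dir)

# Later run: load the saved weights and continue training on the new chunk.
model = Wav2Vec2ForCTC.from_pretrained(save_dir)
trainer = Trainer(model=model, args=args, train_dataset=next_1k_dataset)
trainer.train()
```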