What happens if the fine-tuning is done twice?

Apologies in advance if the question is silly; I’m trying to learn about Hugging Face and NLP in general.

My doubt is the following: suppose I want to do text generation and I work with the gpt2 pre-trained model. First I fine-tune it on an astronomy dataset and save the result as gpt2-astronomy. Then I fine-tune gpt2-astronomy on a physics dataset and save it as final-model.
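To make sure I’m describing the mechanics correctly, here is a toy sketch of what I mean by “fine-tuning twice” (pure Python, no transformers, with a hypothetical one-parameter model and made-up one-point datasets): the second training stage starts from the weights produced by the first stage rather than from a fresh initialization.

```python
# Toy illustration: sequential "fine-tuning" of a one-parameter model
# y = w * x, trained by gradient descent on mean squared error.
# Stage 2 starts from stage 1's weight -- it does not reset it.

def train(w, data, lr=0.1, steps=100):
    """Run gradient descent on MSE, starting from the given weight w."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

w0 = 0.0                   # "pre-trained" starting weight
astronomy = [(1.0, 2.0)]   # stage-1 data pulls w toward 2.0
physics = [(1.0, 3.0)]     # stage-2 data pulls w toward 3.0

w1 = train(w0, astronomy)  # first fine-tuning ("gpt2-astronomy")
w2 = train(w1, physics)    # second fine-tuning, starting from w1

print(w1, w2)  # w1 ends near 2.0, w2 near 3.0
```

In this toy case the second stage drags the weight all the way to the physics target, which is exactly the kind of “losing the astronomy ability” I am worried about for the real model.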

My question is: will this final-model be good for text generation about both astronomy and physics? Or does fine-tuning the second time “eliminate” the model’s ability with astronomy subjects?

I ask this because, as I understand it, fine-tuning basically works with the last layer of the network, so I don’t know whether fine-tuning a second time will reset that last layer, which learned about astronomy the first time.