Can I train a model on a different downstream task?

I am a complete beginner with Transformers and HuggingFace; I searched the forum and did not find anything relevant. If I understood correctly, transfer learning should allow us to apply a given pretrained model to new downstream tasks.

To make it more specific, say I load bert-base-uncased. I see that the model can be trained on e.g. text classification, question answering, etc. If I wanted to run an unlisted task, say NER, can I somehow train the same model, or should I turn to a different model that already supports this task?

If the texts are very field-specific, like science or medicine, can BERT still be used, or should I turn to more specialized texts/models?

These questions might be a bit basic, but please share any information that could guide me.

PS: I have the impression that this post is asking a similar question.

If I get it right, it is also a matter of so-called fine-tuning, at least when a model is trained for token classification. NER is a form of token classification, so the model has to be adapted accordingly.
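In other words, something like the snippet below should attach a fresh token classification head to the same pretrained encoder (a minimal sketch; the num_labels value is just a placeholder for however many NER tags you have):

```python
# Minimal sketch: reuse the pretrained bert-base-uncased encoder and put a
# new, randomly initialized token classification head on top of it.
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=9,  # placeholder: one label per tag in your NER scheme
)
```

When loading, transformers even warns that some weights are newly initialized and should be trained on a downstream task, which is exactly this adaptation step.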

I found relevant resources here for token classification and here for fine-tuning.
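Condensing those resources, the whole loop seems to go roughly like this (an untested sketch; I picked the CoNLL-2003 dataset as an example, so the dataset name and hyperparameters are assumptions to adapt to your own data):

```python
# Rough fine-tuning sketch for NER, following the token classification docs.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    TrainingArguments,
    Trainer,
)

dataset = load_dataset("conll2003")  # example NER dataset with word-level tags
label_list = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(label_list)
)

def tokenize_and_align(examples):
    # Split words into subword tokens and align the word-level tags with them;
    # special tokens and continuation subwords get the ignore index -100.
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        labels, previous = [], None
        for word_id in word_ids:
            if word_id is None or word_id == previous:
                labels.append(-100)
            else:
                labels.append(tags[word_id])
            previous = word_id
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_dataset = dataset.map(tokenize_and_align, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-ner-sketch", num_train_epochs=3),
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer=tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```

If that is right, then the answer to my own question seems to be yes: the same bert-base-uncased checkpoint can be fine-tuned for NER, no separate model needed.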