Translation architectures fine-tunable on a new language

In this tutorial, several Hugging Face model architectures for machine translation are listed (BART, T5, mT5, Fairseq, etc.). To use any of them, I have to initialize it with the `from_pretrained` method, which means that, without fine-tuning, I can only access the languages it was pretrained on. However, I would also like to use some of these architectures for languages outside the model's scope.
Which of them are fine-tunable on new languages?
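To make the constraint concrete, here is a minimal sketch of the `from_pretrained` workflow I mean, assuming the `transformers` library is installed (the Marian checkpoint name is just an illustrative example of a model tied to one language pair):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Illustrative checkpoint: a Marian model pretrained on English -> German only.
# Without fine-tuning, it cannot translate any other language pair.
model_name = "Helsinki-NLP/opus-mt-en-de"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Translation works for the covered pair, but feeding it an unseen
# language would produce garbage rather than a translation.
inputs = tokenizer("How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

So the question is really whether the underlying architecture (not a given checkpoint) can be trained or fine-tuned from scratch on a new language pair.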