Fine-tune a pretrained Hugging Face translation model on a new language pair

Is it possible to fine-tune a pretrained Hugging Face multilingual translation model (e.g., NLLB, which is an encoder-decoder transformer) on a new language pair, where one language (say, English) was seen during pretraining and the other was not?
If yes, is the procedure the same as usual, i.e., building a parallel dataset and writing a training/fine-tuning script?
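For context, here is roughly what I imagine such a script would look like. This is a minimal sketch, not a tested recipe: the checkpoint name is a real NLLB checkpoint, but the language code `xxx_Latn`, the idea of registering it as a new special token, and all hyperparameters are my own assumptions.

```python
def build_pairs(src_lines, tgt_lines):
    """Arrange two aligned line lists into the column layout `datasets` expects."""
    assert len(src_lines) == len(tgt_lines), "parallel corpus must be aligned"
    return {"en": [s.strip() for s in src_lines],
            "new": [t.strip() for t in tgt_lines]}


def finetune(src_lines, tgt_lines, new_code="xxx_Latn",
             checkpoint="facebook/nllb-200-distilled-600M"):
    # Heavy imports are kept local so the helper above can be used without
    # transformers/datasets installed.
    from datasets import Dataset
    from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                              DataCollatorForSeq2Seq, Seq2SeqTrainer,
                              Seq2SeqTrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained(checkpoint, src_lang="eng_Latn")
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

    # Assumption: the unseen language gets its own language-code token, so we
    # register it as a special token and grow the embedding matrix to match.
    tokenizer.add_special_tokens({"additional_special_tokens": [new_code]})
    model.resize_token_embeddings(len(tokenizer))
    tokenizer.tgt_lang = new_code

    def tokenize(batch):
        # English as source, the new language as target; `text_target` makes
        # the tokenizer emit the `labels` field the trainer expects.
        return tokenizer(batch["en"], text_target=batch["new"],
                         truncation=True, max_length=128)

    data = Dataset.from_dict(build_pairs(src_lines, tgt_lines))
    train_ds = data.map(tokenize, batched=True, remove_columns=["en", "new"])

    args = Seq2SeqTrainingArguments(output_dir="nllb-eng-xxx",
                                    per_device_train_batch_size=8,
                                    learning_rate=1e-5,
                                    num_train_epochs=3)
    trainer = Seq2SeqTrainer(
        model=model, args=args, train_dataset=train_ds,
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
        tokenizer=tokenizer)
    trainer.train()
    return trainer
```

`finetune()` would then be called once with the lines of the real parallel corpus. In particular, I am unsure whether adding the new language code as a special token (and resizing the embeddings) is the right way to handle the unseen language, or whether reusing the code of a related language already in NLLB would work better.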