Fine-tuning an NLLB model for a new language

Hi, I’m new to Hugging Face, transformers, and NLP in general. I found this article on how to fine-tune an NLLB model for a new language, which I followed, and I actually got some decent results.
However, the post used transformers version 4.33, where adding a new language token to the NLLB tokenizer was a bit hacky. Since then, this PR has been implemented, which (I think) allows adding a new language to the tokenizer simply by doing this:

from transformers import NllbTokenizer
from transformers.models.nllb.tokenization_nllb import FAIRSEQ_LANGUAGE_CODES

new_language_code = 'frr_Latn'  # the new language tag, Northern Frisian in my case
tokenizer = NllbTokenizer.from_pretrained('facebook/nllb-200-distilled-600M',
        additional_special_tokens=FAIRSEQ_LANGUAGE_CODES + [new_language_code])
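
As a quick sanity check that the new tag is actually picked up, I encode a sentence with it set as the source language ("Moin!" is just an arbitrary example sentence here):

tokenizer.src_lang = new_language_code
print(tokenizer("Moin!")['input_ids'])
# with the default (non-legacy) behaviour the first id should be the one assigned to the new language tag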

This has worked fine so far, and I have successfully fine-tuned the model again with transformers 4.38 on a parallel dataset of Northern Frisian and German sentences. Everything worked just as with the previous version, except that translation into Northern Frisian now only works from German. When I try to translate, e.g., an English sentence into Northern Frisian, it just gets translated into German instead. In the old version, translating English to Frisian worked perfectly fine.
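
For reference, this is roughly how I run the translation (the checkpoint path is a placeholder for my fine-tuned model, and new_language_code is the tag added above):

from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained('path/to/my-finetuned-nllb')  # placeholder path

# encode the English source sentence with its language tag
tokenizer.src_lang = 'eng_Latn'
inputs = tokenizer("Good morning, how are you?", return_tensors='pt')

# force the decoder to start with the new language tag
output = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids(new_language_code),
    max_length=64,
)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
# I'd expect Northern Frisian here, but with 4.38 the output comes back in German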

I also noticed that the <mask> token isn’t the last token in the tokenizer’s vocabulary, even though the code in the NllbTokenizer really looks like it should be. In the old version, part of adding the new language tag was also moving the mask token back into the last spot.
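
This is how I checked (just a quick look at the ids in the tokenizer built above):

print(len(tokenizer))                                      # total vocabulary size
print(tokenizer.convert_tokens_to_ids('<mask>'))           # I expected len(tokenizer) - 1 here, but it isn't
print(tokenizer.convert_tokens_to_ids(new_language_code))  # id of the newly added language tag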

So my question is: am I missing something? Do I need to do more to the tokenizer (or the model) to correctly add the new language tag? Or is there something wrong with the NllbTokenizer?