How to fine-tune Whisper with a language that is not supported by WhisperTokenizer

According to Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers, I can fine-tune the Whisper model with languages supported by WhisperTokenizer.

However, if I need it to support a new language (one not covered by the tokenizer), how can I do that? Could you please point me to a document or example that I could follow?
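
For context, this is roughly how the tokenizer is set up for a supported language in that blog post (Hindi is its example):

```python
from transformers import WhisperTokenizer

# For a language Whisper already supports, `language` selects the
# corresponding language token, e.g. <|hi|> for Hindi.
tokenizer = WhisperTokenizer.from_pretrained(
    "openai/whisper-small", language="Hindi", task="transcribe"
)
```

I don't see an equivalent path for a language that has no such token.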

I’m also searching for the same answer.

Yes, I’m interested in this too. Particularly for very low-resource languages like Wolof: do you need to train a BPE tokenizer on Wolof transcriptions and then pass the vocab.json and merges file to WhisperTokenizer?
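
To make that concrete, here is an untested sketch of what I have in mind. It assumes the `tokenizers` library, a hypothetical `wolof_transcripts.txt` corpus (one transcription per line), and placeholder values like the vocab size; I have no idea yet whether this is actually the right recipe.

```python
import os

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import ByteLevel
from tokenizers.trainers import BpeTrainer
from transformers import WhisperTokenizer

# Train a byte-level BPE tokenizer on the Wolof text.
# wolof_transcripts.txt is a hypothetical corpus, one line per utterance.
bpe = Tokenizer(BPE(unk_token="<|endoftext|>"))
bpe.pre_tokenizer = ByteLevel(add_prefix_space=False)
trainer = BpeTrainer(
    vocab_size=8000,  # a guess for a small corpus; would need tuning
    special_tokens=["<|endoftext|>", "<|startoftranscript|>",
                    "<|transcribe|>", "<|notimestamps|>"],
)
bpe.train(files=["wolof_transcripts.txt"], trainer=trainer)

# Save vocab.json + merges.txt and hand them to WhisperTokenizer.
os.makedirs("wolof_tokenizer", exist_ok=True)
bpe.model.save("wolof_tokenizer")
tokenizer = WhisperTokenizer(
    "wolof_tokenizer/vocab.json",
    "wolof_tokenizer/merges.txt",
)

# Presumably the model's token embeddings would then need resizing:
# model.resize_token_embeddings(len(tokenizer))
```

What I'm unsure about is whether replacing the vocabulary wholesale like this plays well with the pretrained embeddings, or whether it's better to add new tokens to the existing tokenizer instead.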