Wav2vec2-large-xlsr-53 for a non-listed low-resource language

Hi,

Can we fine-tune the model for a low-resource language that is not among the 53 listed languages?
Does it have to be in the same language family as the 53 listed?

Regards
Becks

Hello,

Yes, you can fine-tune the model on a low-resource language outside the ones it was pretrained on. In the original XLSR-53 paper ([2006.13979] Unsupervised Cross-lingual Representation Learning for Speech Recognition), Tables 3 and 4 show fine-tuning on languages that were not part of pretraining.
In addition, during a school project we fine-tuned XLSR-53 on Czech and Ukrainian and got good results (feel free to check my GitHub: GitHub - omarsou/wav2vec_xlsr_cv_exp: Experiments on out of training languages (from common voice https://commonvoice.mozilla.org/) using Wav2Vec).
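
In case it helps, here is a minimal sketch of the usual setup for fine-tuning on a new language with Hugging Face Transformers (this is not the exact code from my repo; the vocabulary characters and file names below are placeholders you should derive from your own corpus):

```python
# Minimal sketch: prepare a tokenizer/processor for a new language and
# load XLSR-53 with a freshly initialized CTC head sized to its vocabulary.
import json
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2Processor,
    Wav2Vec2ForCTC,
)

# 1. Build a character vocabulary from your target-language transcripts.
#    The characters here are placeholders; extract the real set from your data.
vocab = {c: i for i, c in enumerate(sorted(set("abcdefghijklmnopqrstuvwxyz ")))}
vocab["|"] = vocab.pop(" ")   # wav2vec2 convention: "|" marks word boundaries
vocab["[UNK]"] = len(vocab)
vocab["[PAD]"] = len(vocab)
with open("vocab.json", "w") as f:
    json.dump(vocab, f)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1,
    sampling_rate=16000,
    padding_value=0.0,
    do_normalize=True,
    return_attention_mask=True,
)
processor = Wav2Vec2Processor(
    feature_extractor=feature_extractor, tokenizer=tokenizer
)

# 2. Load the multilingual pretrained checkpoint; the CTC head (lm_head)
#    is newly initialized because the base model has no ASR head yet.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_extractor()  # keep the convolutional feature encoder frozen
```

From there you can wrap your audio/transcript pairs in a Dataset and train with the Trainer and a CTC padding data collator, as in the standard wav2vec2 fine-tuning recipe.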

Best,
Omar