How well does a language model perform when fine-tuned on a dialect of the language it was trained on?

I am currently fine-tuning an Arabic language model to adapt it to the Moroccan dialect using LoRA (Low-Rank Adaptation) with a high rank. The choice of a high rank is based on intuition, and I'm uncertain about its effectiveness given the lack of high-quality data: my dataset consists mainly of YouTube comments and replies. Is this approach worthwhile, or should I consider an alternative strategy?
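For context, the rank trade-off I'm weighing can be sketched as follows. This is a toy NumPy illustration of the LoRA update (frozen weight W adapted as W + (alpha / r) * B @ A), with assumed matrix dimensions and an assumed alpha, not my actual training code:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 4096, 4096          # hypothetical attention-projection size
alpha = 16                        # assumed LoRA scaling hyperparameter

for r in (8, 64, 256):            # low vs. "high" rank
    A = rng.standard_normal((r, d_in)) * 0.01   # trained, small init
    B = np.zeros((d_out, r))                    # trained, zero init
    delta = (alpha / r) * B @ A                 # the low-rank update to W
    assert not delta.any()        # zero-init B makes the initial update a no-op

    trainable = A.size + B.size
    frac = trainable / (d_in * d_out)
    print(f"r={r:4d}: {trainable:,} trainable params "
          f"({frac:.1%} of the full matrix)")
```

Even r=256 trains only a fraction of a single weight matrix, which is why I suspect the rank matters less than the data quality here.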