Tokenizer effect on fine-tuning

Hi everyone, I’m working on a project to fine-tune several text2text generation models (7B parameters and smaller) on Arabic, and I was wondering about the effect of the original tokenizer on the fine-tuning process. What happens if I use a tokenizer different from the model’s original one, say the BLOOM tokenizer? Will that hurt the model’s performance? If anyone has seen a paper discussing this or something similar, please drop it here, it would be really beneficial, or simply comment your thoughts 🙏
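For context on why this usually breaks things: a pretrained model’s embedding matrix is indexed by the token ids of its original tokenizer, so a different tokenizer will map the same text to different ids (and possibly a different vocab size). Here is a minimal sketch of the mismatch using two hypothetical toy vocabularies standing in for two real tokenizers:

```python
# Toy vocabularies (hypothetical) standing in for two real tokenizers.
original_vocab = {"<pad>": 0, "مرحبا": 1, "بالعالم": 2}
bloom_like_vocab = {"<pad>": 0, "بالعالم": 1, "مرحبا": 2}

def encode(text, vocab):
    # Naive whitespace tokenization, just to illustrate id lookup.
    return [vocab[tok] for tok in text.split()]

text = "مرحبا بالعالم"
orig_ids = encode(text, original_vocab)    # ids the model's embeddings were trained on
new_ids = encode(text, bloom_like_vocab)   # same text, different ids

# The pretrained embedding row for id 1 means "مرحبا" to the model,
# but the new tokenizer assigns id 1 to "بالعالم": every embedding
# lookup is now scrambled, so the model must relearn the mapping.
print(orig_ids)  # [1, 2]
print(new_ids)   # [2, 1]
```

So unless you also resize and retrain (or carefully re-initialize) the embedding and output layers, swapping in the BLOOM tokenizer will scramble what the pretrained weights learned; fine-tuning would then have to undo that damage rather than build on the pretraining.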