Hi,
I would like to fine-tune a model (mBART-50 many-to-many) for English-to-Tamil translation, and I would like to know what the ideal dataset size for this fine-tuning would be.
I have put this question here rather than in the model-specific forum space because I would also like to know whether there is any standard that applies across models.
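For reference, this is roughly the setup I have in mind: a minimal sketch assuming the Hugging Face transformers library, the facebook/mbart-large-50-many-to-many-mmt checkpoint, and the en_XX / ta_IN language codes; the sentence pair is just an illustrative example, not my actual data.

```python
# Minimal sketch of the fine-tuning setup (not a full training script).
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_name = "facebook/mbart-large-50-many-to-many-mmt"

# English as the source language, Tamil as the target language.
tokenizer = MBart50TokenizerFast.from_pretrained(
    model_name, src_lang="en_XX", tgt_lang="ta_IN"
)
model = MBartForConditionalGeneration.from_pretrained(model_name)

# One example sentence pair; the real input would be a parallel en-ta corpus.
batch = tokenizer(
    "How are you?",
    text_target="நீங்கள் எப்படி இருக்கிறீர்கள்?",
    return_tensors="pt",
)

# Standard seq2seq cross-entropy loss used during fine-tuning.
loss = model(**batch).loss
```

The question is essentially how many such sentence pairs are typically needed for the fine-tuned model to translate well.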