How to fine-tune mT5 model for QA task?

Hello everyone!

I want to fine-tune the mT5 model (mT5-small) for a QA task.
I have downloaded data in my language, so I now have three parallel lists:
train_questions, train_contexts, and train_answers.

I don't know which tokenizer I should use or how to use it, nor how to train the model on my dataset.
I tried Google and GPT-4 with no luck.
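From the docs, my understanding is that tokenization would look roughly like this. This is only a sketch based on my assumptions: `AutoTokenizer` should resolve to mT5's SentencePiece tokenizer, and the `"question: … context: …"` input format is borrowed from T5-style QA examples, not something I know is required for mT5:

```python
from transformers import AutoTokenizer

# mT5 ships with a SentencePiece tokenizer; AutoTokenizer should pick the right one.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")

# stand-ins for one entry of train_questions / train_contexts / train_answers
question = "Where is the Eiffel Tower?"
context = "The Eiffel Tower is in Paris."
answer = "Paris"

# for seq2seq QA the usual pattern is: input = question + context, target = answer
inputs = tokenizer(
    "question: " + question + " context: " + context,
    max_length=512, truncation=True, padding="max_length", return_tensors="pt",
)
labels = tokenizer(
    answer, max_length=32, truncation=True, padding="max_length", return_tensors="pt",
).input_ids

print(inputs["input_ids"].shape)  # torch.Size([1, 512])
print(labels.shape)               # torch.Size([1, 32])
```

Is this right? I also read that padding token ids in `labels` should be replaced with -100 so they are ignored by the loss, but I'm not sure.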

My first attempt was to wrap the pretrained model in a new class:

```python
import torch.nn as nn
from transformers import MT5Model

class mT5(nn.Module):
    def __init__(self):
        super().__init__()
        self.mT5 = MT5Model.from_pretrained("google/mt5-small")
        self.hidden_size = self.mT5.config.hidden_size

    def forward(self, input_ids, attention_mask=None, decoder_input_ids=None):
        # MT5Model is the bare encoder-decoder, so it needs decoder inputs too
        return self.mT5(input_ids=input_ids,
                        attention_mask=attention_mask,
                        decoder_input_ids=decoder_input_ids)
```

But I was stuck here too.
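To show where I'm stuck, here is the kind of training step I imagine, using `MT5ForConditionalGeneration` (which I believe adds the LM head and loss on top of the bare `MT5Model`). So that this snippet runs quickly, it builds a tiny randomly initialized model from an `MT5Config` instead of downloading the full checkpoint; for real fine-tuning I assume you would use `MT5ForConditionalGeneration.from_pretrained("google/mt5-small")` instead:

```python
import torch
from transformers import AutoTokenizer, MT5Config, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")

# tiny random model purely for illustration; real fine-tuning would load
# MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
config = MT5Config(d_model=64, d_ff=128, d_kv=16,
                   num_layers=2, num_decoder_layers=2, num_heads=4)
model = MT5ForConditionalGeneration(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# one toy (question, context, answer) triple standing in for the dataset
inputs = tokenizer(
    "question: Where is the Eiffel Tower? context: The Eiffel Tower is in Paris.",
    return_tensors="pt",
)
labels = tokenizer("Paris", return_tensors="pt").input_ids
labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss

model.train()
outputs = model(**inputs, labels=labels)  # the model shifts labels into decoder inputs
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Does this look like the right overall shape, looping it over batches of my three lists? Or should I be using the `Trainer` API instead?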

I would very much appreciate a good explanation of this!

Thanks a lot!