Help with fine-tuning a DistilBERT uncased Q/A model

Hi, I'm trying to train a custom Q/A model by fine-tuning the pretrained distilbert-base-uncased model. When I train with a single question per context, the model performs very well and reaches high accuracy. But when I train with multiple questions on the same context, the model doesn't learn (the loss stops decreasing after a certain number of epochs). I tried increasing and decreasing the learning rate, but that didn't help.
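For reference, here is roughly how I build the training examples when there are multiple questions per context (a simplified sketch; the helper name and sample data are just placeholders, and the character-level spans are what I later convert to token positions via the tokenizer's offset mapping):

```python
# Sketch of my data preparation: one context with several questions is
# flattened into separate (question, context, answer span) training rows.
# build_examples and the sample data below are placeholders for illustration.

def build_examples(context, qa_pairs):
    """Turn one context with several questions into flat QA training rows.

    Each row stores the character-level start/end of its answer inside the
    context; the tokenizer's offset mapping converts these to token
    positions during preprocessing.
    """
    examples = []
    for question, answer in qa_pairs:
        start = context.find(answer)
        if start == -1:  # skip answers that don't appear verbatim
            continue
        examples.append({
            "question": question,
            "context": context,
            "answer_text": answer,
            "answer_start": start,
            "answer_end": start + len(answer),
        })
    return examples

context = "DistilBERT was released by Hugging Face in 2019 as a smaller BERT."
qa_pairs = [
    ("Who released DistilBERT?", "Hugging Face"),
    ("When was DistilBERT released?", "2019"),
]

rows = build_examples(context, qa_pairs)
print(len(rows))  # the same context appears once per question
```

Each (question, context) pair is fed to the model as its own example, so the context text is duplicated once per question.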

Kindly advise if I'm missing something, or suggest other approaches I could try.