How to use the question-answering pipeline in batch mode?

I have been playing around with the qa pipeline:

    from transformers import pipeline

    qa = pipeline('question-answering', model='deepset/xlm-roberta-large-squad2')

and I have a list of dicts in the QA input format:

    qa_inputs = [
        {'question': 'what is the meaning of life?', 'context': 'the meaning of life is to ask good questions'},
        {'question': 'what is the capital of france?', 'context': 'the capital of france is paris!'},
    ]

However, I can't see any difference in inference speed between passing the whole list of QA pairs in one call and passing them one by one.
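For reference, this is roughly how I'm timing the two calling styles (a minimal sketch; `time_call` and `compare_batch_vs_loop` are just my own helpers, and the model load is kept inside the function because it downloads a large checkpoint):

```python
from timeit import default_timer as timer

examples = [
    {'question': 'what is the meaning of life?',
     'context': 'the meaning of life is to ask good questions'},
    {'question': 'what is the capital of france?',
     'context': 'the capital of france is paris!'},
]

def time_call(fn):
    """Elapsed wall-clock seconds for a single call to fn()."""
    start = timer()
    fn()
    return timer() - start

def compare_batch_vs_loop():
    # The model load is slow and downloads a large checkpoint,
    # so it stays out of module-level code; call this to run it.
    from transformers import pipeline
    qa = pipeline('question-answering',
                  model='deepset/xlm-roberta-large-squad2')
    # One call with the whole list...
    print('whole list:', time_call(lambda: qa(examples)))
    # ...versus one call per example.
    print('one by one:', time_call(lambda: [qa(ex) for ex in examples]))
```

Calling `compare_batch_vs_loop()` prints the two elapsed times, and on my machine they come out essentially the same.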

Is this not running in batch mode?

Is it possible to run this in batch mode if I go back to using XLMRobertaForQuestionAnswering directly instead?
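To be concrete, by "batch mode" with the raw model I mean something like the following: tokenize all the (question, context) pairs in one call with padding and run a single forward pass (just a sketch, and I haven't verified it's actually faster; the model load is kept inside a function because it downloads a large checkpoint):

```python
questions = ['what is the meaning of life?', 'what is the capital of france?']
contexts = ['the meaning of life is to ask good questions',
            'the capital of france is paris!']

def batched_forward():
    # Imports and model load stay inside the function so nothing heavy
    # runs at module level; call this to do the batched forward pass.
    import torch
    from transformers import AutoTokenizer, XLMRobertaForQuestionAnswering

    name = 'deepset/xlm-roberta-large-squad2'
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = XLMRobertaForQuestionAnswering.from_pretrained(name)

    # Tokenize every pair in one call, padded to the longest
    # sequence in the batch, then do one forward pass over all of them.
    inputs = tokenizer(questions, contexts, padding=True,
                       truncation=True, return_tensors='pt')
    with torch.no_grad():
        outputs = model(**inputs)

    # Start/end logits for all examples at once: shape (batch, seq_len).
    return outputs.start_logits, outputs.end_logits
```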