How to send a list of questions and contexts to the QA model when using the Inference API?

According to the Detailed Parameters documentation, it should be possible to send a list of inputs to the QA models, since the return value is described as either a dict, or a list of dicts if you sent a list of inputs.

But when I query a QA model with lists of strings, the API responds with:

{'error': ['str type expected: question in parameters', 'str type expected: context in parameters']}
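
For reference, here is roughly what I'm sending (the model name and token are just placeholders, not part of my actual setup):

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2"
headers = {"Authorization": "Bearer hf_xxx"}  # placeholder token

# Lists of questions/contexts inside a single "inputs" dict:
payload = {
    "inputs": {
        "question": ["What is my name?", "Where do I live?"],
        "context": ["My name is Clara.", "I live in Berlin."],
    }
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
# -> {'error': ['str type expected: question in parameters',
#               'str type expected: context in parameters']}
```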

Can anyone provide a working example?
Thanks

Of course, I can stream the questions and contexts one by one and collect the answers, but for the sake of performance I'm looking for a way to run the QA model only once with a batch of data.
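
In other words, I'm hoping a list-of-dicts payload like the one below is what the Detailed Parameters docs mean by "a list of inputs". This is an untested sketch of the shape I'd expect, not something I've gotten to work:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2"
headers = {"Authorization": "Bearer hf_xxx"}  # placeholder token

# One question/context pair per entry, hoping for a list of answers back:
payload = {
    "inputs": [
        {"question": "What is my name?", "context": "My name is Clara."},
        {"question": "Where do I live?", "context": "I live in Berlin."},
    ]
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # expecting a list of dicts, per the docs
```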