Parallelizing inputs to an ONNX model

I am trying to run inference with an ONNX model. I want to pass multiple inputs to the model in a single call and get back a list of outputs, one per input. Is this possible?

Example:

import numpy as np

# tokenizer (Hugging Face) and session (onnxruntime.InferenceSession) are created earlier
inputs = tokenizer(["Using RoBERTa with ONNX Runtime!", "Hello", "Whats up"], return_tensors="np")
modified_inps = {"input_ids": [np.array(inp, dtype=np.int64) for inp in inputs["input_ids"]],
                 "attention_mask": [np.array(mask, dtype=np.int64) for mask in inputs["attention_mask"]]}
outputs = session.run(output_names=["logits"], input_feed=modified_inps)

Expected Output:

[[<logit_0>, <logit_1>],
 [<logit_0>, <logit_1>],
 [<logit_0>, <logit_1>]]
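
For reference, here is a minimal sketch of the batched variant I would expect to work, assuming a RoBERTa sequence-classification model exported to ONNX with input_ids / attention_mask inputs and a logits output (the model.onnx file name and the roberta-base checkpoint are assumptions). The key idea is padding the batch so each input is one rectangular (batch, seq_len) array rather than a list of arrays:

import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# Assumed checkpoint and file name; adjust to your own export.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
session = ort.InferenceSession("model.onnx")

texts = ["Using RoBERTa with ONNX Runtime!", "Hello", "Whats up"]

# padding=True pads all sequences to a common length, so the tokenizer
# returns single 2-D arrays instead of ragged per-example lists.
inputs = tokenizer(texts, return_tensors="np", padding=True)
feed = {"input_ids": inputs["input_ids"].astype(np.int64),
        "attention_mask": inputs["attention_mask"].astype(np.int64)}

# run() returns a list with one array per requested output name;
# logits should have shape (3, num_labels), one row per input text.
(logits,) = session.run(["logits"], feed)
print(logits)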