ONNX errors with pipeline_name='question-answering'

from transformers.convert_graph_to_onnx import convert

convert(framework='pt', pipeline_name='question-answering', model='roberta-base-squad2', output=my_outputpath, opset=11)

ONNX opset version set to: 11
Loading pipeline (model: roberta-base-squad2, tokenizer: roberta-base-squad2)
Using framework PyTorch: 1.10.0+cu111
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
Found output output_1 with shape: {0: 'batch', 1: 'sequence'}
Ensuring inputs are in correct order
token_type_ids is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py:90: UserWarning: 'enable_onnx_checker' is deprecated and ignored. It will be removed in the next PyTorch release. To proceed despite ONNX checker failures, catch torch.onnx.ONNXCheckerError.
  warnings.warn("'enable_onnx_checker' is deprecated and ignored. It will be removed in "
/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py:103: UserWarning: 'use_external_data_format' is deprecated and ignored. Will be removed in next PyTorch release. The code will work as it is False if models are not larger than 2GB, Otherwise set to False because of size limits imposed by Protocol Buffers.
  warnings.warn("'use_external_data_format' is deprecated and ignored. Will be removed in next "

Successfully converted.
However, when I tried to run inference:

sessions = InferenceSession('onnx/roberta-base-squad2.onnx', options)

sessions.run(['output_0', 'output_1'], [context, ques])

    186             output_names = [output.name for output in self._outputs_meta]
    187         try:
--> 188             return self._sess.run(output_names, input_feed, run_options)
    189         except C.EPFail as err:
    190             if self._enable_fallback:

TypeError: run(): incompatible function arguments. The following argument types are supported:
    1. (self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: List[str], arg1: Dict[str, object], arg2: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions) -> List[object]

Invoked with: <onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession object at 0x7f2e80bc4130>, ['output_0', 'output_1'], {'input_ids': tensor([[ 0, 2264, 16, 5, 645, 346, 116, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1]])}, None
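For what it's worth, the supported signature in that error message means run() wants the output names as a list of strings and input_feed as a plain Python dict mapping each exported input name to a NumPy array; what gets passed here looks like the tokenizer's BatchEncoding holding torch tensors, which, as far as I can tell, is what the pybind11 binding rejects. A minimal sketch of a call matching the expected shape, reusing the token ids from the traceback above (purely illustrative), would be:

import numpy as np

# input_feed must be a plain dict of NumPy int64 arrays, keyed by the
# input names the exporter reported (input_ids, attention_mask).
input_feed = {
    'input_ids': np.array([[0, 2264, 16, 5, 645, 346, 116, 2]], dtype=np.int64),
    'attention_mask': np.array([[1, 1, 1, 1, 1, 1, 1, 1]], dtype=np.int64),
}
start_logits, end_logits = sessions.run(['output_0', 'output_1'], input_feed)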

@NhatPham
Did you manage to solve this? I am facing the same problem, shown below:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/home/shamik/Repos/Codes/QA_ONNX.ipynb Cell 18' in <module>
----> 1 onnx_model.run(input_feed=tokenizer(question,text), output_names=None)

File ~/anaconda3/envs/onnx/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:192, in Session.run(self, output_names, input_feed, run_options)
    190     output_names = [output.name for output in self._outputs_meta]
    191 try:
--> 192     return self._sess.run(output_names, input_feed, run_options)
    193 except C.EPFail as err:
    194     if self._enable_fallback:

TypeError: run(): incompatible function arguments. The following argument types are supported:
    1. (self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: List[str], arg1: Dict[str, object], arg2: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions) -> List[object]

Invoked with: <onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession object at 0x7f04a8275c30>, ['output_0', 'output_1'], {'input_ids': [101, 2040, 2001, 3958, 27227, 1029, 102, 3958, 27227, 2001, 1037, 3835, 13997, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}, None

Unfortunately, I decided to move on from this one. :sweat_smile:

Had some luck and managed to solve it.

The input_feed argument of run() requires a dictionary of NumPy arrays. It was failing in my case because I was passing a BatchEncoding (transformers.tokenization_utils_base.BatchEncoding) object instead, by doing the following:
onnx_model.run(input_feed=tokenizer(question, text), output_names=None)

The solution is the following:

import numpy as np

# Tokenize with NumPy tensors and pass a plain dict, not a BatchEncoding.
inputs = tokenizer(question, text, add_special_tokens=True, return_tensors='np')
# outputs = onnx_model.run(input_feed=dict(tokenizer(question, text, add_special_tokens=True, return_tensors='np')), output_names=None)
outputs = onnx_model.run(input_feed=dict(inputs), output_names=None)
# start_logits = onnx_model.get_outputs()[0].name
# end_logits = onnx_model.get_outputs()[1].name
# outputs = onnx_model.run(input_feed=dict(inputs), output_names=[start_logits, end_logits])
start_scores = outputs[0]
end_scores = outputs[1]
input_ids = inputs["input_ids"].tolist()[0]
# Take the most likely start/end token positions and decode that span as the answer.
ans_start = np.argmax(start_scores)
ans_end = np.argmax(end_scores) + 1
answer_tokens = tokenizer.decode(input_ids[ans_start:ans_end])
answer_tokens
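For completeness, the snippet above assumes onnx_model is an onnxruntime InferenceSession over the exported file and tokenizer matches the checkpoint that was exported. A minimal setup along these lines would do it; the model id and the question/context strings here are only illustrative, not the exact ones used in this thread:

from onnxruntime import InferenceSession
from transformers import AutoTokenizer

# Assumed setup: the tokenizer must match the checkpoint that was exported to ONNX.
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
onnx_model = InferenceSession("onnx/roberta-base-squad2.onnx")

question = "What is the capital of France?"
text = "Paris is the capital of France."

The two pieces that matter are return_tensors='np', so the values are NumPy arrays rather than Python lists or torch tensors, and wrapping the result in dict(), since the binding only accepts a plain dict for input_feed.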

Haha, sounds good. I will check it later, Shamik. Have a good day :slight_smile:

It works like a charm. Thank you again, Shamik!