TrOCR: after ONNX quantization with optimum-cli, I'm getting this error

First, I converted my model to ONNX using this command:
!optimum-cli export onnx -m best_M/ --task vision2seq-lm best_M_onnx/ --atol 1e-3

The converted ONNX model worked perfectly when I tested it. Then I tried to quantize it using this command:
!optimum-cli onnxruntime quantize --onnx_model best_M_onnx/ --avx512 -o best_M_quant/

This successfully quantizes the model (the file size shrinks), but when I test the quantized model, I get this error:

```
Traceback (most recent call last):
  File "onnx_quant.py", line 213, in <module>
    test_ort()
  File "onnx_quant.py", line 159, in test_ort
    model = ORTModelForVision2Seq()
  File "onnx_quant.py", line 110, in __init__
    self.encoder = ORTEncoder()
  File "onnx_quant.py", line 38, in __init__
    self.session = onnxrt.InferenceSession(onnx_encoder, providers=["CPUExecutionProvider"])
  File "/home/sai/anaconda3/envs/onnx/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/sai/anaconda3/envs/onnx/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 463, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for ConvInteger(10) node with name '/embeddings/patch_embeddings/projection/Conv_quant'
```

Please help me, and thanks in advance.

Hi @sainithish! Can you share the script you used to test the quantized model, please?