Some nodes were not assigned to the preferred execution providers

I’m trying to export a torch model to ONNX format using optimum, and this warning appeared:
2023-11-09 23:05:31.461232604 [W:onnxruntime:, session_state.cc:1162 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2023-11-09 23:05:31.461260995 [W:onnxruntime:, session_state.cc:1164 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.

Code to reproduce:

from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
from pathlib import Path


model_id = "intfloat/e5-large-v2"
onnx_path = Path("onnx")

# load vanilla transformers and convert to onnx
onnx_model = ORTModelForFeatureExtraction.from_pretrained(model_id, export=True, provider="CUDAExecutionProvider")
tokenizer = AutoTokenizer.from_pretrained(model_id)

Hi @favian888! Setting ONNX Runtime’s log level to verbose as follows helps to get more information:

import onnxruntime as ort
ort.set_default_logger_severity(0)  # 0 = VERBOSE (severity levels run 0..4, VERBOSE..FATAL)

Running your script, I see the following:

ORT optimization- Force fallback to CPU execution for node: /0/auto_model/Gather_1 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
...
Node(s) placed on [CPUExecutionProvider]. Number of nodes: 18

So ONNX Runtime deliberately places these 18 nodes on the CPU because it judges that to be the fastest overall configuration; the warning is informational, not a sign that something went wrong.