Failed to create CUDAExecutionProvider

Hello everyone,
I have been trying to optimize a model (text classification, BERT) for GPU using ORTOptimizer (a sketch of the call is at the end of this post), but first I got this warning message:

[W:onnxruntime:Default, onnxruntime_pybind_state.cc:578 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
2022-11-22 13:46:00.800872459 [W:onnxruntime:, inference_session.cc:1458 Initialize] Serializing optimized model with Graph Optimization level greater than ORT_ENABLE_EXTENDED and the NchwcTransformer enabled. The generated model may contain hardware-specific optimizations, and should only be used in the same environment the model was optimized in.

Then I faced this error (the first assert below is what fails):
assert "CUDAExecutionProvider" in session.get_providers() # Make sure there is GPU 107 assert os.path.exists(optimized_model_path) and os.path.isfile(optimized_model_path) 108 logger.debug("Save optimized model by onnxruntime to {}".format(optimized_model_path))
Environment check:
get_available_providers() output: ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
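
For reference, the optimization call was along these lines (a simplified sketch assuming the current optimum API; the model id and save directory are placeholders, not my exact values):

    # Export a BERT text-classification model to ONNX and optimize it for GPU
    # with optimum's ORTOptimizer (placeholder model id and output directory).
    from optimum.onnxruntime import ORTModelForSequenceClassification, ORTOptimizer
    from optimum.onnxruntime.configuration import OptimizationConfig

    model = ORTModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", from_transformers=True
    )
    optimizer = ORTOptimizer.from_pretrained(model)
    optimization_config = OptimizationConfig(optimization_level=99, optimize_for_gpu=True)
    optimizer.optimize(save_dir="onnx_optimized", optimization_config=optimization_config)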

Hello @serdarcaglar, ONNX Runtime + GPU can be tricky. Although get_available_providers() lists CUDAExecutionProvider as available, ONNX Runtime can still fail to find the CUDA dependencies when initializing the model.
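
As a quick sanity check (a minimal sketch; model.onnx is a placeholder for any exported model you have), compare the build-time list with what a session actually loads:

    import onnxruntime as ort

    # What the installed build *could* use:
    print(ort.get_available_providers())

    # What a real session actually initializes; if the CUDA libraries cannot be
    # loaded, onnxruntime logs the warning above and silently falls back to CPU.
    session = ort.InferenceSession(
        "model.onnx",  # placeholder path
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    print(session.get_providers())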

I would recommend referring to Accelerated inference on NVIDIA GPUs, especially the section “Checking the installation is successful”, to verify that your install is good. I would also recommend following NVIDIA's documentation to install CUDA and cuDNN; it is well written.
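
If you want a quick check from Python that the shared libraries are visible (a rough sketch; the library names assume CUDA 11.x and cuDNN 8.x, which is what onnxruntime-gpu releases of this period require, so adjust them to your versions):

    import ctypes

    # onnxruntime-gpu loads these at session creation; if the dynamic loader
    # cannot find them, creating CUDAExecutionProvider fails with the warning above.
    for lib in ("libcublas.so.11", "libcudnn.so.8"):
        try:
            ctypes.CDLL(lib)
            print(f"{lib}: found")
        except OSError:
            print(f"{lib}: NOT found - check your CUDA/cuDNN install and LD_LIBRARY_PATH")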


Thanks. The problem was solved.

A better error message for checking that CUDA is correctly installed will be present in the next release: Clearer error messages when initilizing the requested ONNX Runtime execution provider fails by fxmarty · Pull Request #514 · huggingface/optimum · GitHub


This solved my problem.