Hi @serdarcaglar, thank you for the report! Could you provide a reproducible command / code to make it easier for us to track down the issue?
The ONNX export through `transformers.onnx` will soon rely fully on Optimum Exporters (the package for all things export). Currently, using the stable `optimum==1.5.1`, the export command `python -m optimum.exporters.onnx --model openai/whisper-tiny whisper_tiny_onnx_vanilla` works well.
In the next release of Optimum (which you can hopefully expect sometime next week), the exporter will support exporting the encoder and decoder as two separate files, making it easier to use with ONNX Runtime:

```
python -m optimum.exporters.onnx --model openai/whisper-tiny --for-ort whisper_tiny_onnx
```
This will allow you to export your model and load it directly from a local folder into `ORTModelForSpeechSeq2Seq`.
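In case it helps, here is a minimal sketch of what loading the exported folder could look like once that release is out (assuming the export above produced a local `whisper_tiny_onnx` folder; the silent dummy audio array is just a placeholder for a real recording):

```python
import numpy as np
from transformers import AutoProcessor
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq

# Load the ONNX model from the local folder produced by the export above.
model = ORTModelForSpeechSeq2Seq.from_pretrained("whisper_tiny_onnx")
processor = AutoProcessor.from_pretrained("openai/whisper-tiny")

# Dummy 1-second, 16 kHz audio clip standing in for a real recording.
audio = np.zeros(16000, dtype=np.float32)

inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```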
Compare: