I converted the Hugging Face Whisper model to ONNX with optimum-cli:

optimum-cli export onnx --model openai/whisper-small.en whispersmallen
I got 4 ONNX files:
decoder_model_merged.onnx
decoder_model.onnx
decoder_with_past_model.onnx
encoder_model.onnx
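For reference, my understanding is that each of these files is a standalone graph that onnxruntime can load on its own. Below is a minimal, untested sketch of running just the encoder; I'm assuming the encoder's input tensor is named input_features (I'd check encoder.get_inputs() to be sure) and that 1.wav has to be resampled to the 16 kHz Whisper expects:

import numpy as np
import librosa
import onnxruntime as ort
from transformers import WhisperFeatureExtractor

# Load 1.wav and resample to 16 kHz
audio, _ = librosa.load("1.wav", sr=16000)

# Turn the waveform into the log-mel features the original HF model consumes
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small.en")
features = feature_extractor(audio, sampling_rate=16000, return_tensors="np").input_features.astype(np.float32)

# Run only the encoder graph; the decoder graphs would then consume encoder_hidden_states
encoder = ort.InferenceSession("whispersmallen/encoder_model.onnx")
(encoder_hidden_states,) = encoder.run(None, {"input_features": features})
print(encoder_hidden_states.shape)  # I expect something like (1, 1500, 768) for whisper-small

I realize the decoder would then have to be run token by token against encoder_hidden_states, which is exactly the part I don't know how to wire up.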
Now I want to write code that loads Whisper (as ONNX) and runs it on a 1.wav file.
- How do I do it?
- When using the HF Whisper model, I just run one model (not 2 separate models: encoder/decoder), so I'm not sure how these separate ONNX files are supposed to fit together (see the sketch after this list for my current guess).
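My current guess is that optimum's ORTModelForSpeechSeq2Seq wrapper hides the encoder/decoder split and exposes a single generate()-style interface again. Something like the following untested sketch might be what I'm after; I'm assuming the exported whispersmallen folder can be passed straight to from_pretrained and that 1.wav needs resampling to 16 kHz:

import librosa
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
from transformers import WhisperProcessor

# Load the exported ONNX files (encoder + decoder) behind one seq2seq interface
model = ORTModelForSpeechSeq2Seq.from_pretrained("whispersmallen")
processor = WhisperProcessor.from_pretrained("openai/whisper-small.en")

# Read 1.wav, resample to 16 kHz, and build the log-mel input features
audio, _ = librosa.load("1.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# generate() should run the encoder once and the decoder autoregressively
generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])

Is this the right direction, or am I expected to drive the encoder and decoder sessions by hand, as in the sketch further up?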