openai/whisper-large-v3 ONNX validation

Sorry, it copied as a 'success' checkmark, but it was in fact a cross. Here is a screenshot:

And the code (I am trying the main_export approach from optimum.exporters.onnx):

from transformers import WhisperConfig
from optimum.exporters.onnx import main_export
from optimum.exporters.onnx.model_configs import WhisperOnnxConfig

model_id = "openai/whisper-large-v3"

print("Exporting model as ONNX")

# Build the base ONNX config for the Whisper architecture.
config = WhisperConfig.from_pretrained(model_id)
onnx_config = WhisperOnnxConfig(config, task="automatic-speech-recognition")

# Split it into per-submodel configs for the encoder and the decoder.
encoder_config = onnx_config.with_behavior("encoder")
decoder_config = onnx_config.with_behavior("decoder")

custom_onnx_configs = {
    "encoder_model": encoder_config,
    "decoder_model": decoder_config,
}

main_export(
    model_id,
    output="onnx/out",
    task="automatic-speech-recognition",
    custom_onnx_configs=custom_onnx_configs,
)

I tried to investigate and trace through the library code, but it goes beyond my ability to reason about, especially since I am new to Optimum and ONNX Runtime and only have basic Python skills.

Sometimes I have observed 'atol' differences of up to 20x the 0.001 threshold.
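To put a number on that, I compared the exported encoder against the PyTorch encoder by hand. A minimal sketch, assuming the export landed at onnx/out/encoder_model.onnx and that the encoder input is named input_features (both names are assumptions on my part):

import numpy as np
import onnxruntime as ort
import torch
from transformers import WhisperModel

model_id = "openai/whisper-large-v3"

# Random log-mel features in Whisper's expected shape:
# (batch, num_mel_bins, frames); large-v3 uses 128 mel bins and 3000 frames.
features = np.random.randn(1, 128, 3000).astype(np.float32)

# Reference output from the PyTorch encoder.
model = WhisperModel.from_pretrained(model_id).eval()
with torch.no_grad():
    ref = model.get_encoder()(torch.from_numpy(features)).last_hidden_state.numpy()

# Output from the exported ONNX encoder (path and input name assumed as above).
session = ort.InferenceSession("onnx/out/encoder_model.onnx")
out = session.run(None, {"input_features": features})[0]

# Largest absolute difference, to compare against the 0.001 threshold.
print("max abs diff:", np.abs(ref - out).max())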