Optimum has recently added support for encoder-decoder models, and I have successfully converted some of them (such as mT5) to ONNX using Optimum. However, I have an encoder-decoder model (Encoder Decoder Models) built from two Roberta-type models that I want to convert to ONNX. Whenever I try, I get an “unsupported” message listing the supported models (I followed the tutorial Convert Transformers to ONNX with Hugging Face Optimum, which seems to be in line with the docs).
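For context, here is a minimal sketch of how such a model is put together, using tiny random Roberta configs (the sizes are arbitrary, just for illustration). My understanding is that the ONNX exporter dispatches on `config.model_type`, which for the combined model is `"encoder-decoder"` rather than `"roberta"`, so the per-architecture Roberta support does not apply:

```python
from transformers import EncoderDecoderConfig, EncoderDecoderModel, RobertaConfig

# Tiny random Roberta configs (hypothetical sizes, illustration only)
enc = RobertaConfig(vocab_size=100, hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=64)
dec = RobertaConfig(vocab_size=100, hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=64,
                    is_decoder=True, add_cross_attention=True)

# Combine the two Roberta configs into one encoder-decoder model
config = EncoderDecoderConfig.from_encoder_decoder_configs(enc, dec)
model = EncoderDecoderModel(config=config)

# The combined model reports its own model_type, not "roberta"
print(config.model_type)      # → "encoder-decoder"
print(type(model).__name__)   # → "EncoderDecoderModel"
```

This is why, as far as I can tell, Roberta appearing in the supported list does not carry over to the combined model.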
I wanted to confirm that, since encoder-decoder models are a bit of a special case (Roberta itself is supported, but the combination of two Robertas as an encoder-decoder model does not seem to be), there is currently no way for me to optimize my model’s inference in production by using an ONNX version of it via Optimum.
Thanks for your help,