How to export facebook/mbart-large-50-many-to-many-mmt to ONNX format?

Hi all,
This is the code I used to successfully export the model to ONNX format:

import torch
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_path = "./mbart_large_50_model"
model = MBartForConditionalGeneration.from_pretrained(model_path)
tokenizer = MBart50TokenizerFast.from_pretrained(model_path)

input_text = "This is just simple text"
inputs = tokenizer(input_text, return_tensors="pt")

onnx_path = "./mbart_large_50.onnx"

# Export to ONNX format:
torch.onnx.export(
    model,                                # PyTorch model
    (inputs["input_ids"],),               # model inputs (only the encoder input_ids)
    onnx_path,                            # path for the resulting ONNX file
    input_names=["input_ids"],            # input tensor names
    output_names=["logits"],              # output tensor names
    dynamic_axes={"input_ids": {0: "batch", 1: "sequence"}},
    opset_version=14
)

So I got the mbart_large_50.onnx file and a bunch of other files.
I also successfully verified the resulting ONNX file with the following code:

import onnx
try:
    onnx.checker.check_model("d:/Install/TensorFlow/models/MBART_Base/ONNX/mbart_large_50.onnx")
except onnx.checker.ValidationError as e:
    print(f"The model is invalid: {e}")
else:
    print("The model is valid!")

So far, so good…
Now, this is how I am using this ONNX model for the job of translating between two languages:

from transformers import MBart50TokenizerFast
import numpy as np

tokenizer_path = "d:/Install/TensorFlow/models/MBART_Base/mbart_large_50_model/"
# specify source and target language:
tokenizer = MBart50TokenizerFast.from_pretrained(tokenizer_path, src_lang="en_XX", tgt_lang="hr_HR")

text = "This is just simple text."
inputs = tokenizer(text, return_tensors="np")
input_ids = np.array(inputs["input_ids"], dtype=np.int64)

import onnxruntime as ort
onnx_session = ort.InferenceSession("d:/Install/TensorFlow/models/MBART_Base/ONNX/mbart_large_50.onnx")
onnx_inputs = {"input_ids": input_ids}
onnx_outputs = onnx_session.run(["logits"], onnx_inputs)
logits = onnx_outputs[0]
probabilities = np.exp(logits) / np.sum(np.exp(logits), axis=-1, keepdims=True)  # softmax over the vocabulary (not used below)

generated_tokens = np.argmax(logits, axis=-1)
translated_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print("Translated text:", translated_text)

But for the input text "This is just simple text.", I am getting the following output:
"is just simple text"
As you can see, the text is not translated; it is the same as the input, except that the first word is missing. In fact, this is the same result I get with the TorchScript version of the model.

Wondering, what am I doing wrong?

The issue you’re encountering is likely caused by missing or incorrect handling of the decoder input IDs required for autoregressive sequence generation in translation tasks. In models like MBart, translation involves both an encoder and a decoder. Simply running the ONNX session with only input_ids (the encoder input) does not produce a complete translation, because it bypasses the autoregressive generation loop of the decoder. Here is an example of a manual greedy decoding loop:

from transformers import MBart50TokenizerFast
import numpy as np
import onnxruntime as ort

# Load the tokenizer
tokenizer_path = "d:/Install/TensorFlow/models/MBART_Base/mbart_large_50_model/"
tokenizer = MBart50TokenizerFast.from_pretrained(tokenizer_path, src_lang="en_XX", tgt_lang="hr_HR")

# Input text to translate
text = "This is just simple text."
inputs = tokenizer(text, return_tensors="np")
input_ids = np.array(inputs["input_ids"], dtype=np.int64)

# Load the ONNX model
onnx_model_path = "d:/Install/TensorFlow/models/MBART_Base/ONNX/mbart_large_50.onnx"
onnx_session = ort.InferenceSession(onnx_model_path)

# Autoregressive decoding
decoder_input_ids = np.array([[tokenizer.eos_token_id]], dtype=np.int64)  # MBart starts decoding from the </s> token
max_length = 50  # Set a reasonable maximum output length
generated_ids = []

for _ in range(max_length):
    # Prepare inputs for the ONNX session
    onnx_inputs = {
        "input_ids": input_ids,  # Encoder input IDs (source text)
        "decoder_input_ids": decoder_input_ids  # Decoder input IDs (generated so far)
    }

    # Run inference
    onnx_outputs = onnx_session.run(["logits"], onnx_inputs)
    logits = onnx_outputs[0]

    # Get next token
    next_token_id = np.argmax(logits[0, -1, :])  # Take the last time step's output
    generated_ids.append(next_token_id)

    # Check for end of sentence token
    if next_token_id == tokenizer.eos_token_id:
        break

    # Update decoder input IDs for next iteration
    decoder_input_ids = np.concatenate([decoder_input_ids, [[next_token_id]]], axis=1)

# Decode the generated tokens
translated_text = tokenizer.decode(generated_ids, skip_special_tokens=True)
print("Translated text:", translated_text)

Thanks @Alanturner2,
at the line of code you provided:

onnx_outputs = onnx_session.run(["logits"], onnx_inputs)
I am getting:

onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError]
: 2 : INVALID_ARGUMENT : Invalid input name: decoder_input_ids
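
A quick way to confirm which input names the exported graph actually accepts (just a sketch, using the same path as above) is to list the session's inputs:

import onnxruntime as ort

session = ort.InferenceSession("d:/Install/TensorFlow/models/MBART_Base/ONNX/mbart_large_50.onnx")
# Only the inputs declared at export time show up here
print([inp.name for inp in session.get_inputs()])   # presumably just ['input_ids'] for this export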

One thing here might point the way to a solution. When I use the
MBartForConditionalGeneration model with the PyTorch version of the model, as follows:

model = MBartForConditionalGeneration.from_pretrained(model_path)
generated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["hr_HR"])

the main thing is to pass the forced_bos_token_id parameter to the generate() method.
This is how the target language is marked, according to the model card.
I wonder if, in the case of the ONNX or TorchScript version, I should implement the same thing that model.generate() does.

Honestly, if that’s the case, it seems like too much work to use those versions of the model…huh?
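
If the exported graph did accept decoder_input_ids, forcing the target language would roughly amount to seeding the decoder with the language code right after the start token, and then running the same greedy loop as in the reply above. A rough sketch (untested against this export):

# Assumption: the ONNX graph declares decoder_input_ids as an input, which mine currently does not
hr_id = tokenizer.lang_code_to_id["hr_HR"]                                # target language code, the same id generate() forces
decoder_input_ids = np.array([[tokenizer.eos_token_id, hr_id]], dtype=np.int64)
# ...then append one greedily chosen token per step, exactly as in the loop above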

Maybe I’m getting this error:

INVALID_ARGUMENT : Invalid input name: decoder_input_ids

because of the way I generated the ONNX file? Take a look at my export procedure:

torch.onnx.export(
    model,                                # PyTorch model
    (inputs["input_ids"],),               # model inputs (only the encoder input_ids)
    onnx_path,                            # path for the resulting ONNX file
    input_names=["input_ids"],            # input tensor names
    output_names=["logits"],              # output tensor names
    dynamic_axes={"input_ids": {0: "batch", 1: "sequence"}},
    opset_version=14
)
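
If so, that would explain it: torch.onnx.export() only creates graph inputs for the arguments and input_names you pass, so this export has no decoder_input_ids input at all. A minimal sketch of a re-export that also declares the decoder input (an assumption on my part, not verified against this model; it reuses the same model, inputs and onnx_path objects as above):

# Dummy decoder input for tracing; MBart's decoder starts from </s> (decoder_start_token_id)
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]], dtype=torch.long)

torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"], decoder_input_ids),  # forward(input_ids, attention_mask, decoder_input_ids, ...)
    onnx_path,
    input_names=["input_ids", "attention_mask", "decoder_input_ids"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "decoder_input_ids": {0: "batch", 1: "decoder_sequence"},
    },
    opset_version=14,
)

That said, the optimum library (ORTModelForSeq2SeqLM from optimum.onnxruntime) already takes care of the encoder/decoder ONNX export and gives you a generate() method that accepts forced_bos_token_id, so it should be much less work than wiring the decoding loop up by hand.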