Inference API does not work for my model with a custom tokenizer

I have fine-tuned a seq2seq model using a tokenizer different from the model's original one.

Here’s a link for the private model: popaqy/pegasus-base-qag-bg-finetuned-spelling6-bg

When I try to use the Inference API, it loads for about three minutes and then times out. I suspect the problem lies with the different tokenizer I used, but that would be surprising, because I explicitly passed the tokenizer to the Trainer.
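As far as I know, the Inference API loads the tokenizer from the same Hub repo as the model, so if only the model weights were pushed (or the tokenizer files in the repo don't match the weights), the widget can hang and then fail. A minimal sketch of re-uploading both to the same repo, assuming local checkpoint directories (the `./finetuned-checkpoint` and `./custom-tokenizer` paths are placeholders):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "popaqy/pegasus-base-qag-bg-finetuned-spelling6-bg"

# Load the fine-tuned weights and the custom tokenizer actually used
# in training (placeholder paths; point these at your checkpoints).
model = AutoModelForSeq2SeqLM.from_pretrained("./finetuned-checkpoint")
tokenizer = AutoTokenizer.from_pretrained("./custom-tokenizer")

# Push both to the same repo so the tokenizer files sit next to the
# model weights, which is where the Inference API looks for them.
model.push_to_hub(repo_id)
tokenizer.push_to_hub(repo_id)
```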

Moreover, the model page shows the message "This model could not be loaded by the Inference API."
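One way to narrow this down is to check whether the repo loads at all outside the Inference API. A quick local sanity check, assuming a `text2text-generation` pipeline fits this seq2seq model and that you pass a Hub access token since the repo is private:

```python
from transformers import pipeline

# Load the private repo the same way the Inference API would; if this
# raises locally, the hosted widget will fail for the same reason.
pipe = pipeline(
    "text2text-generation",
    model="popaqy/pegasus-base-qag-bg-finetuned-spelling6-bg",
    token="hf_xxx",  # placeholder: your own read token
)
print(pipe("примерно изречение"))  # placeholder Bulgarian input
```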

@popaqy did you find a solution?
I’m facing the same issue.