Hosted Inference API with SpeechBrain returns error

I’m using the SpeechBrain toolkit to fine-tune XLSR. Everything worked well, but now when I try to use the Inference API, I get this error:

Can't load feature extractor for 'facebook/wav2vec2-large-xlsr-53'. Make sure that: - 'facebook/wav2vec2-large-xlsr-53' is a correct model identifier listed on 'https://huggingface.co/models' - or 'facebook/wav2vec2-large-xlsr-53' is the correct path to a directory containing a preprocessor_config.json file

I observe a similar result with the SpeechBrain models on Hugging Face that use XLSR. So I suspect the problem is not just in my implementation but is probably related to the Hosted Inference API.

I usually present the results of my work through this graphical interface, so any thoughts on solving this issue would be very welcome!

Hello,

The task itself (a sort of masked speech modeling) is not defined for the Inference API. The widget also doesn’t work at the moment. If you fine-tuned it on a downstream task that exists, the Inference API should work on that model. Do you have a link to your model?
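As a quick sanity check outside the widget, the Inference API can also be queried directly over HTTP. Below is a minimal sketch, assuming the model is deployed for automatic speech recognition and you have a Hugging Face API token (the model ID, audio path, and token here are placeholders):

```python
import json
import urllib.request

API_ROOT = "https://api-inference.huggingface.co/models"

def endpoint_for(model_id: str) -> str:
    """Build the Inference API URL for a given model repo ID."""
    return f"{API_ROOT}/{model_id}"

def transcribe(model_id: str, audio_path: str, token: str) -> dict:
    """POST raw audio bytes to the Inference API and return the JSON response."""
    with open(audio_path, "rb") as f:
        audio = f.read()
    req = urllib.request.Request(
        endpoint_for(model_id),
        data=audio,
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # For ASR models the response is typically of the form {"text": "..."}
        return json.load(resp)
```

If this call returns the same feature-extractor error as the widget, the problem is on the API side rather than in the widget itself.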

Hello,

Thank you for your fast response. Here is one of my models: https://huggingface.co/nairaxo/dvoice-swahili. It was inspired by this SpeechBrain recipe.

@nairaxo, are you having any problems with your model on the Inference API?

Yeah. This is what I get when I try to run the model through the Inference API. It usually works, but for the last 3–4 days it hasn’t been working. I don’t know whether it’s related to the “facebook/…” models that I fine-tune or to the Inference API widget, as you say.
This is what I get instead of the speech transcriptions…

Pinging @anton-l here :slightly_smiling_face:

We had an issue with the Inference API, sorry for the bug.

It should be up again!

Thank you all. It works now! :+1:t6: