Hosted inference API: Pipeline cannot infer suitable model classes

When I try to use the inference pipeline widget for my public project, I get this error.

The pipeline works locally, though.


Does your model work when you run it with transformers?

from transformers.pipelines import pipeline

# trust_remote_code=True is needed because this model ships custom pipeline code
pipe = pipeline(model="KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru", trust_remote_code=True)
result = pipe("speech.wav")

Could you try updating the model card and adding the correct tags?

  - audio
  - audio-classification
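
For reference, tags on the Hub are set in the YAML front matter at the top of the model card (README.md). A minimal sketch might look like the following (the `language` and `library_name` values here are assumptions for this model, not taken from its actual card):

```yaml
---
language: ru
tags:
  - audio
  - audio-classification
pipeline_tag: audio-classification
library_name: transformers
---
```

The `pipeline_tag` field is what tells the Hub which inference widget and pipeline task to use for the model.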

Hi, @radames

  • Yes, everything works when I run it locally with transformers as described in the README (both pipeline and AutoModel)
  • I have updated the model card

This issue has actually been popping up on a bunch of different models and endpoints. I think there is a deeper issue at play.