Facebook/wav2vec2-large-xlsr-53 on the hub: tokenizer issue

When trying to run facebook/wav2vec2-large-xlsr-53 in the browser widget with an audio snippet, I get an error when loading the model:

> Can't load tokenizer using from_pretrained, please update its configuration: Can't load tokenizer for 'facebook/wav2vec2-large-xlsr-53'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'facebook/wav2vec2-large-xlsr-53' is the correct path to a directory containing all relevant files for a Wav2Vec2CTCTokenizer tokenizer.

Tagging @patrickvonplaten who wrote a blog post on the model.


Maybe @patrickvonplaten or @Narsil knows


I could be wrong, but this model does not include a tokenizer: it's the pretraining base, so it cannot be used directly for inference.

The real models are finetuned per language, I think, e.g. facebook/wav2vec2-large-xlsr-53-portuguese.
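As a quick way to see the difference, one can list the repo files with `huggingface_hub` and check for tokenizer artifacts (the file names checked here, `vocab.json` and `tokenizer_config.json`, are what `Wav2Vec2CTCTokenizer` typically ships with; this is a sketch and needs network access):

```python
# Sketch: check whether a Hub repo contains the tokenizer files that
# Wav2Vec2CTCTokenizer needs (vocab.json / tokenizer_config.json).
from huggingface_hub import list_repo_files


def has_tokenizer_files(repo_id: str) -> bool:
    """Return True if the repo ships tokenizer files."""
    files = set(list_repo_files(repo_id))
    return bool({"vocab.json", "tokenizer_config.json"} & files)


# The pretraining base has no tokenizer, which is why the widget fails.
# A language-specific finetuned checkpoint would return True instead.
print(has_tokenizer_files("facebook/wav2vec2-large-xlsr-53"))  # False
```

If this returns `False`, `AutoTokenizer.from_pretrained` on that repo will raise the error quoted above, and a finetuned checkpoint should be used instead.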


I see. Maybe it's best to disable the widget on that page then? A note explaining that users should use a finetuned model instead may also be useful!

You're totally right @BramVanroy - disabled the widget now 🙂
