Can't load tokenizer using from_pretrained, `use_auth_token=True` error when token is being used

The model was working fine for a few weeks until yesterday. Calling the Inference API from a Docker container results in the same error as when using the inference widget on the model card. Error below:

Can't load tokenizer using from_pretrained, please update its configuration: xxxx/wav2vec_xxxxxxxx is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'. If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.

We are passing the token in the API call; I have tried both an org token and a user token, to no avail. The code I am using is below:

import requests

# Hosted Inference API endpoint for the (private) model; identifiers redacted
API_URL = "https://api-inference.huggingface.co/models/xxx/wav2vec_xxxxxx"
# User or org access token with permission on the repo; value redacted
headers = {"Authorization": "Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}

def query(filename):
    # Read the raw audio bytes and POST them to the endpoint; the API
    # responds with JSON (either the model output or an error message)
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.post(API_URL, headers=headers, data=data)
    return response.json()
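
For completeness, we then call it roughly like this (the filename here is just a placeholder):

# Example call; "sample.flac" is a placeholder audio file path
output = query("sample.flac")
print(output)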

someone pls help

Okay, it's magically working again. Though a member of our team did add an extra tokenizer.json file that was used by other models built on the same base model as ours, so maybe that helped.

That said, I suspect it was a Hugging Face bug, because the download count on the model card was 91 while the model was broken, and it is now back down to 79, around the count from before the problem started.

I'm still facing this issue, and I too think it's a Hugging Face bug.
I'm trying to access private models through a Space, which was working until yesterday (and no changes were made). I have set my token, which has access to the model, in the Space secrets. I tried it with both write and read permission, with no change. I've also tried hard-coding the token directly, but had no luck; the traceback error is still the same:

huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-64b00069-…)

[…]

OSError: / is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'. If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.

In the documentation I've also seen that the parameter to be used is `token` and not `use_auth_token`. When setting both to my auth token, the traceback suggests using only `token`. I've tried that too, but had no luck.
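
For anyone trying to reproduce, here is a minimal sketch of what I'm describing; the model ID and the HF_TOKEN secret name are placeholders, and the version cutoff is approximate:

import os
from transformers import AutoTokenizer

# In a Space, secrets are exposed as environment variables;
# "HF_TOKEN" is just the name given to the secret (an assumption here)
token = os.environ["HF_TOKEN"]

# Newer transformers releases (roughly v4.31+) accept `token`;
# older releases expect `use_auth_token` instead
tokenizer = AutoTokenizer.from_pretrained("org/private-model", token=token)
# tokenizer = AutoTokenizer.from_pretrained("org/private-model", use_auth_token=token)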

If someone has any idea or is facing the same please let me know!

Yup, ours is broken again.

Facing the same bug. Can't load tokenizer using from_pretrained, please update its configuration: MealMate/2M_Classifier is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'. If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.

I had the same issue; it finally works today.

A member of our team contacted Hugging Face about it; it seemed to be an issue with tokenizers on their side, and it has now been resolved.

We subsequently moved our API to a paid service and that worked (in case someone comes across this in the future).

Having the same issue since 27/07; it's still not working.