Error 401 Client Error: Unauthorized for url

This is a gated model, so you probably need to pass a token when downloading it via the hub library, since the token is associated with your account and the gated access you agreed to.
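A common pitfall is assuming the library picks your token up automatically when it actually never reaches the request. A minimal sketch of making the token explicit (the helper name `resolve_token` and its fallback order are my own assumptions, not from any post in this thread):

```python
import os

def resolve_token(explicit_token=None):
    """Hypothetical helper: pick the token that will be sent to the Hub.

    An explicitly passed token wins; otherwise fall back to the
    HF_TOKEN environment variable. Raises if neither looks usable.
    """
    token = explicit_token or os.environ.get("HF_TOKEN")
    if not token or not token.startswith("hf_"):
        raise ValueError(
            "No Hugging Face token found; gated models return "
            "401 Unauthorized without one."
        )
    return token

# Usage with the hub library (requires network access and approved
# gated-repo access, so it is not executed here; repo_id is illustrative):
# from huggingface_hub import hf_hub_download
# hf_hub_download(repo_id="meta-llama/Llama-2-7b-hf",
#                 filename="config.json",
#                 token=resolve_token())
```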


Log in with the token:

from huggingface_hub import notebook_login
notebook_login()

still have the same issue

how are you trying to load the model?

Traceback (most recent call last):
File "/Radiata/venv/lib/python3.10/site-packages/huggingface_hub/utils/", line 259, in hf_raise_for_status
File "/Radiata/venv/lib/python3.10/site-packages/requests/", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url:


Repository Not Found for url:
Please make sure you specified the correct repo_id and repo_type.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

Would you please elaborate on this? How did you find out that the authentication key was not being used? What steps did you take to make it use your authentication key? I have an authentication key but I still get the same 401 error.


I am trying to access this model and running into '401 Client Error: Repository Not Found for url'. I have completed the three steps outlined (two requiring accepting the user agreement after logging in, and the third requiring creating an access token). I have tried accessing the model via the API on as well as using the Python code snippet for the Inference API in my local notebook. Can anyone help?

Why don’t you answer the question?

If someone is using a webui and got this error, set the environment variables HF_USER, HF_PASS, and also HF_TOKEN (create a new token under the Settings menu in your Hugging Face account).
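For example (the variable names come from the post above, where HF_USER/HF_PASS are webui-specific; the values below are placeholders):

```shell
# Set these before launching the webui. HF_TOKEN is also read by the
# huggingface_hub library itself.
export HF_USER="your-username"
export HF_PASS="your-password"
export HF_TOKEN="hf_xxxxxxxxxxxxxxxxxxxx"   # placeholder, not a real token
```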

!huggingface-cli login

or using an environment variable

!huggingface-cli login --token $

I used this to conquer this problem; hope it helps you.

This issue still exists. I have been suffering from it since I first signed up for HF. It appears only on the webui, not on every model page but on some. Meanwhile I can successfully access the model files from Python code without any auth errors.

Currently facing the same issue. Trying to load "tiiuae/falcon-180b-chat" via Jupyter notebook. I already accepted the license T&C and was given access to the gated model, but when I log in using huggingface_hub.notebook_login(new_session=True), I receive the 401 error while loading the tokenizer config file.

This is a new development for me. I used the same login method to download Llama-2 a few weeks back without a problem, but with this new model I am facing this issue and haven't been able to resolve it. Any help would be appreciated.

All of a sudden started getting the 401 Error for loading a model from a HF Space.

Login is successful, HF_TOKEN env is set, I’m also passing the token into the model loading function as an arg… From all my envs (local, cloud, browser) the model is available.
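When the same token works in one environment but not another, it helps to see exactly which token each source would supply. A small sketch (the function and its report format are hypothetical, not an official huggingface_hub API):

```python
import os
from pathlib import Path

def token_fingerprints():
    """Return a short fingerprint of each place a token may come from,
    so two environments can be compared without printing full secrets."""
    def fingerprint(value):
        return value[:7] + "..." if value else None

    env_token = os.environ.get("HF_TOKEN")
    # Default cache location, as reported by huggingface-cli login output
    cache_file = Path.home() / ".cache" / "huggingface" / "token"
    cached = cache_file.read_text().strip() if cache_file.is_file() else None
    return {
        "HF_TOKEN env var": fingerprint(env_token),
        "cached token file": fingerprint(cached),
    }
```

If the two fingerprints differ between your local machine and the Space, the 401 usually comes from whichever source the loading code actually reads.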

Token will not been saved to git credential helper. Pass `add_to_git_credential=True` if you want to set the git credential as well.
Token is valid.
Your token has been saved to /home/user/.cache/huggingface/token
Login successful
Repository Not Found for url:

but the URL works when opened from browser. Loading the model also works from other environments.


@aoliveira He doesn't care anymore because his problem got solved. Selfish.

I’ve been getting this error too. I have my access token set, but still getting unauthorized. I tried a basic api call following the documentation on the website but it always fails. Does anyone have a solution for this?

*Update: It works on one device, but doesn’t work on another. I can’t authenticate any requests on my pc but all the requests authenticate properly on my laptop.

Forgive my ignorance, as I am new to this, but why do I need to talk to HF when my model is local? I’m trying to train a new model, and my args point to all local files, so what is the point of connecting to HF when there is nothing I need or want from there? Is there a flag or something to say I am running locally?

hi @ntisithoj,

Could you please share more about, what model and libraries are you trying to run offline?
Please have a look at how to run Transformers offline; it might be helpful: offline-mode
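The offline-mode setup boils down to two environment variables. A sketch, assuming everything you need is already in the local cache (the script name is a placeholder):

```shell
# Tell huggingface_hub and Transformers never to touch the network;
# anything not already cached locally will raise an error instead of
# trying to authenticate with the Hub.
export HF_HUB_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
python train.py   # placeholder for your local training script
```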

Thank you for the reply. Sadly, I cannot remember the details. I know I was building a model from local images as a test of an animated character, and I was using off-the-shelf tools like the DreamBooth CLI. So, while I can't give you any real details, I gather from your question that I could be running 100% offline, given the correct setup?

When I am trying to push a model, it shows an error.

Using the downloader from text-generation-webui works by copying the access token from into the bash command line, like:

export HF_TOKEN=hf_abcdefg1234567ISeZLaASlRFURhTAAOs && python3 WhiteRabbitNeo/WhiteRabbitNeo-13B-v1

Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access token in your Hugging Face cache folder (~/.cache/ by default):


huggingface-cli login
