Cannot access gated repo Llama-2-7b-hf

Premise: I have been granted access to every Llama model (the repo page shows "Gated model: You have been granted access to this model").

I'm trying to train a binary text classifier, but as soon as I start training with the meta-llama/Llama-2-7b-hf model, the Space pauses with the following error:

ERROR train has failed due to an exception:
ERROR Traceback (most recent call last):
  File "/app/env/lib/python3.10/site-packages/huggingface_hub/utils/", line 261, in hf_raise_for_status
  File "/app/env/lib/python3.10/site-packages/requests/", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url:

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/env/lib/python3.10/site-packages/transformers/utils/", line 430, in cached_file
    resolved_file = hf_hub_download(
  File "/app/env/lib/python3.10/site-packages/huggingface_hub/utils/", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/app/env/lib/python3.10/site-packages/huggingface_hub/", line 1346, in hf_hub_download
    raise head_call_error
  File "/app/env/lib/python3.10/site-packages/huggingface_hub/", line 1232, in hf_hub_download
    metadata = get_hf_file_metadata(
  File "/app/env/lib/python3.10/site-packages/huggingface_hub/utils/", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/app/env/lib/python3.10/site-packages/huggingface_hub/", line 1608, in get_hf_file_metadata
  File "/app/env/lib/python3.10/site-packages/huggingface_hub/utils/", line 277, in hf_raise_for_status
    raise GatedRepoError(message, response) from e
huggingface_hub.utils._errors.GatedRepoError: 401 Client Error. (Request ID: Root=1-65679e59-1b9ceafa62ea575641e21697;cf8ac436-b333-4f2c-a704-8b64d5d103b9)

Cannot access gated repo for url
Repo model meta-llama/Llama-2-7b-hf is gated. You must be authenticated to access it.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/src/autotrain/", line 280, in wrapper
    return func(*args, **kwargs)
  File "/app/src/autotrain/trainers/text_classification/", line 87, in train
    model_config = AutoConfig.from_pretrained(config.model, num_labels=num_classes)
  File "/app/env/lib/python3.10/site-packages/transformers/models/auto/", line 1048, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/app/env/lib/python3.10/site-packages/transformers/", line 622, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/app/env/lib/python3.10/site-packages/transformers/", line 677, in _get_config_dict
    resolved_config_file = cached_file(
  File "/app/env/lib/python3.10/site-packages/transformers/utils/", line 445, in cached_file
    raise EnvironmentError(
OSError: You are trying to access a gated repo.
Make sure to request access at huggingface.co/meta-llama/Llama-2-7b-hf and pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing token=<your_token>.

INFO Pausing space…
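The traceback boils down to the final OSError: the download request reached the Hub without credentials, even though access was granted, so the fix is to make a token available. A minimal sketch of the precedence the error message implies (the helper name `resolve_hf_token` and the token strings are my own placeholders, not real tokens): an explicitly passed `token=` wins, otherwise the `HF_TOKEN` environment variable is the usual fallback.

```python
import os

def resolve_hf_token(explicit_token=None):
    """Pick the token to authenticate gated-repo downloads with.

    An explicitly passed token takes precedence; otherwise fall back
    to the HF_TOKEN environment variable (a placeholder scheme here,
    mirroring what the error message suggests).
    """
    return explicit_token or os.environ.get("HF_TOKEN")

# In a real script you would then thread the token through, e.g.:
#   from transformers import AutoConfig
#   AutoConfig.from_pretrained("meta-llama/Llama-2-7b-hf",
#                              token=resolve_hf_token())
os.environ["HF_TOKEN"] = "hf_example_token"  # placeholder, not a real token
print(resolve_hf_token())              # → hf_example_token (env fallback)
print(resolve_hf_token("hf_explicit")) # → hf_explicit (explicit arg wins)
```

In an AutoTrain Space the same idea applies: the Space needs a token with read access to the gated repo, not just your browser session.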

Were you able to resolve this?

Unfortunately not; as a workaround I used the sharded version TinyPixel/Llama-2-7B-bf16-sharded instead.

I am facing the same problem. Did anyone figure out how to solve it?

You should run this:

from huggingface_hub import notebook_login

and then:

notebook_login()

Paste your token when prompted. Hope that resolves it.


Log in to your Hugging Face account and go to Settings.
Get an Access Token there.

Run this code:
from huggingface_hub import notebook_login
notebook_login()

It will ask for the token; paste it and you will be able to access the model, In Sha Allah :slight_smile:


this works for me

And if you are NOT in a notebook:

from huggingface_hub import login
login(token="<your_token>")

This works for me, thanks 😄