Unable to Access Gated Model meta-llama/Llama-3.2-1B Despite Approved Access

Hi Hugging Face Support Team,

I hope this message finds you well. I’m encountering an issue while trying to access the gated model meta-llama/Llama-3.2-1B. Despite having my access request approved, I am still receiving a 403 Forbidden error when attempting to download the model.


Details of the Issue:

  1. Model Name:
    meta-llama/Llama-3.2-1B

  2. Error Message:

    HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/meta-llama/Llama-3.2-1B/resolve/main/config.json
    

    The full traceback includes:

    OSError: You are trying to access a gated repo. Make sure to have access to it at https://huggingface.co/meta-llama/Llama-3.2-1B.
    403 Client Error. (Request ID: Root=1-67ef2363-42b58be57736a28811717ca5;f127327b-3d0a-4879-9332-7afaec78ec7d)
    
  3. Environment:

    • Platform: Google Colab (Free Tier)
    • Libraries Installed:
      • transformers: Latest version (pip install -U transformers)
      • huggingface_hub: Latest version (pip install -U huggingface_hub)
    • Authentication Method:
      • Logged in via huggingface-cli login and also tried passing the token explicitly in the code.
  4. Steps Taken So Far:

    • Verified that my access was granted on the model page: meta-llama/Llama-3.2-1B.
    • Generated a new Hugging Face token and used it in my script.
    • Cleared the cache directory (~/.cache/huggingface/) to ensure no corrupted files were causing the issue.
    • Tested with a public model (bert-base-uncased) to confirm my setup works correctly (see the sanity-check snippet after this list).
  5. Code Used:

    from transformers import AutoTokenizer
    
    tokenizer = AutoTokenizer.from_pretrained(
        'meta-llama/Llama-3.2-1B',
        trust_remote_code=True,
        token="my_huggingface_token_here"
    )
    
  6. Expected Behavior:
    The model files should download successfully since my access has been approved.

  7. Actual Behavior:
    The process fails with a 403 Forbidden error, indicating I do not have access to the repository.
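
As an extra sanity check (a minimal sketch using standard huggingface_hub calls; the token string is a placeholder, as in the snippet above), I also separated "does the token authenticate at all" from "can it reach the gated file":

from huggingface_hub import whoami, hf_hub_download

token = "my_huggingface_token_here"  # placeholder token

# Confirms the token authenticates at all (prints the account name).
print(whoami(token=token)["name"])

# Tries to fetch a single gated file; this reproduces the 403 if the
# token cannot reach the repo.
path = hf_hub_download(
    repo_id="meta-llama/Llama-3.2-1B",
    filename="config.json",
    token=token,
)
print(path)

If whoami fails, the token itself is invalid; if only the download fails, the token is valid but lacks access to the gated repo.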


Additional Information:

  • Hugging Face Username: zihad100123
  • Request ID from Error Message:
    Root=1-67ef2363-42b58be57736a28811717ca5;f127327b-3d0a-4879-9332-7afaec78ec7d
    

Request for Assistance:

Could you please verify the following?

  1. Whether my access to meta-llama/Llama-3.2-1B has been fully granted.
  2. If there are any additional steps I need to take to authenticate or access the model.
  3. Whether there are any known issues with accessing this model in a Google Colab environment.

Any guidance or clarification would be greatly appreciated. Please let me know if you need further details from my side.

Thank you for your time and support!

Best regards,
Latifur Rahman Zihad
Hugging Face Username: zihad100123
Email: latifurrahmanzihad18@proton.me


Possibly this case?

Maybe not that case.


As the picture shows, the model in the gated grouped collection indicates I have been granted access, but whenever I try it on Colab it fails with the error messages above.


Hmm… the known Colab issue is this one.

It is not really free.


Try using this code. It works on Google Colab for me:

from huggingface_hub import login
from transformers import AutoTokenizer

# Your access token with read access
hf_token = ""
login(token=hf_token)

# HF repo ID
repo_id = "meta-llama/Llama-3.2-1B"

tokenizer = AutoTokenizer.from_pretrained(
    repo_id,
    trust_remote_code=True,
)

# ... the rest of your code

Be sure your access token has read access, or that it is a Read token.
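
If you are pasting the token into the notebook, another option on Colab is to store it as a Colab secret and read it at runtime. A minimal sketch, assuming the token is saved under the secret name HF_TOKEN (an example name, added via the key icon in the Colab sidebar):

# Read the token from a Colab secret instead of hard-coding it.
from google.colab import userdata
from huggingface_hub import login

login(token=userdata.get("HF_TOKEN"))  # HF_TOKEN is the assumed secret name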


My token is fine-grained. Should I use a Read token?


Fine-grained is safer if you set it up properly, but it’s a hassle, so I usually use Read tokens.
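
Either kind can work; what matters is whether that particular token can actually open the gated repo. Recent versions of huggingface_hub also have an auth_check helper for testing a token against a repo directly. A small sketch (the token string is a placeholder):

from huggingface_hub import HfApi
from huggingface_hub.utils import GatedRepoError

api = HfApi(token="your_token_here")  # placeholder token

try:
    # Raises GatedRepoError if this token cannot access the gated repo.
    api.auth_check("meta-llama/Llama-3.2-1B")
    print("Token has access.")
except GatedRepoError as err:
    print("Token lacks access:", err)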

I tried every type of token, but it's not working.


Alhamdulillah, I figured out the problem. My fine-grained token had not been given read access to the contents of the public gated repositories I have access to. After enabling that permission on the token, the problem is solved.

