I got approved to test Llama 2 and generated an access token. Technically the email approval was for “meta-llama/llama-2-13b-chat”, but what I really want is “meta-llama/llama-2-7b-hf”. Both model cards say “Gated Model: You have been granted access to this model”, though.
When I try to download llama-2-7b-hf, I get a 401 (access denied). When I try to download llama-2-13b-chat, I get an error that config.json is missing, but it otherwise seems to work.
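In case it helps, here’s a minimal way to check which repos the token can actually see, outside of transformers (a sketch using huggingface_hub’s HfApi.model_info; “hf_xxx” is a placeholder for my token):

```
from huggingface_hub import HfApi

api = HfApi()
for repo in ["meta-llama/llama-2-13b-chat", "meta-llama/llama-2-7b-hf"]:
    try:
        api.model_info(repo, token="hf_xxx")  # raises on a 401/403
        print(f"{repo}: token has access")
    except Exception as e:
        print(f"{repo}: {e}")
```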
To me this means my token was only approved for the single Llama 2 model and not the others.
How do I get access to llama-2-7b-hf? There’s no button to request access, since the website thinks I’m already approved.
I’m adding the token to the model-acquisition line as follows:
```
import transformers  # needed for AutoModelForCausalLM

model = transformers.AutoModelForCausalLM.from_pretrained(
    "meta-llama/llama-2-7b-hf",
    torch_dtype="auto",
    trust_remote_code=True,
    token="hf_xxx",  # placeholder for my access token
)
```
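One thing I’m not sure about: older transformers releases pass the token via use_auth_token rather than token, so depending on the installed version the call might need to look like this (my assumption about a possible version mismatch, not a confirmed fix):

```
# Older transformers versions expect use_auth_token instead of token
model = transformers.AutoModelForCausalLM.from_pretrained(
    "meta-llama/llama-2-7b-hf",
    torch_dtype="auto",
    trust_remote_code=True,
    use_auth_token="hf_xxx",  # placeholder token, older-style kwarg
)
```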
I’ve also tried putting these lines above the model-acquisition line and pasting the token at the prompt:
```
from huggingface_hub import login

login()  # paste the token at the interactive prompt
```
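To verify the login actually stored a valid token, I can also check with whoami (a sketch; whoami is part of huggingface_hub):

```
from huggingface_hub import whoami

# Prints my account details if the cached token is valid; raises otherwise
print(whoami())
```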
However, I still get the same error.
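If it helps narrow things down, a single-file download test should isolate whether the problem is the token itself or the transformers call (a sketch using huggingface_hub’s hf_hub_download; “hf_xxx” is again a placeholder):

```
from huggingface_hub import hf_hub_download

# A 401 / gated-repo error here would mean the token lacks access
# to this repo, independent of anything transformers does.
path = hf_hub_download("meta-llama/llama-2-7b-hf", "config.json", token="hf_xxx")
print(path)
```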