LLAMA-2 Download issues

Hello everyone,
I have been trying to use Llama 2 with the following code:

from langchain.llms import HuggingFaceHub

# settings passed through to the Hub inference endpoint
model_kwargs = {'temperature': 0.6, 'max_length': 64}
llm = HuggingFaceHub(repo_id='meta-llama/Llama-2-7b-chat',
                     huggingfacehub_api_token=hugging_face_token,
                     model_kwargs=model_kwargs)
name = llm('I want to open an Italian restaurant, suggest me a name for this')
print(name)

However, every time I run it I get the following error:

ValueError: Error raised by inference API: meta-llama/Llama-2-7b-chat does not appear to have a file named config.json. Checkout 'https://huggingface.co/meta-llama/Llama-2-7b-chat/2abbae1937452ebd4eecb63113a87feacd6f13ac' for available files.

I have been granted access by both Meta and Hugging Face, but it still fails.
The problem is the same when I use the meta-llama/Llama-2-7b-chat-hf version; in that case it says that I need a PRO subscription.
Is there a way to fix it?
Many thanks.

Llama 2 doesn't seem to be supported by the Inference API, so you may have to pay to use this specific version of the model. Do you already have a PRO account? Otherwise, I don't believe there's any way to bypass the issue.
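One way to see why the config.json error occurs is to list what the repo actually contains, as the error message itself suggests: the plain meta-llama/Llama-2-7b-chat repo ships Meta's original checkpoint files rather than transformers-format weights. A minimal sketch, assuming huggingface_hub is installed and your token has been granted access (the function name and usage are illustrative):

```python
def list_available_files(repo_id, token):
    """List the files a Hub repo actually contains (cf. the error above)."""
    from huggingface_hub import list_repo_files  # deferred so the sketch imports cleanly
    return list_repo_files(repo_id, token=token)

# Usage (requires network access and an authorized token):
# list_available_files("meta-llama/Llama-2-7b-chat", token="hf_...")
```

If config.json isn't in that list, the repo can't be loaded through the standard transformers/Inference API path, which is why the -hf variant exists.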

And what about the normal one, without the -hf suffix?
Many thanks.

You'll have to log in with your hf_token before you can access gated models, and this assumes you've already been granted access to those gated models.


When I try the access from the command line it gives me problems

from huggingface_hub import snapshot_download, login

# login and download
login(token="hf_token")
snapshot_download(repo_id="repo-id", local_dir=dir_path)

Put this code in a Python file and run it. This should mostly work.
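Once the snapshot is on disk, you can point transformers at the local directory. A rough sketch, assuming you downloaded the -hf variant (the plain repo lacks the config.json that transformers needs; directory name and function name here are illustrative):

```python
def load_local_model(local_dir):
    """Load a snapshot-downloaded checkpoint straight from disk."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # deferred imports
    tokenizer = AutoTokenizer.from_pretrained(local_dir)
    model = AutoModelForCausalLM.from_pretrained(local_dir)
    return tokenizer, model

# Usage (assumes the -hf weights were downloaded to ./llama-2-7b-chat-hf):
# tokenizer, model = load_local_model("./llama-2-7b-chat-hf")
```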

Sorry for the late reply.


Hello, many thanks for your reply. Can I ask how I can load the model once the download is done?

I did that, but the code returns the same missing config.json error.

Is this the type of thing we could just fix by manually adding a config?
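A manually added config only helps if it actually matches the checkpoint. As a sketch, transformers can write out a default LlamaConfig, but the defaults are not guaranteed to match the 7B weights, so downloading the -hf repo is usually the safer route (function name here is illustrative):

```python
def write_placeholder_config(local_dir):
    """Write a default LlamaConfig as config.json (may not match the weights)."""
    from transformers import LlamaConfig  # deferred import
    LlamaConfig().save_pretrained(local_dir)
```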