Loading Llama 3

Hello everyone,

I am encountering an error that has been posted several times on different forums, but none of the proposed solutions work in my case, so I am posting it again here:

OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like meta-llama/Meta-Llama-3-8B-Instruct is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'Installation'.
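For reference, the "offline mode" the error message points to simply tells the library to use only files already in the local cache instead of contacting the Hub. A minimal sketch, assuming the model files were downloaded successfully at least once before:

import os
os.environ["HF_HUB_OFFLINE"] = "1"  # must be set before importing transformers

from transformers import AutoTokenizer

# local_files_only=True skips any network call and fails fast
# if the files are not already in the local cache
tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    local_files_only=True,
)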

The code causing the error is as follows:

# Load the model and tokenizer from Hugging Face
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name, token=hf_token)  # hf_token is defined earlier
model = AutoModelForSequenceClassification.from_pretrained(model_name, token=hf_token)

My internet connection is working correctly; could some connection setting be causing the issue?

This is a real problem for me, because I absolutely need to be able to run inference with this model.
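As an aside, unrelated to the connection error: Meta-Llama-3-8B-Instruct is a causal language model, so plain text generation usually goes through AutoModelForCausalLM; AutoModelForSequenceClassification attaches a randomly initialized classification head on top. A minimal sketch of the usual generation path, reusing model_name and hf_token from the snippet above:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name, token=hf_token)
model = AutoModelForCausalLM.from_pretrained(model_name, token=hf_token)

# Generate a short continuation for a simple prompt
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))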

I am happy to provide any further details you might need.

Thank you very much in advance,

OK, I found the cause: I was using a "fine-grained" token; switching to a "write" token fixed it.
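For anyone hitting the same thing: fine-grained tokens only work here if permission to read gated repositories is explicitly enabled when creating the token, whereas classic "read"/"write" tokens have that access by default. A quick way to sanity-check a token (whoami comes from huggingface_hub; hf_token is the same variable as in the snippet above):

from huggingface_hub import whoami

# A valid token returns your account details; an invalid one raises an error.
# Note: this only checks authentication, not per-repository permissions.
print(whoami(token=hf_token))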

