Hi,
I am trying to load the tokenizer for Llama 2 using AutoTokenizer, but I am facing this issue:
```
OSError: Can't load tokenizer for 'meta-llama/Llama-2-7b-hf'. If you were trying to load it from 'Models - Hugging Face', make sure you don't have a local directory with the same name. Otherwise, make sure 'meta-llama/Llama-2-7b-hf' is the correct path to a directory containing all relevant files for a LlamaTokenizer tokenizer.
```
any help is appreciated!
Thanks
Hello, have you solved this problem? I'm having the same issue too.
Yes, we need to pass the access_token (and a proxy, if applicable) for tokenizers as well:
```
tokenizer = AutoTokenizer.from_pretrained(model_id, token=access_token)
```
The access_token can be generated from the Hugging Face UI.
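Putting the pieces together, here is a minimal sketch of the fix. The `resolve_token` helper and the use of an `HF_TOKEN` environment variable are my own conventions, not part of the transformers API; only `AutoTokenizer.from_pretrained(..., token=...)` comes from the thread.

```python
# Sketch: loading a gated tokenizer with an access token.
# resolve_token and the HF_TOKEN env var are illustrative conventions.
import os

def resolve_token(explicit_token=None):
    """Return the explicitly passed token, else fall back to the HF_TOKEN env var."""
    return explicit_token or os.environ.get("HF_TOKEN")

if __name__ == "__main__":
    from transformers import AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-hf"      # repository name on the Hub
    access_token = resolve_token()             # token generated in the Hugging Face UI
    tokenizer = AutoTokenizer.from_pretrained(model_id, token=access_token)
    print(tokenizer("Hello world")["input_ids"])
```

Note that for gated repositories like Llama 2 you also need to have accepted the model's license on the Hub with the account that issued the token; otherwise the same error can appear even with a valid token.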
Hi, what did you put in the variable model_id? I am trying it on Colab and facing this same issue.
model_id should be the repository name from the Hugging Face model hub, for example: meta-llama/Meta-Llama-3-8B