Hello,
I’ve fine-tuned models based on llama3.1, gemma2, and mistral7b. The files are in a local directory with a valid absolute path. But when I try to load a model by passing the path (relative or absolute) of the folder containing all of the fine-tuned model’s files, the Hugging Face library redownloads all the shards instead of loading from disk.
This has worked for me before, with the locally fine-tuned model loading perfectly fine. Am I doing something wrong here?
This is the code I use to load the locally fine-tuned model:
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "/home/ryim/NEJM-AI/zeroShot/models/zeroShot_llama31_sft/",
    device_map="cuda",
    local_files_only=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "/home/ryim/NEJM-AI/zeroShot/models/zeroShot_llama31_sft/",
    local_files_only=True,
)
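In case it helps diagnose, this is roughly how I sanity-check the folder before calling from_pretrained (a rough helper I wrote myself, not part of transformers; I'm assuming a complete checkpoint folder needs a config.json plus at least one weight file, since a typo'd or incomplete path would make the library fall back to fetching from the Hub):

```python
from pathlib import Path

def looks_like_local_checkpoint(model_dir):
    """Return True if model_dir looks like a complete local HF checkpoint.

    Checks only for config.json and at least one weight file
    (*.safetensors or *.bin); it does not validate the contents.
    """
    d = Path(model_dir)
    if not d.is_dir():
        # A nonexistent path is silently treated as a Hub repo id,
        # which would trigger a redownload.
        return False
    has_config = (d / "config.json").is_file()
    has_weights = any(d.glob("*.safetensors")) or any(d.glob("*.bin"))
    return has_config and has_weights
```

The check passes for my folder, which is why the redownload surprises me.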
Thank you