HTTPError: 404 Client Error (Not Found) when calling AutoModelForCausalLM.from_pretrained with public repo "lostdrifter/lora-llama-spell-corrector"

I have a public repo, lostdrifter/lora-llama-spell-corrector (main branch). It is a Llama model fine-tuned with a PEFT LoRA adapter.

When I try to execute the following code:

import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "lostdrifter/lora-llama-spell-corrector"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map="auto")
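
For completeness, the rest of my loading code (which never runs, since the error is raised above) roughly follows the standard PEFT flow of loading the tokenizer and wrapping the base model with the adapter:

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id)  # attach the LoRA adapter weights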

I get the following error:

HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/config.json

My transformers version is 4.28.0.dev0.

When I print the config, it turns out that base_model_name_or_path is None:

PeftConfig(peft_type='LORA', base_model_name_or_path=None, task_type='CAUSAL_LM', inference_mode=True)
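
To confirm where the None comes from, I also pulled the raw adapter_config.json from the repo and checked the field directly (a quick sketch using huggingface_hub, which transformers already depends on):

import json
from huggingface_hub import hf_hub_download

# Download just the adapter config file from the Hub and inspect it
path = hf_hub_download(repo_id=peft_model_id, filename="adapter_config.json")
with open(path) as f:
    adapter_config = json.load(f)
print(adapter_config.get("base_model_name_or_path"))  # prints None when the key is missing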

Would appreciate any help here.

Turns out "base_model_name_or_path": "facebook/opt-6.7b" was not set in my repo's adapter_config.json. It worked after I added it.
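
For anyone hitting the same 404: if you cannot edit the repo, a minimal workaround (using the base model name from my case) is to set the field in code after loading the adapter config:

config = PeftConfig.from_pretrained(peft_model_id)
config.base_model_name_or_path = "facebook/opt-6.7b"  # set manually when adapter_config.json lacks it
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map="auto")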

Closing the topic.