ValueError: Can't find 'adapter_config.json' at 'foobar8675/bloom-7b1-lora-tagger'
Traceback:
File "/home/user/.local/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/home/user/app/app.py", line 23, in <module>
config = PeftConfig.from_pretrained(peft_model_id)
File "/home/user/.local/lib/python3.10/site-packages/peft/utils/config.py", line 108, in from_pretrained
raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_pa
The model at foobar8675/bloom-7b1-lora-tagger is public, and I verified that adapter_config.json is present on the model repo. I'm quite confused as to why this is happening, and any help is appreciated.
Hi, I have the same problem with the model bofenghuang/vigogne-chat-7b. While exporting it, it throws ValueError: Can't find 'adapter_config.json' even though the file exists.
For me the issue was authentication. Reading through the stack trace, if you see something like "Invalid username and password" just after a link to the adapter_config.json file, it's likely you have the same issue too.
To fix it, you'll need to log in to the Hub, which can be done programmatically using the following snippet:
from huggingface_hub import login
import os
# Read the token from the environment (set HUGGING_FACE_HUB_TOKEN beforehand,
# e.g. as a secret in your Space) and authenticate against the Hub
access_token = os.environ["HUGGING_FACE_HUB_TOKEN"]
login(token=access_token)
For anyone hitting this issue while doing SFT: in my case it was because I omitted get_peft_model before SFTTrainer. Per the HF docs, get_peft_model wraps the base model and peft_config into a PeftModel. So if you don't call get_peft_model, the model is just a plain AutoModelForCausalLM, not a PeftModel. Therefore, when you call model.push_to_hub, the files uploaded will be model.safetensors and config.json, not adapter_config.json and adapter_model.safetensors.