I fine-tuned a falcon-7b model using Hugging Face's SFTTrainer. Calling
trainer.model.push_to_hub('hub_name') pushes three files to the Hugging Face repository, including
adapter_model.bin. I'm having trouble finding any documentation that describes how to use these file formats.
At first I found the Hugging Face page "Using Adapter Transformers at Hugging Face", which says to use the class AutoModelWithHeads. However, that class won't import after installing the
adapter-transformers pip package on Google Colab; I think it's deprecated. Also, calling
adapter_name = model.load_adapter(ADAPTER)
model.active_adapters = adapter_name
on a model loaded with AutoModel results in this error:
AttributeError: 'RWForCausalLM' object has no attribute 'load_adapter'
How can I load and apply an adapter_model.bin file? My code was based on the official Falcon-7b Hugging Face tutorial, which links to a Google Colab notebook.